Developing Trust in Algorithmically Driven Services by Enhancing Explainable and Fair Machine Learning Through Co-Creation and Customer Interactions

Lead Research Organisation: Swansea University
Department Name: College of Science

Abstract

Aims and Intended Impact
Research Questions:
To what extent can co-creation methods with end-users be used to innovate FairML/XAI?
How does interaction design influence the trust end-users have in FairML and/or XAI algorithmic decision making?
What improvements do such innovations bring from an end-user and financial institution point of view?

Outcomes and contributions of the work:
Understanding of state of the art and issues to be addressed
Methods for user-centered FairML/XAI
FairML/XAI innovations evaluated in lab and deployed settings

Evaluating and designing FairML practices is difficult for three broad reasons. First, arriving at a suitable definition of fairness is hard: great thinkers have debated it for centuries, and even in focused, pragmatic contexts there are often complexities and value-based tensions [1]. Second, building a data-driven algorithm that performs fairly across society has proven to be the exception rather than the norm; recent years are littered with examples of how this can go wrong [2]-[5]. Third, the deployment, uptake and impact of such automated decision systems on the end-user are too often a secondary consideration, appraised retrospectively if at all.
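To make the first two difficulties concrete, the short sketch below (an illustration only, using invented numbers rather than any project data) computes two widely used group-fairness criteria, demographic parity and equal opportunity, for the same set of hypothetical loan decisions. The two criteria can favour different groups, which is exactly the kind of value-based tension referred to above.

```python
# Illustrative sketch only: synthetic predictions invented for this example.
import numpy as np

# Two groups (A and B) of hypothetical loan applicants.
# y_true: whether the applicant actually repaid; y_pred: the model's approval decision.
group  = np.array(["A"] * 10 + ["B"] * 10)
y_true = np.array([1,1,1,1,0,0,0,0,0,0,  1,1,1,1,1,1,0,0,0,0])
y_pred = np.array([1,1,1,0,1,0,0,0,0,0,  1,1,1,1,0,0,1,0,0,0])

def approval_rate(g):
    """P(approve | group = g): the quantity compared by demographic parity."""
    return y_pred[group == g].mean()

def true_positive_rate(g):
    """P(approve | would repay, group = g): the quantity compared by equal opportunity."""
    mask = (group == g) & (y_true == 1)
    return y_pred[mask].mean()

for g in ("A", "B"):
    print(f"group {g}: approval rate = {approval_rate(g):.2f}, TPR = {true_positive_rate(g):.2f}")

# With the numbers above, demographic parity favours group B (higher approval rate)
# while equal opportunity favours group A (higher TPR among those who repay):
# the two fairness definitions point in different directions on the same decisions.
```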

PhD yearly plan outline
In the first year, the PhD researcher will immerse themselves in the state of the art in interfaces and interactions with FairML and XAI algorithms. In addition, they will carry out primary research by surveying a range of existing fintech services to document and characterise the approaches currently deployed. This work will also involve focus groups, workshops and surveys with customers and potential customers to understand any perceptions they have of bias or lack of fairness.
Year two will begin with a series of investigations that attempt to draw out potential ways of improving the fairness and justice of the relationship built between FairML-based services and their customers: this work will attempt to tease out the extent to which people believe services are treating them fairly and without bias. It might draw on a range of empirical techniques, such as: wizard-of-oz studies, in which users interact with an apparently automated service (in fact controlled by a human) and are proactively asked whether they understand and trust its decisions and interactions (and how the service could improve them); prototypes that expose components of FairML models or XAI explanations of service decisions to end-users; communities that supplement and contextualise answers and decisions; and A/B-style experiments in which users' responses are analysed and compared.
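As a minimal sketch of the A/B-style analysis mentioned above, assuming trust were measured on a simple ordinal rating scale after each condition, the comparison could look like the following. The condition names, ratings and choice of test are placeholders rather than a fixed study design.

```python
# Minimal sketch of an A/B comparison of trust ratings between two conditions.
# All numbers below are invented placeholders, not study data.
import numpy as np
from scipy.stats import mannwhitneyu

# Condition A: decision shown with an XAI explanation; condition B: decision only.
trust_with_explanation = np.array([6, 5, 7, 4, 6, 6, 5, 7, 5, 6])
trust_without          = np.array([4, 3, 5, 4, 2, 5, 4, 3, 4, 5])

# Ratings are ordinal, so a non-parametric test is a reasonable default choice.
stat, p_value = mannwhitneyu(trust_with_explanation, trust_without,
                             alternative="two-sided")

print(f"median trust with explanation: {np.median(trust_with_explanation):.1f}")
print(f"median trust without:          {np.median(trust_without):.1f}")
print(f"Mann-Whitney U = {stat:.1f}, p = {p_value:.3f}")
```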
Year three will allow for robust follow-up, iteration and refinement of the experimental work carried out in year two, with further exposure to the real-world context. Years one and two will enable the PhD researcher to gather a range of data that can then be used to identify patterns that surface the key factors leading to performance improvements and shaping perceptions of bias or fairness. If successful, the PhD will then be able to demonstrate a suite of tools and/or design approaches that organisations can use to uncover these nuanced biases and to assess and improve their models. Years one and two can include a pilot experiment to test this hypothesis, with year three used for a more substantial test and the full write-up of the thesis.
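As a hypothetical sketch of what one such tool might look like, the snippet below slices a model's decisions by demographic subgroup to surface uneven approval and error rates. The column names, groups and records are invented for illustration; a real audit would use an organisation's own data and its agreed fairness measures.

```python
# Hypothetical subgroup audit: placeholder records, not project data.
import pandas as pd

records = pd.DataFrame({
    "gender":   ["F", "F", "M", "M", "F", "M", "F", "M"],
    "age_band": ["18-30", "31-50", "18-30", "31-50", "18-30", "18-30", "31-50", "31-50"],
    "approved": [0, 1, 1, 1, 0, 1, 1, 1],
    "repaid":   [1, 1, 1, 0, 1, 1, 1, 1],
})

# Approval rate and "wrongly declined" rate per intersectional subgroup.
audit = (records
         .assign(wrongly_declined=lambda d: (d["approved"] == 0) & (d["repaid"] == 1))
         .groupby(["gender", "age_band"])
         .agg(n=("approved", "size"),
              approval_rate=("approved", "mean"),
              wrongly_declined_rate=("wrongly_declined", "mean")))

print(audit)
```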

Planned Impact

The Centre will nurture 55 new PhD researchers who will be highly sought after in technology companies and application sectors where data- and intelligence-based systems are being developed and deployed. We expect that our graduates will be nationally in demand for two reasons: firstly, their training occurs in a vibrant and unique environment exposing them to challenging domains and contexts (that provide stretch, ambition and adventure to their projects and capabilities); and, secondly, because of the particular emphasis the Centre will put on people-first approaches. As one of the Google AI leads, Fei-Fei Li, recently put it, "We also want to make technology that makes humans' lives better, our world safer, our lives more productive and better. All this requires a layer of human-level communication and collaboration" [1]. We also expect substantial and attractive opportunities for the CDT's graduates to establish their careers in the Internet Coast region (Swansea Bay City Deal) and Wales. This demand will dovetail well with the lifetime of the Centre and provide momentum for its continuation after the initial EPSRC investment.

With the skills being honed in the Centre, the UK will gain an important competitive advantage: a strong, talent-based pull drawing industrial investment into the UK as recognition of and demand for human-centred interactions and collaborations with data and intelligence multiplies. Further, those graduates who wish to develop their careers in the academy will be a distinct and needed complement to the likely expanded UK community of researchers in AI and big data, bringing an ability to lead insights and innovation in core computer science (e.g., in HCI or formal methods) allied to the talent to shape and challenge their research agenda through a lens that is human-centred and that involves cross-disciplinarity and co-creation.

The PhD training will be the responsibility of a team that includes research leaders in the application of big data and AI in important UK growth sectors - from health and wellbeing to smart manufacturing - that will help the nation achieve a positive and productive economy. Our graduates will tackle impactful challenges during their training and be ready to contribute to nationally important areas from the moment they begin the next steps of their careers. Impact will be further embedded in the training programme, with cohorts involved in projects that directly engage communities and stakeholders within our rich innovation ecology in Swansea and the Bay region, who will co-create research and participate in deployments, trials and evaluations.

The Centre will also have impact by providing evidence of, and methods for, integrating human-centred approaches within areas of computational science and engineering that have yet to fully exploit their value: for example, while process modelling and verification might seem far removed from the human interface, we will adapt and apply methods from human-computer interaction, one of our Centre's strengths, to develop research questions, prototyping apparatus and evaluations for such specialisms. These valuable new methodologies, embodied in our graduates, will impact on the processes adopted by the wide range of organisations that we engage with and that our graduates join.

Finally, as our work is fully focused on putting the human first in big data and intelligent systems contexts, we expect to make a positive contribution to society's understanding of and involvement with these keystone technologies. We hope to reassure, encourage and empower our fellow citizens, and those globally, that in a world of "smart" technology, the most important ingredient is the human experience in all its smartness, glory, despair, joy and even mundanity.

[1] https://www.technologyreview.com/s/609060/put-humans-at-the-center-of-ai/

Publications


Studentship Projects

Project Reference   Relationship   Related To      Start        End          Student Name
EP/S021892/1                                       01/04/2019   30/09/2027
2440759             Studentship    EP/S021892/1    01/10/2020   30/09/2024   Alexander Blandin