Understanding Fairness in Data-Driven AI for Mental Health

Lead Research Organisation: University of Sheffield
Department Name: Information School

Abstract

RESEARCH PROBLEM
The aims of the research study will be to understand the ethical issues relevant to the introduction of data-driven AI in mental health and to develop fairness measures for the design of an AI application aimed at mitigating inequalities in NHS mental health services. The study's research objectives will be:
a. To understand stakeholders' ethical concerns about the design, development, and consequences of applying data-driven AI in mental health.
b. To consider the extent to which the stakeholders' ethical concerns are generalizable to the population of mental health service providers and users.
c. To develop enhanced fairness measures for the design of an AI application aimed at mitigating inequalities in NHS mental health services.

BACKGROUND
Taking injustice, fairness, and the development of fairness measures as its guiding concerns, the study will enhance existing knowledge about data-driven AI in mental health care by conducting a mixed-methods study that grounds ideas of what constitutes fair data-driven AI in the lived experiences and perspectives of its relevant participants, i.e. ethnic minority service users, information officers, and clinicians. This will principally be accomplished via:
i) the conduct of qualitative interviews with users of mental health services from ethnic minority backgrounds, with information officers, and with clinicians;
ii) the design and implementation of a quantitative survey, informed by the qualitative findings, that considers the generalizability of the results to a broader population of service providers and users in mental healthcare;
iii) the development of fairness criteria and measures on the basis of the qualitative and quantitative findings.

METHODS
The project will be conducted via a combination of qualitative and quantitative research. The intent will be to draw on the qualitative interview data and the quantitative survey data to develop measures for testing the fairness of data-driven AI models for informing decision-making in mental healthcare.

PUBLIC ENGAGEMENT AND IMPACT
Our key motivation for undertaking the research is to explore the ethical role of fairness when developing data-driven AI to support medical decision-making in mental healthcare. This entails the development of fairness measures and a use case that are anchored in the perspectives and experiences of relevant stakeholders, including service users from ethnic minority backgrounds, information professionals, and clinicians. Given this motivation and commitment, we will seek to engage in conversations that explore the prejudices of each of these stakeholder groups towards the data, algorithmic, and doctor-patient decision-making aspects and consequences of adopting data-driven AI, as well as the role of fairness and other related ethical considerations, e.g. equal participation and equal opportunity for patients to contribute knowledge. Via Deliberative Workshops we will seek to facilitate the negotiation and balancing of these different perspectives.

Publications

 
Description An evidence review on understanding fairness in data-driven AI for mental healthcare has been conducted. This has aided in scoping the area, including clinical applications of AI for mental health, understanding the biases that can occur (including data bias, algorithmic bias, and clinician bias), and how the resulting risks of bias can and should be mitigated. Following an exploration of ethical principles of fairness, the measures identified for the discovery and mitigation of bias include the development and application of fairness measures and metrics for evaluating task performance, the conduct of fairness audits, and attention to AI governance and regulation.
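To make the notion of a fairness metric concrete, the sketch below computes two widely used group-fairness measures, the demographic parity difference and the equal-opportunity (true positive rate) gap, on synthetic data. This is a minimal illustrative example in Python using NumPy, not the measures the project will develop; the binary protected attribute, outcomes, and model decisions shown are entirely hypothetical.

# Illustrative sketch only: two common group-fairness metrics.
# These are generic examples, not the project's own fairness measures;
# all data below is synthetic and hypothetical.
import numpy as np

def demographic_parity_difference(y_pred, group):
    # Difference in positive-prediction rates between two groups (coded 0/1).
    rate_a = y_pred[group == 0].mean()
    rate_b = y_pred[group == 1].mean()
    return abs(rate_a - rate_b)

def equal_opportunity_gap(y_true, y_pred, group):
    # Difference in true positive rates (recall) between the two groups.
    tpr = []
    for g in (0, 1):
        mask = (group == g) & (y_true == 1)
        tpr.append(y_pred[mask].mean())
    return abs(tpr[0] - tpr[1])

# Hypothetical outputs for a binary referral decision.
rng = np.random.default_rng(0)
group = rng.integers(0, 2, size=1000)   # protected attribute, binarised for illustration
y_true = rng.integers(0, 2, size=1000)  # observed outcome
y_pred = rng.integers(0, 2, size=1000)  # model decision

print("Demographic parity difference:", demographic_parity_difference(y_pred, group))
print("Equal opportunity gap:", equal_opportunity_gap(y_true, y_pred, group))

In practice, the choice among metrics of this kind would itself be subject to the stakeholder deliberation described above, since different metrics encode different, and sometimes mutually incompatible, conceptions of fairness.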

Following HRA approval, and with our partner Sheffield Health and Social Care NHS Foundation Trust, we have arranged a number of focus group conversations with clinicians and with information officers in April 2023 in order to further explore the topics identified in the evidence review, and the related clinical, ethical and governance issues relevant to evaluating the introduction of data-driven AI within a practical and institutional context of delivering mental healthcare.
Exploitation Route The ways in which the outcomes of the research might be taken forward include: the design and evaluation of ethical AI applications in mental healthcare; understanding the institutional context of application; and involving further stakeholder groups (e.g. clinicians, information officers, mental health service users) who will benefit from the introduction of data-driven AI in mental healthcare but who are also concerned about the associated risks and how these might be mitigated and governed.
Sectors Digital/Communication/Information Technologies (including Software), Healthcare

 
Description It is too early to say, but we anticipate that our focus groups with clinicians and information officers in April 2023 will contribute to conceptual change and an increased understanding among these groups of what an ethical and accountable approach to the design of data-driven AI in mental healthcare consists in.
First Year Of Impact 2023
Sector Digital/Communication/Information Technologies (including Software), Healthcare
Impact Types Policy & public services

 
Description Sheffield Health and Social Care 
Organisation Sheffield Health and Social Care NHS Foundation Trust
Country United Kingdom 
Sector Public 
PI Contribution Subsequent to Health Research Authority approval of our ethics application, we have coordinated the scheduling of focus groups with clinicians and information officers for April 2023. In contributing to the collaboration we have drawn on our review of relevant evidence, together with the approved study protocol, invitation letters, information sheets and consent forms.
Collaborator Contribution Advertising and recruitment of clinicians and information officers for focus groups in April 2023.
Impact We anticipate an empirical paper, subject to the delivery of the focus groups in April 2023. The research is multi-disciplinary, involving information management, mental healthcare, health data science, and computer science.
Start Year 2022