Ensuring the benefits of AI in healthcare for all: Designing a Sustainable Platform for Public and Professional Stakeholder Engagement

Lead Research Organisation: University of Oxford
Department Name: Law Faculty

Abstract

Japan and the UK are both investing heavily in national programmes to accelerate the implementation of AI in healthcare delivery (1,2,3). The perceived benefits of this technology include increased efficacy and reduced costs, achieved by using software and algorithms to 'identify patterns too subtle to be detected by human observation, and to use those patterns to generate accurate insights and inform better decision making' (4,5). While the use of AI in healthcare promises to be transformative, its ability to reach conclusions from large datasets of personal information without direct human input raises a number of concerns about responsibility, transparency and accountability, and public acceptance (6,7).

The use of AI in clinical settings challenges the principles that underpin healthcare delivery: the patient's trust in a system governed by professional codes of conduct; the belief that the primary concern of healthcare is to safeguard well-being; and the legal and regulatory safety nets that have subsequently evolved (8). Expert reports have identified a number of issues surrounding the use of AI, such as: privacy concerns associated with access to clinical data; how it might influence people's legal right to know how decisions about them are made; and the possibility of discrimination against vulnerable populations due to implicit biases in databases and algorithms (8,9). In the UK, concerns have already been expressed about whether access to publicly-generated data by AI companies might erode public trust in the National Health Service (NHS), potentially jeopardising widespread adoption of AI and undermining the basic tenets of healthcare practice (1).

For these reasons, a number of reports have recommended that patients and the wider public, who will be most affected by these technologies, be central to the future development of AI and involved in the design, implementation and governance of software for healthcare, as part of a co-design process (8) that will improve success rates and enhance patient-centred care. To date, very few studies have focused on public concerns regarding AI implementation, or on the best strategies for engaging not just with patients but also with healthcare professionals and wider publics. What is needed is an engagement platform to support a sustained dialogue with a broad range of stakeholders, so that the implementation of AI proceeds in accordance with changing public concerns and for the benefit of humanity.

This research will be situated in research hospitals in the UK and Japan that are pioneering the use of AI systems, allowing us to identify effective models for sustained dialogue and engagement that can inform policy recommendations for the future use of AI in healthcare. We will co-design an innovative programme of research to elicit the views of stakeholders, including patients and the public, regarding: 1) the current and anticipated use of AI in treatment, diagnostic decision-making and precision medicine; 2) the issues that stakeholders perceive will influence the adoption and implementation of AI in healthcare; 3) the types of engagement mechanisms, safeguards and regulatory controls they would like to see in place; and 4) how to develop a platform for engagement that can address issues of trust, responsibility, accountability and transparency, and influence normative practice. This will provide insights for both countries, leading to better sustained public dialogue, as well as informing global policy-making.

Planned Impact

Our research will have considerable impact as it will identify effective models for sustained dialogue and engagement that will benefit a range of stakeholders by bringing different constituents together to stimulate the creation of an interdisciplinary, multi-sectoral ecosystem.

Innovators and industry - beneficiaries include non-academic researchers, technologists and firms (SMEs and large companies) with an interest in developing, translating and commercialising AI to infer health-relevant information, and the products and services derived from it. Research findings will have impact by i) informing early-stage development of AI technologies by providing insight into the needs and valuation practices of a variety of stakeholders and ii) enabling emergent business models to take into account public views that are directly relevant to product design. The mapping of the implementation landscape in the UK and Japan will also provide valuable guidance on the legal and ethical requirements and responsibilities of service providers.

Policy and regulation - beneficiaries will include agencies and regulatory actors responsible for the governance of AI and healthcare systems; lawyers and data protection officers involved in the regulation of personal information online; policy makers promoting digital innovation in healthcare and the potential social and economic benefits of data-driven health research; and politicians and activists involved in public debates about personal health data. Research findings will have an impact by i) providing an evidence base on the social and organisational implications of current research trajectories, translational pathways and challenges to the implementation of AI in health, including the valuation practices applicable to different applications of AI, ii) highlighting significant dimensions of the development and implementation pipeline of AI systems for health, particularly relating to the data needed, to inform policy and regulatory decision-making, iii) informing policy and legislative consideration of the governance of online health-relevant information and its use, iv) sharing findings and best practice between the countries involved (UK and Japan), highlighting priorities in each country and outlining transnational differences in public preferences and legal frameworks, and v) providing an evidence base about the public, legal and ethical underpinnings of a trustworthy and robust governance system for digital sharing of sensitive information.

HCPs and managers - HCPs and health service managers will be represented in expert workshops and data generation phases of the project. Research findings will have an impact by i) ensuring that the development of AI systems for healthcare reflects the needs and values of HCPs, ii) encouraging HCP participation in debate about the value and challenges of integrating AI systems into healthcare, and iii) informing HCPs about the values at stake when health data is shared with or received from external developers of AI, and the challenges associated with the implementation of AI systems. The project will also have an impact on HCPs indirectly, through increased public awareness and discussion of the benefits and challenges of using data collected outside of healthcare in treatment and diagnosis.

Patients, citizens and publics - beneficiaries include patients, charities and patient/citizen organisations. Recruited patients and citizens of both countries will share their perspectives on the use of health-related data in the empirical research. Research findings will have an impact by i) ensuring that the development of AI systems for healthcare reflects the needs and values of patients and citizens variously located in society, and ii) informing and encouraging public participation in debate about the value and challenges of integrating AI systems into healthcare and, more broadly, about permitting data collected outside of healthcare to be used in treatment and diagnosis.
 
Description Landscaping work was conducted on the development and use of AI in healthcare and medicine using analysis of Twitter discourse, discussions with experts, and a scoping literature review. This identified AI technologies being developed, trialled, and implemented in Oxford and Osaka University Hospitals, providing an overview of the status of AI integration and allowing us to source stakeholders to include later in the study, identify dissemination channels, understand motivations to adopt AI in different settings, and evaluate how the design, development, and deployment of AI is being, or could be, governed. Results were triangulated and underpin the topics selected for discussion and the scenario design for subsequent research.
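As a purely illustrative aside, the kind of keyword-based theme tagging sometimes used in discourse landscaping of this sort could be sketched as below. This is a minimal sketch in Python: the theme labels, keywords and sample tweets are hypothetical stand-ins, and it does not represent the project's actual coding frame or data pipeline.

    # Minimal sketch (hypothetical): tagging already-collected tweet texts with
    # healthcare-AI themes via simple keyword matching. Theme labels, keywords,
    # and sample tweets are illustrative, not the AIDE project's coding frame.
    from collections import Counter

    THEMES = {
        "diagnostics": ["diagnosis", "screening", "imaging"],
        "workforce": ["radiologist", "clinician", "replace"],
        "governance": ["regulation", "privacy", "accountability"],
    }

    def tag_themes(text):
        """Return the set of themes whose keywords appear in one tweet."""
        lowered = text.lower()
        return {t for t, words in THEMES.items() if any(w in lowered for w in words)}

    def theme_counts(tweets):
        """Count how often each theme occurs across a corpus of tweet texts."""
        counts = Counter()
        for tweet in tweets:
            counts.update(tag_themes(tweet))
        return counts

    # Example: theme_counts(["AI screening could speed up diagnosis",
    #                        "Will AI replace radiologists? Regulation lags"])
    # counts each of the three themes once for this small sample.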

Current activities were found to focus predominantly on image analysis, diagnostics, and predicting serious events in healthcare, and included implementation in radiology and ultrasound specialties, as well as research into algorithms to make outputs more clinically significant. A few AI initiatives had built networks/consortia between industry, research, and medicine to generate infrastructure for research into, and implementation of, AI technologies for specific clinical applications. Technical and practical concerns included data quality issues, the need for appropriate evaluation mechanisms, and calls for involvement of the clinical community in designing and trialling machine learning tools to identify workflow issues. Results indicated high levels of AI investment and a large volume of related activity, such as conferences, networking events, and healthcare AI research outputs; highlighted the importance of healthcare workforce readiness, including a need for training in AI; and demonstrated concerns about AI replacing professionals alongside opinions dismissing this anxiety. Some held that AI could promote proactive and predictive delivery of healthcare; however, this was countered by uncertainty about the changing roles of healthcare professionals and doubts about the claimed economic and workflow efficiencies to be gained from implementing AI. Negative sentiments captured concerns about the adoption of AI in the absence of reliable evaluation and due consideration of the impact on, and preferences of, those in society whom it may affect ethically and socially.

Working with clinicians leading local development and implementation of AI tools in Oxford and Osaka, and with locally assembled panels of public involvement ambassadors, researchers developed case-study scenarios and interview schedules for conducting public focus groups and interviews with healthcare professionals. These were also used for engagement and involvement activities designed to promote public debate and co-design of future research. Together these activities identified a translational divide between the development of AI systems and clinical deployment and readiness. We conclude that a more human-centred approach to AI implementation in healthcare is needed, one that is inclusive of a range of stakeholders. We recommend the development of strategies for improving transparency around the implementation, data needs, and use of AI systems in healthcare, and mechanisms for promoting a sustained societal dialogue around the use of AI in healthcare. We identify four key areas to support stakeholder involvement that would enhance the development, implementation, and evaluation of AI in healthcare, leading to greater levels of trust. These are: 1) aligning AI development practices with social values; 2) appropriate and proportionate involvement of stakeholders; 3) understanding the importance of trusted AI; and 4) embedding stakeholder-driven governance.
Exploitation Route The research has developed exemplar case studies that will be instructive for healthcare providers, industry, policymakers, educators and regulators seeking ways of developing engagement, and has illustrated the value of including patient voices in the development of research approaches. In particular, the project is one of the first to integrate such activities in Japan. Findings will i) inform early-stage development of AI technologies by offering insight into the needs and valuation practices of a variety of stakeholders, with evidence on the social/organisational implications of research trajectories, translational pathways and challenges to the implementation of AI in health; ii) enable emergent business models to take into account public views directly relevant to product design, as well as the needs and values of HCPs and the challenges they perceive regarding AI integration; iii) highlight significant dimensions of the development and implementation pipeline of AI systems for health, particularly relating to the information needed to inform policy/regulatory decision-making; iv) inform policy and legislative consideration of the application/governance of AI in health; v) describe transnational differences in public/professional preferences; and vi) promote stakeholder participation in debates about the values/challenges of integrating AI into healthcare.
Sectors Communities and Social Services/Policy

Digital/Communication/Information Technologies (including Software)

Education

Healthcare

Government

Democracy and Justice

Pharmaceuticals and Medical Biotechnology

URL https://aideproject.web.ox.ac.uk/
 
Description The AIDE project brought together diverse stakeholders to discuss the development and use of AI technologies in healthcare settings and for health and wellbeing outside of these settings. A project-specific Patient and Public Involvement Panel (PPIP) met regularly alongside researchers to consider ethical, societal and regulatory issues associated with the use of AI in healthcare. They provided invaluable contributions to ensuring the accessibility of project approaches, including the design of case study scenarios and question schedules, as well as reviewing analytical approaches and contributing to the identification and development of future research questions. Building on this experience, CoI Shah hosted workshops with colleagues at University College London to improve patient and public engagement around AI in medicine. These activities brought AI research to underserved communities who seldom engage with research involving the translation of novel technologies to improve healthcare outcomes. Further engagement activities were conducted as part of the ESRC Festival of Social Sciences (2022), targeting school-aged children and their families, which highlighted a widespread interest amongst members of the public in learning about the use of AI in healthcare. Additional funds were secured to: 1 - support a programme of involvement and engagement activities to explore public perceptions, needs and values through online engagement sessions and an in-person workshop (May-June 2023). Public workshop participants contributed to the co-design of governance tools and future research approaches. 2 - explore research findings with subject experts, healthcare managers, industry experts and community leaders through an expert workshop. This provided the opportunity to promote visibility of project activities; learn about governmental policy direction in the area and local governance approaches; and nurture a regional network of experts, several of whom have since contributed to follow-on funding applications. The workshop culminated in the establishment of a dedicated discussion and workspace on the NHSFutures platform to continue discussions and foster further collaboration. These two small projects were delivered in unison, each complementing the other's delivery and allowing the cultivation and sharing of knowledge and ideas across disciplinary and societal divides. Public engagement and involvement activities also benefitted from establishing a strong working relationship with the Oxford Health Biomedical Research Centre's Diversity in Research Group, who worked with us to ensure the inclusion of lesser-heard demographics and foster an inclusive approach. These activities are helping us to develop research networks and build connections with underserved communities for future research and involvement in AI translation. They have led to 4 funding submissions for follow-on research and the establishment of the Oxford Network for Sustainable and Trustworthy AI in Health and Care (OXSTAI).
First Year Of Impact 2019
Sector Healthcare
Impact Types Cultural

Societal

Policy & public services

 
Description In-depth quantitative analysis of Twitter data collected through the AIDE project, for secondary research purposes (above and beyond the scope of the current research programme)
Amount £2,159 (GBP)
Funding ID Law Faculty Strategic Support Fund 2021-31 
Organisation University of Oxford 
Sector Academic/University
Country United Kingdom
Start 09/2021 
End 04/2022
 
Description Knowledge exchange for responsible research and implementation of AI in healthcare
Amount £4,939 (GBP)
Funding ID 0013054 
Organisation University of Oxford 
Sector Academic/University
Country United Kingdom
Start 01/2023 
End 06/2023
 
Description Public and Community Engagement Fund
Amount £5,995 (GBP)
Funding ID 2022/PSF/12885 
Organisation University of Oxford 
Sector Academic/University
Country United Kingdom
Start 01/2023 
End 06/2023
 
Description Collaboration on follow on research proposals 
Organisation Oxford University Hospitals NHS Foundation Trust
Country United Kingdom 
Sector Academic/University 
PI Contribution Discussion of AIDE project objectives and emerging research ideas
Collaborator Contribution Discussion of AIDE project objectives and emerging research interests with experts in AI implementation and stakeholder involvement has led to collaboration on research proposals to UKRI and other funders. Using case studies of ML-assisted image analysis tools at different stages of implementation, we will examine and inform understanding of the sociotechnical factors and processes impacted by AI innovation in healthcare. Within the academic and clinical ecosystem of the Oxford University Hospitals, we will develop an innovative implementation framework to support evidence-based decision making at each stage of an AI system's deployment, considering governance needs and mechanisms for stakeholder involvement, education and representation, with a view to informing national policy on AI implementation in health. Our multidisciplinary approach, combining social science and legal analysis with implementation theory, will identify the needs of diverse stakeholders, including key decision makers and those impacted by changes in clinical practice, to support a responsible and trusted transition to intelligent healthcare. A wider network of partners will ensure relevance to other areas of innovation as AI adoption evolves.
Impact n/a
Start Year 2023
 
Description In Theatre Project - People's views about the use of artificial intelligence in surgery 
Organisation Wellcome / EPSRC Centre for Interventional and Surgical Sciences
Country United Kingdom 
Sector Public 
PI Contribution I was a member of the advisory board for this project. I contributed to the design of the patient- and public-facing workshops about artificial intelligence technologies in surgery. Working with an artist, we co-designed the content for discussion in a workshop that used poetry to explore issues of bowel cancer with people with lived experience. The poetry from the workshop was displayed in the project's final art installation, which drew together several other patient workshops on using AI in surgery and was exhibited in East London. Further to this, I was invited to an expert workshop to build a policy briefing on AI in surgery and tackling inequalities in this context.
Collaborator Contribution Connecting us to underserved communities in East London to engage with them about artificial intelligence and wider systemic issues in healthcare that affect them. Using a novel approach of working with artists to create visual outputs from discussions with members of the public was a successful way to engage with communities that were not familiar with artificial intelligence. This served as a way to inform members of the public about applications of AI in surgery, and also to obtain their views, concerns and perceived benefits. The partnership connected us to other academic researchers in AI for healthcare, and we invited one of them to present at our HeLEX research group webinar series on AI in healthcare.
Impact An interactive pop-up installation which explored how AI and robotics will revolutionise surgery now and in the future - multidisciplinary, involving clinicians, social scientists, engineers, and artists.
Start Year 2022
 
Description Oxford Network for Sustainable and Trustworthy AI in Health and Care 
Organisation University of Oxford
Department Ethox Centre
Country United Kingdom 
Sector Academic/University 
PI Contribution Networks built during the AIDE project stimulated the creation of this cross-Oxford initiative, with selected individuals invited to speak during workshops and events.
Collaborator Contribution AIDE project researchers brought together researchers from across Oxford to collaborate on a strategic bid for an ESRC Centre on responsible AI in healthcare. Although unsuccessful, the network has been maintained, supported by funds held by the Ethox Centre, Nuffield Department of Population Health. It runs regular seminars and workshops to foster collaboration amongst Oxford researchers interested in the development and application of AI technologies in healthcare.
Impact Submitted Grant application: ESRC CENTRE FOR RESPONSIBLE AND SUSTAINABLE RESEARCH AND TRANSLATION OF AI IN HEALTHCARE - unsuccessful at internal institutional review stage.
Start Year 2022
 
Description AIDE Project Oxford Public and Patient Involvement Panel 
Form Of Engagement Activity A formal working group, expert panel or dialogue
Part Of Official Scheme? No
Geographic Reach Regional
Primary Audience Patients, carers and/or patient groups
Results and Impact A core aspect of the research project is that it is co-designed with patients and members of the public, as well as key industry and academic partners. In 2020, we recruited a panel of expert patients to assist the research team in understanding key issues relating to artificial intelligence in healthcare that would potentially impact patients and the public. This Public and Patient Involvement Panel (PPIP) is made up of 6 members located in the London and Oxfordshire regions. The panel's role in the project is to help shape the research agenda from a patient and citizen perspective, review study materials, and identify and connect the researchers with stakeholders to recruit to the project's empirical studies. The panel met with the researchers four times in 2020: the first meeting inducted members to the project, and the following three were virtual knowledge-building workshops in which academics and industry experts in the field of AI discussed with panel members use cases of AI in healthcare, provided examples of how AI may shape healthcare delivery, gave an overview of the regulation of AI, and explored social and ethical issues relating to privacy, agency, bias and discrimination. Feedback from these workshops was generally positive in terms of the content and knowledge gained; however, the virtual delivery was sometimes tiring for participants despite scheduled breaks, and occasional technical difficulties stifled group work, which would have been easier in face-to-face meetings.

In 2021, two co-design workshops were held with the panel to discuss what conditions need to be in place for trustworthy AI in healthcare, the key issues the research needs to investigate from a patient/public perspective, and ideas for recruiting participants to the focus group and interview studies. The panel also reviewed and fed back on the qualitative study topic guides and the scenarios for each stakeholder group to be recruited.
Year(s) Of Engagement Activity 2020,2021,2022
 
Description AIDE Project Symposium, Osaka 
Form Of Engagement Activity Participation in an activity, workshop or similar
Part Of Official Scheme? No
Geographic Reach National
Primary Audience Other audiences
Results and Impact Presentation of project results and discussion of findings at a hybrid symposium, inviting attendance from patients and participants, members of the public, academics, healthcare practitioners, policy experts and funders. Presentations sparked questions and discussion around the use of artificial intelligence in healthcare and its governance and regulatory needs.
Year(s) Of Engagement Activity 2023
 
Description AIDE(UK) Project Advisory Board 
Form Of Engagement Activity A formal working group, expert panel or dialogue
Part Of Official Scheme? No
Geographic Reach National
Primary Audience Professional Practitioners
Results and Impact The project's UK advisory board meets every six months to discuss substantive and in-depth aspects of the research, offer expertise to inform decision making, and steer project delivery. The board is also a valuable resource for connecting the project team with key stakeholders and advising on related activity, helping us to make connections and identify recruitment pathways for empirical research.
Year(s) Of Engagement Activity 2020,2021,2022,2023
 
Description Blog post on Festival of social sciences 
Form Of Engagement Activity Engagement focused website, blog or social media channel
Part Of Official Scheme? No
Geographic Reach International
Primary Audience Public/other audiences
Results and Impact This blog post reported on our activities at the ESRC Festival of Social Sciences and was published on our own web pages, through our Faculty blog, and through the organisers' media channels.
Year(s) Of Engagement Activity 2022
URL https://blogs.law.ox.ac.uk/blog-post/2022/12/esrc-festival-science-pitt-rivers-museum
 
Description Involving patients in research. Presentation by Professor Beverley Yamamoto 
Form Of Engagement Activity A talk or presentation
Part Of Official Scheme? No
Geographic Reach International
Primary Audience Professional Practitioners
Results and Impact Over three sessions, this event considered data strategies to predict risk, prevent and manage disease in individuals and populations, and examined the assistive technologies that may arise from these.

The symposium brought together representatives from academia, industry and the healthcare sector in the UK and Japan to explore this challenge in the context of:
1 - The health data landscape and resources in the UK and Japan
2 - Health data for public health
3 - Health data for clinical decision-making

Professor Yamamoto spoke in the second session.
Year(s) Of Engagement Activity 2020
URL https://acmedsci.ac.uk/file-download/11475041
 
Description Patient group workshop 
Form Of Engagement Activity Participation in an activity, workshop or similar
Part Of Official Scheme? No
Geographic Reach Regional
Primary Audience Patients, carers and/or patient groups
Results and Impact A group of patients and carers recruited through a cancer charity attended three workshops on the theme of Artificial Intelligence in Bowel Cancer. An AIDE project Co-Investigator collaborated with the charity and another academic institution to hold the workshops, which aimed to pilot an engagement project to understand perspectives about AI in cancer screening. The group were invited to immerse themselves in an interactive set of workshops that involved working with an artist to capture the concerns, benefits, and feelings of workshop delegates. The content focused on real examples of machine learning techniques to detect cancer and discussed the implications of implementing these in practice, as well as the social and ethical issues associated with AI in healthcare. The workshops piloted the artistic element, which culminated in a final end-of-project artwork depicting the group's interactions.
Year(s) Of Engagement Activity 2021
 
Description Patient voices help our research investigating how to enable AI for all 
Form Of Engagement Activity Engagement focused website, blog or social media channel
Part Of Official Scheme? No
Geographic Reach International
Primary Audience Other audiences
Results and Impact Blog post on the Centre for Health, Law and Emerging Technology's website, entitled 'Patient voices help our research investigating how to enable AI for all', 05 May 2021. Provides an overview of project involvement/engagement activity.
Year(s) Of Engagement Activity 2021
URL https://www.law.ox.ac.uk/helex/blog/2021/05/patient-voices-help-our-research-investigating-how-enabl...
 
Description Project installation at the ESRC Festival of Social Sciences, Pitt Rivers Museum, Oxford 
Form Of Engagement Activity Participation in an activity, workshop or similar
Part Of Official Scheme? No
Geographic Reach Regional
Primary Audience Public/other audiences
Results and Impact Project-based activities at the ESRC Festival of Social Science, Pitt Rivers Museum, Oxford. Our focus group scenarios were presented as interactive activities targeted towards primary school-aged children, which sparked questions and discussions with children, parents and other adults, who reported increased interest in the subject area. Visitors were also invited to join our project mailing list. 58 children and 58 adults attended our exhibit.
Year(s) Of Engagement Activity 2022
URL https://blogs.law.ox.ac.uk/blog-post/2022/12/esrc-festival-science-pitt-rivers-museum
 
Description Responsible AI in Healthcare - Themed series in the Law and Technology webinar series, University of Oxford Faculty of Law 
Form Of Engagement Activity A talk or presentation
Part Of Official Scheme? No
Geographic Reach International
Primary Audience Other audiences
Results and Impact Responsible AI in Healthcare seminar series, March 2022. The series is run through the Law and Technology Research Group, a large and diverse community of University postholders, researchers, research associates, academic affiliates, academic visitors and students with a shared interest in the legal, regulatory and governance issues raised by new technologies. Interests span from CRISPR and assisted reproductive technologies through COVID track and trace systems, Big Data analytics, AI and machine learning, to renewable energy sources. The group is concerned with the way that new technologies are impacting every aspect of our daily lives, the communities we live in and our future societies. The seminar series is advertised through HeLEX academic networks and those of Research Group members, attracting a wide range of predominantly academic attendees from around the globe.
Year(s) Of Engagement Activity 2022
URL https://www.law.ox.ac.uk/helex/blog/2021/05/patient-voices-help-our-research-investigating-how-enabl...