ReEnTrust: Rebuilding and Enhancing Trust in Algorithms

Lead Research Organisation: University of Oxford
Department Name: Computer Science

Abstract

As interaction on online Web-based platforms is becoming an essential part of people's everyday lives and data-driven AI algorithms are starting to exert a massive influence on society, we are experiencing significant tensions in user perspectives regarding how these algorithms are used on the Web. These tensions result in a breakdown of trust: users do not know when to trust the outcomes of algorithmic processes and, consequently, the platforms that use them. As trust is a key component of the Digital Economy where algorithmic decisions affect citizens' everyday lives, this is a significant issue that requires addressing.

ReEnTrust explores new technological opportunities for platforms to regain user trust and aims to identify how this may be achieved in ways that are user-driven and responsible. Focusing on AI algorithms and large scale platforms used by the general public, our research questions include: What are user expectations and requirements regarding the rebuilding of trust in algorithmic systems, once that trust has been lost? Is it possible to create technological solutions that rebuild trust by embedding values in recommendation, prediction, and information filtering algorithms and allowing for a productive debate on algorithm design between all stakeholders? To what extent can user trust be regained through technological solutions and what further trust rebuilding mechanisms might be necessary and appropriate, including policy, regulation, and education?

The project will develop an experimental online tool that allows users to evaluate and critique algorithms used by online platforms, and to engage in dialogue and collective reflection with all relevant stakeholders in order to jointly recover from algorithmic behaviour that has caused loss of trust. For this purpose, we will develop novel, advanced AI-driven mediation support techniques that allow all parties to explain their views, and suggest possible compromise solutions. Extensive engagement with users, stakeholders, and platform service providers in the process of developing this online tool will result in an improved understanding of what makes AI algorithms trustable. We will also develop policy recommendations and requirements for technological solutions plus assessment criteria for the inclusion of trust relationships in the development of algorithmically mediated systems and a methodology for deriving a "trust index" for online platforms that allows users to assess the trustability of platforms easily.

The project is led by the University of Oxford in collaboration with the Universities of Edinburgh and Nottingham. Edinburgh develops novel computational techniques to evaluate and critique the values embedded in algorithms, and a prototypical AI-supported platform that enables users to exchange opinions regarding algorithm failures and to jointly agree on how to "fix" the algorithms in question to rebuild trust. The Oxford and Nottingham teams develop methodologies that support the user-centred and responsible development of these tools. This involves studying the processes of trust breakdown and rebuilding in online platforms, and developing a Responsible Research and Innovation approach to understanding trustability and trust rebuilding in practice. A carefully selected set of industrial and other non-academic partners ensures ReEnTrust work is grounded in real-world examples and experiences, and that it embeds balanced, fair representation of all stakeholder groups.

ReEnTrust will advance the state of the art in terms of trust rebuilding technologies for algorithm-driven online platforms by developing the first AI-supported mediation and conflict resolution techniques and a comprehensive user-centred design and Responsible Research and Innovation framework that will promote a shared responsibility approach to the use of algorithms in society, thereby contributing to a flourishing Digital Economy.

Planned Impact

In terms of knowledge, key communities across AI, computer science and the social sciences will benefit from our research. ReEnTrust will develop new technical insights into the capabilities of advanced AI techniques to support the process of rebuilding trust in online platforms by assisting mediation and conflict resolution. By bringing together techniques from different areas of AI, we will produce novel models that are expected both to provide each of these areas with new applications for their methods, and to open up new avenues for research into trust rebuilding methods.

This core technical work will be embedded in an extensive programme of human factors and Responsible Research and Innovation work that will encompass extensive empirical work with users and stakeholder groups and address the policy and education dimensions of improving the trustability of online algorithm-driven platforms. The results of this work will benefit social science disciplines and applied computing research by providing new methodologies for the responsible design of algorithmically driven systems. They will also benefit government, NGOs, professional and regulatory bodies by providing case studies, design principles, and policy guidelines that can be used to raise awareness and shape their future strategies and activities.

In terms of economic impact, trust rebuilding technologies not only provide important opportunities to de-risk future uses of AI-driven online services, but also open up new directions to exploit untapped business opportunities around socially responsible AI and "trustability services". Work on a "trust index" as specified in our programme of work will be an essential part of this, as it provides an accessible way to communicate our work to industrial stakeholders, but may also open up new business opportunities surrounding trust certification. In order to enable business leaders and future entrepreneurs to benefit from these opportunities, we will use Horizon's network of over 200 commercial partners to disseminate findings and encourage participation in relevant events. We will also explore opportunities for engaging in commercial spin-off activities of our research ourselves.

In terms of societal impact, a major outcome of the project will be new insights into how trustable AI-powered systems should be "collectively engineered" in the future. These outcomes of ReEnTrust will not only improve wellbeing by helping people address and resolve trust breakdown situations, but also provide a mechanism for collective reflection about the values and conduct of both platforms and users, which will foster a culture of accountability and shared responsibility. Beyond the reported anxiety, uncertainty, feelings of disempowerment, defeatism, and loss of faith in articulating societal demands through regulatory and legal institutions, there is currently a real threat that loss of trust in algorithms will turn into adversarial behaviour toward platforms and their providers. Our research will benefit society at large by offering new solutions to rebuild and enhance trust in AI algorithms, helping to prevent the emergence of an "us against them" culture and thus contributing to a healthy, resilient Digital Economy.
 
Description The ReEnTrust project aims to develop new technologies and understandings to enable user trust rebuilding on algorithm-driven online platforms. Fundamental to the ReEnTrust project is its emphasis on co-creation with all relevant stakeholders and on taking a responsible approach to trust-enhancing technology innovation. Therefore, throughout the project, we have ensured an open and inclusive development approach so that stakeholder concerns are embedded in the processes and outcomes of our research. Together with our literature review and close interaction with various user stakeholders, we have made substantial progress on our core research questions, regarding 1) an understanding of user expectations and requirements for rebuilding trust in algorithmic systems; 2) the development of a trust index to track levels of user trust in algorithmic systems; and 3) an understanding of how far user trust can be supported through technological solutions.

To summarise, we have obtained the following key findings over the first 15 months of the project:

(i) Development of two technological tools: 1) an algorithm exploration sandbox tool that assists users in exploring their trust perceptions of algorithms by providing algorithm explanations and the ability to explore algorithmic behaviour by filtering, modifying, and querying algorithms; 2) the technical foundations for a trust mediation tool that will enable users to raise trust-related queries and concerns about AI algorithms in order to influence the behaviour of algorithmic platform providers accordingly.

(ii) Understanding the effectiveness of our technological tools: 1) we identified the importance of algorithm explanations in eliciting users' perceptions of trust; 2) we gained insights into the "breakpoint" at which users lose trust in commercial providers' policies, which were taken forward as key factors to assist users' development of trust; 3) we identified users' tendency to focus on privacy issues when reasoning about the trustworthiness of algorithmic outcomes, which has informed the next stage of development of our mediation tool; and 4) we recognised that how and when to embed trust mediation is critical for users to trust a computational mediation approach.

(iii) Building a grounded understanding of trust: 1) Through both literature reviews and stakeholder workshops, we have identified the difficulty of attributing a definitive, singular definition to the notion of trust - users often invoke a range of factors, such as online privacy concerns, provider reputation, and even worries about the dominance of technology, when talking about issues surrounding trust online. 2) We have found that users' narratives around 'trust' are often not aimed at algorithms themselves, but focus more on the socioeconomic factors and corporate systems within which algorithmic platforms operate, and on related data privacy and transparency issues. 3) Finally, our work has revealed that users exhibit neither 'full' trust nor 'full' distrust, but a fluid continuum between the two, which strongly motivates the need to support trust development in a contextualised, task-specific way.

(iv) Quantifying users' level of trust: Breakdown of trust can lead to feelings of anxiety, uncertainty, and disempowerment. Through carefully designed participatory research, we identified factors that younger and older citizens regarded as important for their development of trust in online systems. Starting from this analysis, we developed a Trust Index, which aims to provide a quantitative measurement of users' level of trust in an algorithmic system. In particular, we are interested in observing changes in study participants' Trust Index scores before and after interaction with our trust rebuilding research tools. The index is currently implemented as a 28-item questionnaire that allows us to measure the effectiveness of our tools and allows users to explore how their consideration of trust when online might affect their behaviour. The index covers three aspects: importance (how important trust is to you), belief (how trusting you are of online services), and context (how much you trust specific factors and features of online services). The index will be further evaluated as part of a large online study.
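
To make the structure of the Trust Index concrete, the short Python sketch below computes per-aspect and overall scores from 28 Likert-scale responses and compares a participant's pre- and post-interaction scores. It is a minimal illustration only: the grouping of items into the three aspects and the 1-5 scoring scheme are assumptions, not the project's actual instrument.

    # Minimal illustrative sketch: the ReEnTrust Trust Index is a 28-item
    # questionnaire covering importance, belief, and context; the item grouping
    # and scoring below are assumptions made for illustration only.
    from statistics import mean

    ASPECTS = {
        "importance": range(0, 9),    # hypothetical items 1-9
        "belief": range(9, 18),       # hypothetical items 10-18
        "context": range(18, 28),     # hypothetical items 19-28
    }

    def trust_index(responses):
        """Compute per-aspect scores and an overall index from 28 Likert (1-5) responses."""
        if len(responses) != 28:
            raise ValueError("expected 28 item responses")
        aspect_scores = {name: mean(responses[i] for i in items)
                         for name, items in ASPECTS.items()}
        return aspect_scores, mean(aspect_scores.values())

    # Example: comparing a participant's scores before and after using the tools.
    pre = [3] * 28
    post = [3] * 18 + [4] * 10  # higher context scores after interaction
    print(trust_index(pre)[1], trust_index(post)[1])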
Exploitation Route We have taken the project forward by leading and participating in several new funding applications, such as the recent EPSRC call for Trustworthy Autonomous Systems hubs and nodes. Prof. Marina Jirotka has been awarded an EPSRC Advanced Fellowship to work on responsible and trustworthy robotics for the future, and Senior Researcher on the team, Dr Jun Zhao, is preparing an EPSRC Early Career Fellowship application on a related topic.
Sectors Digital/Communication/Information Technologies (including Software)

URL https://reentrust.org
 
Description The ReEnTrust project aims to produce tools and best practice guidelines for the development of trust rebuilding technologies that are societally acceptable. Building upon our strong connections to regulatory organisations and policy development agencies, and our extensive public engagement experience, the ReEnTrust project has produced substantial impact in a number of areas.
(i) Raising public awareness
- From the early stages of the project, we identified that the general public's limited understanding of algorithms is a key gap in digital literacy and a major barrier to critiquing the trustworthiness of algorithmic systems. Our team has therefore engaged continually with the general public on algorithm awareness, e.g. through the event jointly organised with ISOC-UK England in December 2018, the Change Forum in London in February 2019, participation in university open day events, and early-stage technology piloting at the Edinburgh Fringe Festival 2019.
- To reach a wider range of audiences, the project team has published in various online outlets, such as engagement blogs (eNurture, February 2020) and university publications such as Inspired Research at the University of Oxford (December 2019) and Safe Space at the University of Nottingham (2019).
- The impact of algorithmic systems on young people is a key focus of our research. We are keen to bring the latest research findings to young people and children and to develop educational materials that are most meaningful for their development of digital literacy in this space. To this end, we actively participate in public engagement and outreach activities, such as university open days (Nottingham open day, Nottingham Ada Lovelace Day in September 2019, Oxford University Taster Day in October 2019) and local science festivals (Oxford Idea Festival in October 2019).
- Another user group we focus on is older citizens aged over 65, who are increasingly exposed to digital technologies but sometimes have less experience and knowledge of them. Through research studies with this age group, we have gained understanding and networks that have enabled joint events with organisations such as the University of the Third Age (2019).
(ii) Engagement with specific stakeholder groups
- The Ethical Hackathon is a format derived from our previous project UnBias, which proved to be a powerful way to provoke deeper thinking about how to develop technologies more responsibly, by encouraging participants to consider the ethical implications of each of their design choices. In ReEnTrust, we have carried out three successful Ethical Hackathons so far, each attended by nearly 50 participants. We are planning another hackathon in May, jointly with various departments in Oxford, including the Business and Law schools, to continue broadening the multidisciplinary impact of our activity.
- As algorithmic systems become increasingly prevalent in all sectors of society, our team members have been invited to present our findings at events attended by stakeholders from more diverse backgrounds, such as law (invited talk at Mishcon de Reya in June 2019) and national policing (Prof Marina Jirotka at Cityforum in February 2020).
Project team members have also been invited to engage in more widely broadcast events, such as BBC Scotland's Love, Life and Algorithms programme and the Royal Institution's Design for Future (February 2020).
(iii) Policy engagement
To maximise the policy impact of our research findings, we have provided responses to national inquiries, participated in key steering committees, and contributed directly to policy documents. Our co-creation with diverse stakeholder groups follows responsible innovation principles and has generated much-needed empirical evidence on prominent and timely algorithmic trust issues for which the government is seeking to develop regulatory approaches or best practice recommendations.
- Our findings on how algorithm explanations contribute to user trust were provided as evidence to the ICO and Alan Turing Institute's national consultation on Explaining AI (February 2020).
- Our findings on algorithmic profiling and biases and how they affect young people's and children's online experience were provided as evidence to the ICO's national inquiry regarding its Draft Code for Age Appropriate Design (May 2019), and supported our joint letter of response (with 5Rights) to the FTC's COPPA Rule consultation.
- Members of the project provide direct input to policy development by serving as committee members, for example in the All-Party Parliamentary Group on Data Analytics (APGDA) and the Royal United Services Institute for Defence and Security Studies, and by leading a landscape review on algorithmic bias commissioned by the Centre for Data Ethics and Innovation.
(iv) Standards development activities
Members of the project work actively in various standardisation groups, collaborating closely with industrial stakeholders. For example, Dr Koene (Nottingham) has been involved in setting up the IEEE-SA P7003 Standard: Algorithmic Bias Considerations (2017 - still active), Dr Zhao (Oxford) and Dr Koene have been invited to join the recent IEEE Standards for Age Appropriate Terms and Conditions working group (2019 - still active), and Prof Rovatsos (Edinburgh) is a member of the Confederation of British Industry AI working group.
First Year Of Impact 2019
Sector Digital/Communication/Information Technologies (including Software)
Impact Types Societal,Policy & public services

 
Description AlgoAware workshop. Ansgar Koene, Brussels
Geographic Reach Europe 
Policy Influence Type Participation in an advisory committee
Impact Contribution to a high-level expert discussion with representatives from the European Parliament, the European Commission, and civil society. The recent "State of the Art" report on automated decision-making by AlgorithmWatch (https://www.algoaware.eu/state-of-the-art-report/) was launched at this event. The UnBias project is mentioned as an example of an initiative by academic research groups to develop policy and technical tools (page 10), with the UnBias "Fairness Toolkit" cited as a "clear example". The report also includes a summary of the aims of the UnBias project (page 57) and the elements of the "Fairness Toolkit" (with references to the relevant webpages).
URL https://www.algoaware.eu/2018/11/14/29-jan-2019-automating-society-taking-stock-of-automated-decisio...
 
Description Ansgar Koene participated in a round table discussion for the Royal United Services Institute (RUSI) for Defence and Security Studies, for the Briefing Paper "Data Analytics and Algorithmic Bias in Policing"
Geographic Reach National 
Policy Influence Type Participation in an advisory committee
URL https://assets.publishing.service.gov.uk/government/uploads/system/uploads/attachment_data/file/8317...
 
Description CDEI Landscape Summary: Bias in Algorithmic Decision-Making cited in Royal United Services Institute (RUSI) for Defence and Security Studies Briefing Paper
Geographic Reach National 
Policy Influence Type Citation in other policy documents
URL https://assets.publishing.service.gov.uk/government/uploads/system/uploads/attachment_data/file/8317...
 
Description Comments on the European Data Protection Board's Guidelines 4/2019 on Article 25 Data Protection by Design and by Default
Geographic Reach Europe 
Policy Influence Type Participation in a national consultation
URL https://nottingham-repository.worktribe.com/preview/3774957/comments_on_edpb_guidelines_on_a_25_dpbd...
 
Description EPSRC - UKRI Artificial Intelligence and Public Engagement workshop
Geographic Reach National 
Policy Influence Type Participation in an advisory committee
 
Description Input of research evidence towards a POSTNote Online Safety Education, UK Parliament
Geographic Reach National 
Policy Influence Type Implementation circular/rapid advice/letter to e.g. Ministry of Health
URL https://researchbriefings.parliament.uk/ResearchBriefing/Summary/POST-PN-0608
 
Description Joint Comments to ICO's Draft Code of Age appropriate design from ReEnTrust Oxford Team
Geographic Reach National 
Policy Influence Type Participation in a national consultation
URL https://ico.org.uk/about-the-ico/ico-and-stakeholder-consultations/age-appropriate-design-a-code-of-...
 
Description Joint letter of response (with 5Rights) to the FTC regarding the COPPA Rule
Geographic Reach Multiple continents/international 
Policy Influence Type Implementation circular/rapid advice/letter to e.g. Ministry of Health
Impact Children's Online Privacy Protection Act (COPPA), 15 U.S.C. §§ 6501-6506 (http://uscode.house.gov/view.xhtml?req=granuleid%3AUSC-prelim-title15-section650...). This Act protects children's privacy by giving parents tools to control what information is collected from their children online. The Act requires the Commission to promulgate regulations requiring operators of commercial websites and online services directed to children under 13, or knowingly collecting personal information from children under 13, to: (a) notify parents of their information practices; (b) obtain verifiable parental consent for the collection, use, or disclosure of children's personal information; (c) let parents prevent further maintenance or use, or future collection, of their child's personal information; (d) provide parents access to their child's personal information; (e) not require a child to provide more personal information than is reasonably necessary to participate in an activity; and (f) maintain reasonable procedures to protect the confidentiality, security, and integrity of the personal information. To encourage active industry self-regulation, the Act also includes a "safe harbor" provision allowing industry groups and others to request Commission approval of self-regulatory guidelines to govern participating websites' compliance with the Rule. Link to the FTC's call for public comment: https://www.regulations.gov/document?D=FTC-2019-0054-0001
URL https://uscode.house.gov/view.xhtml?req=granuleid%3AUSC-prelim-title15-section6501&edition=prelim
 
Description Led work on government commissioned Landscape Summary on Bias in Algorithmic Decision-Making for Centre for Data Ethics and Innovation
Geographic Reach National 
Policy Influence Type Gave evidence to a government review
URL https://www.gov.uk/government/publications/landscape-summaries-commissioned-by-the-centre-for-data-e...
 
Description Member of the Steering Committee for the APGDA Policy Connect Report on Trust, Transparency, Tech on Data and Technology Ethics
Geographic Reach National 
Policy Influence Type Participation in an advisory committee
URL https://www.policyconnect.org.uk/appgda/sites/site_appgda/files/report/454/fieldreportdownload/trust...
 
Description ReEnTrust Joint Comments to the ICO and The Turing Institute's Consultation on Explaining AI
Geographic Reach National 
Policy Influence Type Participation in a national consultation
URL https://ico.org.uk/about-the-ico/ico-and-stakeholder-consultations/ico-and-the-turing-consultation-o...
 
Title Artificial database generated for hotel recommendation algorithm scenario (led by the Edinburgh team) 
Description For the development of our research tool prototype, the Algorithm Playground, we needed a large fictional dataset of users, hotels, and their booking history. To generate this dataset, we reused data from an existing hotel database containing 100+ hotels in Paris and generated thousands of fictional user profiles using Mockaroo, a synthetic data generation tool (see the illustrative sketch after this entry). 
Type Of Material Database/Collection of data 
Year Produced 2019 
Provided To Others? No  
Impact This dataset has been critical for the development of our prototype platform and for exploring the impact of algorithm explanations on users' perception of trust. We do not have any external impact yet. 
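
As a rough illustration of this kind of synthetic data generation, the sketch below builds a small fictional dataset of users, hotels, and bookings in plain Python. The project itself used Mockaroo together with an existing database of Paris hotels, so the field names, sizes, and distributions shown here are illustrative assumptions only.

    # Illustrative sketch of generating a fictional users/hotels/bookings dataset.
    # The actual project data was produced with Mockaroo from an existing database
    # of 100+ Paris hotels; the fields and distributions here are assumed.
    import random
    import uuid

    random.seed(42)  # reproducible fictional data

    def make_users(n=1000):
        return [{"user_id": str(uuid.uuid4()),
                 "age": random.randint(18, 80),
                 "budget": random.choice(["low", "mid", "high"])}
                for _ in range(n)]

    def make_hotels(n=100):
        return [{"hotel_id": f"paris-{i:03d}",
                 "stars": random.randint(1, 5),
                 "price_per_night": round(random.uniform(50, 400), 2)}
                for i in range(n)]

    def make_bookings(users, hotels, n=5000):
        # Each fictional booking links a random user to a random hotel with a rating.
        return [{"user_id": random.choice(users)["user_id"],
                 "hotel_id": random.choice(hotels)["hotel_id"],
                 "rating": random.randint(1, 5)}
                for _ in range(n)]

    users, hotels = make_users(), make_hotels()
    bookings = make_bookings(users, hotels)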
 
Description Chair of IEEE-SA P7003 Standard: Algorithmic Bias Considerations - Dr Ansgar Koene 
Organisation Institute of Electrical and Electronics Engineers (IEEE)
Country United States 
Sector Learned Society 
PI Contribution I proposed the P7003 standard for Algorithmic Bias Considerations as part of the IEEE Global Initiative for Ethics of Autonomous and Intelligent Systems and now chair the IEEE working group for the development of the P7003 Standard for Algorithmic Bias Considerations. The work involves facilitating a monthly conference call, promoting the work to attract working group participants, and providing the general outline for the standards documents. Liz Dowthwaite from the UnBias/ReEnTrust project is the P7003 secretary, supporting my work by helping with the monthly agenda and conference minutes.
Collaborator Contribution The IEEE Standards Association (IEEE-SA) provides the liaison officer to assist with coordination of the P7003 working group with the wider IEEE Standards activities. IEEE-SA also provides the web-conferencing facilities (Join.me) and online document hosting/collaboration space (iMeet). The IEEE Global Initiative has been facilitating media and policy engagement activities (e.g. the ACM/IEEE panel on Algorithmic Transparency and Accountability in Washington DC, where I was a panel member). From 30 September to 3 October, IEEE-SA hosted a workshop for the P70xx standards working groups in Berlin (IEEE-SA paid for travel and accommodation).
Impact Dr Ansgar Koene was contacted by the ACM to participate in a panel discussion on Algorithmic Transparency and Accountability in Washington DC on 14th September 2017. https://www.acm.org/public-policy/algorithmic-panel. There was a write-up of the panel discussion, with follow-up interview with Ansgar "IEEE and ACM Collaborations on ATA", published in AI Matters: A Newsletter of ACM SIGAI, on 1 October 2017. Ansgar's role as chair and a summary of the P7003 Standard were mentioned in the article: "The Ethics of Artificial Intelligence for Business Leaders - Should Anyone Care?", in TechEmergence, on 9th December 2017. Ansgar was invited to publish a short paper on the IEEE P7003 activity in IEEE Technology and Society Magazine, "Algorithmic Bias: Addressing Growing Concerns", IEEE Technology and Society Magazine, Volume: 36, Issue: 2, June 2017. [DOI: 10.1109/MTS.2017.2697080]. An interview with Ansgar about P7003 was published in The Institute (The IEEE news source), on 12 September 2017, "Keeping Bias From Creeping Into Code".
Start Year 2017
 
Description ISOC-UK "User Trust" (2018 - Still Active) 
Organisation Internet Society (ISOC)
Country United States 
Sector Charity/Non Profit 
PI Contribution Our collaboration with ISOC-UK was developed in 2018 during our work on the UnBias research project (EP/N02785X/1), when we ran two workshops hosted by ISOC-UK. The first workshop was a panel discussion on "Multi-Sided Trust in Multi-Sided Platforms" (https://unbias.wp.horizon.ac.uk/2018/04/13/isoc-uk-horizon-der-panel-for-multi-sided-trust-on-multi-sided-platforms/). The second workshop was a demonstration of the UnBias "Fairness Toolkit" with a session using the Awareness Cards (https://unbias.wp.horizon.ac.uk/2018/11/23/workshop-on-algorithmic-awareness-building-for-user-trust-in-online-platforms/). More recently, ISOC has invited the ReEnTrust team to run an ISOC-hosted event (an option being considered for future engagements).
Collaborator Contribution Invitation to ReEnTrust to run an ISOC-hosted event (e.g. workshops).
Impact The collaboration is multi-disciplinary due to the nature of ISOC as a multi-disciplinary membership organisation with members from a wide range of industry, civil-society and academic groups who are interested in an open and free internet.
Start Year 2018
 
Description Participation in CBI AI Working Group - Prof Michael Rovatsos 
Organisation Confederation of British Industry (CBI)
Country United Kingdom 
Sector Private 
PI Contribution Provided input to AI ethics guidelines for CBI membership.
Collaborator Contribution Developed AI ethics guidance for their membership.
Impact Further roundtables with CBI and their members and other stakeholders, e.g. Centre for Data Ethics and Innovation.
Start Year 2019
 
Description Participation in the IEEE-SA P2089: Standards for Age Appropriate Terms and Conditions 
Organisation Institute of Electrical and Electronics Engineers (IEEE)
Country United States 
Sector Learned Society 
PI Contribution In late 2019, I was invited by the 5Rights Foundation to join the IEEE-SA Working Group developing Standards for Age Appropriate Terms and Conditions. Mainly, I am a member of the 10-member subgroup developing an age-appropriate presentation for children and young people. The group involves participants from other research institutions, industry, and the third sector. So far, the work has involved participating in quarterly conference calls to review existing approaches, identify gaps, and develop new approaches in support of the group's goal.
Collaborator Contribution The IEEE Standards Association (IEEE-SA) provides the liaison officer to assist with the kick-off of the working group and coordination with the wider IEEE Standards activities. IEEE-SA also provides the web-conferencing facilities (zoom) and online document hosting/collaboration space (iMeet).
Impact The working group is highly multi-disciplinary, including not only academics from design, computer science, children's development and social sciences, but also legal practitioners, policy and public service providers.
Start Year 2019
 
Description Secretary of IEEE-SA P7003 Standard: Algorithmic Bias Considerations 
Organisation Institute of Electrical and Electronics Engineers (IEEE)
Country United States 
Sector Learned Society 
PI Contribution I support the work of the P7003 standard, part of the IEEE Global Initiative for Ethics of Autonomous and Intelligent Systems, by organising and minuting the monthly working group meetings. The group (the IEEE working group for the development of the P7003 Standard for Algorithmic Bias Considerations) is chaired by Ansgar Koene, also a member of the UnBias project.
Collaborator Contribution The IEEE Standards Association (IEEE-SA) provides the liaison officer to assist with coordination of the P7003 working group with the wider IEEE Standards activities. IEEE-SA also provides the web-conferencing facilities (WebEx/join.me) and online document hosting/collaboration space (iMeet). The IEEE Global Initiative has been facilitating media engagement and policy engagement activities.
Impact Minutes of all meetings are available at http://sites.ieee.org/sagroups-7003/
Start Year 2017
 
Title Algorithmic Playground 
Description A web-based platform developed to enable citizens to play with algorithm inputs and observe changes in the results. It focuses on a hotel booking scenario in which different recommender system algorithms, including content-based and collaborative filtering, can be explored and compared (see the illustrative sketch after this entry). 
Type Of Technology Webtool/Application 
Year Produced 2019 
Impact This software was used as a key research tool in various engagement activities with users to gather quantitative and qualitative data. The experiments and workshops conducted with it have shaped the methodological direction of technology development in the project. We plan to make a future, revised version of the system available online. 
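
To illustrate the two families of recommendation algorithms mentioned in this entry, the sketch below contrasts a simple content-based recommender with a collaborative-filtering one on toy hotel data. The data shapes, features, and cosine-similarity scoring are assumptions for illustration and do not reflect the actual Algorithmic Playground implementation.

    # Illustrative sketch of content-based vs collaborative-filtering recommendation
    # for a hotel booking scenario; toy data and similarity measures are assumed.
    import numpy as np

    # ratings[u, h]: rating user u gave hotel h (0 = not rated)
    ratings = np.array([[5, 0, 3, 0],
                        [4, 0, 0, 2],
                        [0, 5, 4, 0]], dtype=float)
    # hotel_features[h]: e.g. [stars/5, price band, near city centre]
    hotel_features = np.array([[1.0, 0.8, 1.0],
                               [0.6, 0.3, 0.0],
                               [0.8, 0.6, 1.0],
                               [0.4, 0.2, 0.0]])

    def cosine(a, b):
        return a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-9)

    def content_based(user, k=2):
        """Score unrated hotels by similarity to hotels the user rated highly."""
        liked = hotel_features[ratings[user] >= 4]
        profile = liked.mean(axis=0)
        scores = [(h, cosine(profile, f)) for h, f in enumerate(hotel_features)
                  if ratings[user, h] == 0]
        return sorted(scores, key=lambda x: -x[1])[:k]

    def collaborative(user, k=2):
        """Score unrated hotels using ratings from users with similar rating patterns."""
        sims = np.array([cosine(ratings[user], ratings[v]) if v != user else 0.0
                         for v in range(ratings.shape[0])])
        pred = sims @ ratings / (sims.sum() + 1e-9)
        scores = [(h, pred[h]) for h in range(ratings.shape[1]) if ratings[user, h] == 0]
        return sorted(scores, key=lambda x: -x[1])[:k]

    print(content_based(0), collaborative(0))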
 
Description Industry Forum roundtable on 'Restoring trust in digital technology' 
Form Of Engagement Activity A formal working group, expert panel or dialogue
Part Of Official Scheme? No
Geographic Reach National
Primary Audience Industry/Business
Results and Impact Marina Jirotka was invited to participate in a panel discussion to give her views on restoring trust in digital technologies. This was an invitation-only event organised by the Industry Forum, a high-profile group that seeks to promote constructive dialogue between public policy makers, industry operating in the UK, and leading commentators. Details of the panel event can be read here: http://www.industry-forum.org/event/restoring-trust-in-digital-tech/. Marina introduced ideas about what it means to trust digital systems and the capacity for responsible practices to foster trust. This led to discussion amongst those present about what kinds of practices can be put in place at the levels of design and policy.
Year(s) Of Engagement Activity 2019
URL http://www.industry-forum.org/event/restoring-trust-in-digital-tech/
 
Description "How does trust affect your experience of the Internet?" 
Form Of Engagement Activity Participation in an activity, workshop or similar
Part Of Official Scheme? No
Geographic Reach Local
Primary Audience Public/other audiences
Results and Impact Fifteen participants aged 16-25 took part in this activity, run at Nottingham City Council and open to the general public.
It was a hands-on activity aimed at exploring young people's online experiences when interacting with algorithm-driven platforms. Participants shared their views and experiences and expressed their willingness to take part in related activities (e.g., some have become members of the ReEnTrust advisory group).
Year(s) Of Engagement Activity 2019
 
Description "they don't really listen to people". Young people's concerns and recommendations for improving online experiences.Helen Creswick, Liz Dowthwaite, Ansgar Koene, Elvira Perez Vallejos, Virginia Portillo, Monica Cano and Christopher Woodard 
Form Of Engagement Activity A talk or presentation
Part Of Official Scheme? No
Geographic Reach International
Primary Audience Professional Practitioners
Results and Impact This presentation highlighted young people's (13-17 years old) concerns in relation to online issues of algorithmic bias, in particular users' lack of agency and feelings of disempowerment in young people's internet use, exacerbated by their experiences of online terms and conditions. The audience engaged with interesting questions about the applicability of these findings to the adult population.
Researchers were invited to submit an enhanced version of the research paper to a special issue of the Journal of Information, Communication and Ethics in Society, which has been recently accepted for publication but is not available online yet.
Year(s) Of Engagement Activity 2018
URL https://easychair.org/smart-program/ETHICOMP2018/2018-09-24.html#talk:78107
 
Description 'Adult Education in a Digital World'- Helen Creswick and Elvira Perez Vallejos 
Form Of Engagement Activity A talk or presentation
Part Of Official Scheme? No
Geographic Reach Regional
Primary Audience Public/other audiences
Results and Impact Event aimed to celebrate 100 years of adult education and to question how the digital age will influence the future of adult education. Attendees were aged 65 years old and over.
Our presentation outlined the challenges that older adults may face in establishing mechanisms that help them to build trust on the Internet. Members of the audience engaged with our research team, asking interesting questions and expressing interest in taking part in future related activities.
Year(s) Of Engagement Activity 2019
 
Description Algorithm Playground study with hotel booking scenario 
Form Of Engagement Activity Participation in an activity, workshop or similar
Part Of Official Scheme? No
Geographic Reach National
Primary Audience Public/other audiences
Results and Impact We organised a study of the role of explanations in trust while using a booking website. The experiment used the Algorithm Playground we implemented; browsing activity was logged in a dedicated database and analysed alongside the questionnaire answers submitted by around 200 participants. Participant responses guided the subsequent development of further technological tools and experimental methodology, in particular with a new emphasis on explanations.
Year(s) Of Engagement Activity 2019
 
Description Algorithm, trust and data regulation 
Form Of Engagement Activity Participation in an open day or visit at my research institution
Part Of Official Scheme? No
Geographic Reach International
Primary Audience Industry/Business
Results and Impact A group of 25 Chinese academic, business, and government representatives made a one-day research visit to Oxford. I was invited to present the ethics and AI research carried out in our research team and project. The presentation inspired many follow-up discussions about the process of data regulation and algorithm transparency policy development in the UK and Europe. Participants in the audience mentioned that these are key lessons for them to take on board in relation to corresponding developments in China.
Year(s) Of Engagement Activity 2019
 
Description Algorithmic awareness building for User Trust in online platforms. ISOC-UK England, London 
Form Of Engagement Activity Participation in an activity, workshop or similar
Part Of Official Scheme? No
Geographic Reach National
Primary Audience Third sector organisations
Results and Impact Interactive session aimed at exploring awareness building around the use of algorithms in online platforms, through the use of the UnBias Awareness Cards. Activities resulted in critical and civic thinking for exploring how decisions are made by algorithms, and the impact that these decisions may have on our lives and the lives of others. Participants engaged in interesting discussions and extra decks of cards were requested by some attendees.
Year(s) Of Engagement Activity 2018
URL https://unbias.wp.horizon.ac.uk/2018/11/23/workshop-on-algorithmic-awareness-building-for-user-trust...
 
Description Ansgar Koene featured speaker at Purdue conference: "Policies for Progress" 
Form Of Engagement Activity A talk or presentation
Part Of Official Scheme? No
Geographic Reach International
Primary Audience Policymakers/politicians
Results and Impact The Purdue Policy Research Centre at Purdue University held the "Policies for Progress" conference, exploring ways to bring together policymakers, industry leaders, not-for-profits, and academics to bring their collective expertise to bear on wicked problems. "Policies for Progress" was the capstone event for the Breaking Through: Developing Multidisciplinary Solutions to Global Grand Challenges research project funded by The Andrew W. Mellon Foundation.

Experts from four multidisciplinary teams shared findings and results of their groundbreaking work on the Breaking Through research project. Stakeholders who were integrated into these projects discussed the successes, benefits, as well as challenges in partnering with academia.

In recognition of our extensive work on multi-stakeholder engagement in our past and current projects, Ansgar Koene was invited to speak about the work we did on the UnBias project, the development of the associated IEEE P7003 Standard for Algorithmic Bias Considerations, and our current activities in ReEnTrust.
Year(s) Of Engagement Activity 2019
URL https://www.purdue.edu/breaking-through/
 
Description Appearance on "Brainwaves: Love, Life, and Algorithms" radio programme, BBC Scotland, February 2019 
Form Of Engagement Activity A broadcast e.g. TV/radio/film/podcast (other than news/press)
Part Of Official Scheme? No
Geographic Reach National
Primary Audience Public/other audiences
Results and Impact Contributed to radio programme that explored impact of algorithms on daily life on national radio with a wide audience reach.
Year(s) Of Engagement Activity 2019
URL https://www.bbc.co.uk/programmes/m0002b1y
 
Description Article " We may not cooperate with friendly machines" in Nature Machine Intelligence, November 2019 
Form Of Engagement Activity A magazine, newsletter or online publication
Part Of Official Scheme? No
Geographic Reach International
Primary Audience Public/other audiences
Results and Impact Published review of and commentary on ethical and methodological issues in a recently published Nature paper that explored whether humans are likely to collaborate with AI systems depending on whether they are aware their counterpart is human or not. The authors of the original article acknowledged the critique and this will inform their future methodology.
Year(s) Of Engagement Activity 2019
URL https://www.nature.com/articles/s42256-019-0117-1
 
Description Articles in the Oxford University Computer Science Inspired Research Magazine 
Form Of Engagement Activity A magazine, newsletter or online publication
Part Of Official Scheme? No
Geographic Reach International
Primary Audience Public/other audiences
Results and Impact Inspired Research is a twice-yearly newsletter published by the Department of Computer Science, University of Oxford. It is widely circulated to all alumni as well as other general public audiences. The newsletter reports the most impactful research outcomes from the department over a six-month period. Our article provides a timely summary of our input to the ICO Code for Age Appropriate Design and a reflection on the anticipated impact of this new data protection regulation for safeguarding children's online safety, reaching both academic and non-academic audiences.
Year(s) Of Engagement Activity 2019
URL http://www.cs.ox.ac.uk/inspiredresearch/InspiredResearch-winter2019.pdf
 
Description Can we trust what we see online? Futurum online article 
Form Of Engagement Activity A magazine, newsletter or online publication
Part Of Official Scheme? No
Geographic Reach International
Primary Audience Schools
Results and Impact This article was produced by Futurum, a magazine and online platform aimed at inspiring young people to follow a career in the sciences, research and technology. For more information, teaching resources, and course and career guides, see www.futurumcareers.com
Year(s) Of Engagement Activity 2019
URL https://futurumcareers.com/can-we-trust-what-we-see-online
 
Description Contribution to 100+ Brilliant Women Conference chairing youth panel awards, Oxford 16th of Sept 
Form Of Engagement Activity Participation in an activity, workshop or similar
Part Of Official Scheme? No
Geographic Reach International
Primary Audience Undergraduate students
Results and Impact I was delighted to announce the winners from the schools' competition on AI
Year(s) Of Engagement Activity 2019
 
Description ESRC Annual Festival of Social Sciences, Nottingham 2018 
Form Of Engagement Activity A formal working group, expert panel or dialogue
Part Of Official Scheme? No
Geographic Reach Local
Primary Audience Public/other audiences
Results and Impact "Do you trust internet platforms?" Participatory and hands-on activity to help internet users to explore their online experiences when interacting with algorithm driven platforms. We run two workshops for people aged 16-25 and 65 years old and over, where a total of 17 participants took part.
This activity constituted a pilot study, part of an interdisciplinary research project between the Universities of Nottingham, Oxford and Edinburgh called ReEnTrust, which explores new technological opportunities to enhance/re-build user's trust, and identify how this may be achieved in ways that are user-driven and responsible. Preliminary data collected from this study has been extremely valuable and has contributed to refine particular research topics within our current project (ReEnTrust). Also, participants from this activity expressed their will to contribute to follow-up interviews and also to be part of the ReEnTrust advisory group.
Year(s) Of Engagement Activity 2018
URL https://www.horizon.ac.uk/reentrust-call-for-participants/
 
Description Ethical Hackathon 
Form Of Engagement Activity Participation in an activity, workshop or similar
Part Of Official Scheme? No
Geographic Reach Local
Primary Audience Postgraduate students
Results and Impact An 'ethical hackathon' is a novel event format developed by the Human Centred Computing group at the University of Oxford. It puts a twist on the traditional hackathon format by challenging groups of participants to identify ways to embed ethical considerations into the processes of technical design and development. The event in October 2019 involved postgraduate students at the Horizon CDT, University of Nottingham, in a day-long session. At the start of the session, leaders Liz Dowthwaite and Menisha Patel introduced key project themes and invited students to discuss issues connected to algorithmic controversies and governance. After the initial themes had been introduced and activities completed, students were put into groups and set a design challenge to work on. At the end of the session, the teams presented their designs in front of a panel of judges. The judges gave feedback on the presentations and prizes were awarded for the best designs.
Year(s) Of Engagement Activity 2019
 
Description Ethical Hackathon (two day- long events- Oxford and Nottingham) 
Form Of Engagement Activity Participation in an activity, workshop or similar
Part Of Official Scheme? No
Geographic Reach National
Primary Audience Postgraduate students
Results and Impact An 'ethical hackathon' is a novel event format developed by the Human Centred Computing group at the University of Oxford. It puts a twist on the traditional hackathon format by challenging groups of participants to identify ways to embed ethical considerations into the processes of technical design and development. The UnBias team ran ethical hackathons in November 2018 and January 2019, involving postgraduate students at the Horizon CDT, University of Nottingham, and the Cyber Security CDT, University of Oxford. These took the form of day-long sessions. At the start of each session, leaders Helena Webb and Menisha Patel introduced key project themes and invited students to discuss issues connected to algorithmic controversies and governance; during the Nottingham Ethical Hackathon, discussion was also facilitated by Liz Dowthwaite. After the initial themes had been introduced and activities completed, students were put into groups and set a design challenge to work on. At the end of each session, the teams presented their designs in front of a panel of judges from the UnBias team. The judges gave feedback on the presentations and prizes were awarded for the best designs.
Year(s) Of Engagement Activity 2018,2019
 
Description Ethics of AI panel at the Oxford UIDP summit 
Form Of Engagement Activity Participation in an activity, workshop or similar
Part Of Official Scheme? No
Geographic Reach National
Primary Audience Industry/Business
Results and Impact Marina Jirotka was a guest speaker on a panel discussing AI ethics. This was a special panel session run at the 2019 Oxford UIDP summit, which brought together high level staff in industry, academia and policy. As part of the panel she spoke about the RoboTIPS project and opportunities for university-industry-policy collaboration in responsible innovation.
Year(s) Of Engagement Activity 2019
URL https://uidp.org/event/oxford_uidp_summit/
 
Description Festival of Science and Curiosity (FOSAC), Nottingham City Library 
Form Of Engagement Activity Participation in an activity, workshop or similar
Part Of Official Scheme? No
Geographic Reach Local
Primary Audience Public/other audiences
Results and Impact Drop-in sessions to introduce people to the UnBias Awareness Cards and to discuss issues of online fairness. This activity was part of a wider outreach event and was run as part of the Impact Exploration Grant Award. We engaged with approximately 50 people, mostly families. Three decks of cards were requested by attendees who work in the education sector.
Year(s) Of Engagement Activity 2019
URL https://www.horizon.ac.uk/13229-2/
 
Description IEEE P70XX Working Groups writing sessions 
Form Of Engagement Activity A formal working group, expert panel or dialogue
Part Of Official Scheme? No
Geographic Reach International
Primary Audience Professional Practitioners
Results and Impact 3-Day meeting of delegates from all the IEEE P70xx working group members, to interact with different working groups and carry out intensive writing sessions for the standards.
Year(s) Of Engagement Activity 2019
 
Description Institute for Policy and Engagement launch- University of Nottingham. 
Form Of Engagement Activity Participation in an activity, workshop or similar
Part Of Official Scheme? No
Geographic Reach Local
Primary Audience Professional Practitioners
Results and Impact The aim of the event was to raise the profile of the Institute with colleagues, particularly academics. In particular, it was an opportunity for academic colleagues already engaged in policy and public engagement work to share their work with their peers.

Members of the UnBias/ReEnTrust team were invited to be one of the policy impact/public engagement stars of the event, and a slide summarising our main policy impact/public engagement outputs was on display during the event. Our work was also highlighted by Professor Dame Jessica Corner during her presentation, as one of the 4 policy impact/public engagement stars out of the 29 exhibited at the event.
Year(s) Of Engagement Activity 2019
URL http://blogs.nottingham.ac.uk/researchexchange/2019/02/12/institute-for-policy-and-engagement-launch...
 
Description Invited talk Toward Ethical AI (not more AI ethics), Situated Computing and Interaction Lab, University of Muenster, Germany, May 2019 
Form Of Engagement Activity A talk or presentation
Part Of Official Scheme? No
Geographic Reach Local
Primary Audience Postgraduate students
Results and Impact Presented an introductory talk on ethical AI to interdisciplinary group of around 40 postgraduate students and researchers.
Year(s) Of Engagement Activity 2019
 
Description Invited talk at Edinburgh Napier University, Edinburgh, UK, November 2019 
Form Of Engagement Activity A talk or presentation
Part Of Official Scheme? No
Geographic Reach Local
Primary Audience Professional Practitioners
Results and Impact Gave talk on Democratic Self-regulation and Fairness in Future Cyber Societies at School of Computing Seminar to audience of about 20, which led to further discussions on future research collaborations.
Year(s) Of Engagement Activity 2019
 
Description Invited talk at Mishcon de Reya LLP, London 
Form Of Engagement Activity A talk or presentation
Part Of Official Scheme? No
Geographic Reach Regional
Primary Audience Industry/Business
Results and Impact An invited presentation at a London law firm, Mishcon de Reya LLP, on algorithms and trust. Between 30 and 50 legal practitioners were present at the 2-hour event, participating in a practical session and debate about algorithms and trust, particularly from a legal perspective.
Year(s) Of Engagement Activity 2019
 
Description Invited talk at the AI@Oxford 
Form Of Engagement Activity A talk or presentation
Part Of Official Scheme? No
Geographic Reach National
Primary Audience Industry/Business
Results and Impact I was invited to give a presentation in the "Impact of Trust in AI" track at AI@Oxford 2019. The talk was attended by more than 50 participants from various sectors and was followed by several requests for further information from industrial attendees.
Year(s) Of Engagement Activity 2019
URL https://innovation.ox.ac.uk/innovation-news/events/aioxford-conference/conference-agenda/
 
Description Invited talk on Algorithmic Fairness, BHCC Symposium, Sheffield, UK, October 2019 
Form Of Engagement Activity A talk or presentation
Part Of Official Scheme? No
Geographic Reach International
Primary Audience Professional Practitioners
Results and Impact Gave invited talk in Symposium on Biases in Human Computation & Crowdsourcing to an interdisciplinary audience of around 40 academics and postgraduate students.
Year(s) Of Engagement Activity 2019
URL https://sites.google.com/sheffield.ac.uk/bhcc2019/program
 
Description Invited talk on Ethical Design of AI Systems at Asser Institute Winter Academy AI and International Law, The Hague, The Netherlands 
Form Of Engagement Activity A talk or presentation
Part Of Official Scheme? No
Geographic Reach International
Primary Audience Professional Practitioners
Results and Impact Gave an introductory talk to ethical AI and AI ethics to legal/public policy audience at international research school as invited speaker.
Year(s) Of Engagement Activity 2020
URL https://www.asser.nl/about-the-institute/asser-today/save-the-date-2020-winter-academy-on-artificial...
 
Description Keynote Speaker at the Royal College of Psychiatrists Annual Conference. Talk title: 'Internet Addiction or Persuasive Design' 
Form Of Engagement Activity A talk or presentation
Part Of Official Scheme? No
Geographic Reach International
Primary Audience Professional Practitioners
Results and Impact After the presentation, I was invited to contribute to the RCP report titled 'Technology use and the mental health of children and young people'.
Year(s) Of Engagement Activity 2019
 
Description Lets Talk About Tech Campaign video 
Form Of Engagement Activity Engagement focused website, blog or social media channel
Part Of Official Scheme? No
Geographic Reach National
Primary Audience Public/other audiences
Results and Impact Aligned with Children's Mental Health Week and in the run-up to Internet Safety Day on 11 February, we ran a campaign to showcase our research around internet safety, algorithms, and the impact on children's mental health and wellbeing. Digital mental health is a growing area of research at Nottingham, and in addition to supporting REF, it is an area in which we are looking to secure more funding. Our research has already influenced policy through the introduction of the Age Appropriate Design Code, and we aim to continue to be influential in this space.
Year(s) Of Engagement Activity 2020
URL https://twitter.com/UoNresearch/status/1224991085656793095?s=20
 
Description Massive Open Online Course on Data Ethics, AI, and Responsible Innovation 
Form Of Engagement Activity Engagement focused website, blog or social media channel
Part Of Official Scheme? No
Geographic Reach International
Primary Audience Public/other audiences
Results and Impact Co-led and contributed to content of a new online course that introduces a broad (non-technical and technical) audience to data ethics, AI, and responsible innovation. The course addresses key application domains where algorithmic systems are making important decisions in healthcare, policing, finance, and smart living scenarios, and also provides a practical introduction to basic ethical concepts and frameworks.
Year(s) Of Engagement Activity 2020
URL https://www.edx.org/course/Data-Ethics-AI-and-Responsible-Innovation?utm_source=Data-Ethics-Twitter-...
 
Description Navigating through Uncertainty and Unawareness - Blog for eNurture Network+ 
Form Of Engagement Activity Engagement focused website, blog or social media channel
Part Of Official Scheme? No
Geographic Reach International
Primary Audience Professional Practitioners
Results and Impact Blog published to mark Internet Safety Day.
Year(s) Of Engagement Activity 2020
URL https://www.enurture.org.uk/blog/2020/2/5/navigating-through-uncertainty-and-unawareness
 
Description Organising/Chairing Panel Discussion at HealTAC conference, Cardiff April 24-25 2019 
Form Of Engagement Activity Participation in an activity, workshop or similar
Part Of Official Scheme? No
Geographic Reach International
Primary Audience Postgraduate students
Results and Impact I was asked to organise a panel discussion at the HealTAC conference titled "Natural language processing in mental health: progress, challenges and opportunities".
Year(s) Of Engagement Activity 2019
 
Description Panel on Digital Markets and Consumer Welfare, Competition and Markets Authority Consumer Detriment Symposium, Edinburgh, September 2019 
Form Of Engagement Activity A talk or presentation
Part Of Official Scheme? No
Geographic Reach National
Primary Audience Policymakers/politicians
Results and Impact Participated in a discussion at an event organised by a key national regulator; this led to follow-up events and the exploration of further collaboration with the CMA.
Year(s) Of Engagement Activity 2019
 
Description Panel on Ethical Implications of AI, CyberUK 2019 Conference, Glasgow, April 2019 
Form Of Engagement Activity A talk or presentation
Part Of Official Scheme? No
Geographic Reach National
Primary Audience Industry/Business
Results and Impact Contributed to a panel at a national cybersecurity industry and government event.
Year(s) Of Engagement Activity 2019
URL https://www.google.com/url?sa=t&rct=j&q=&esrc=s&source=web&cd=1&ved=2ahUKEwiQ_LmQ6oroAhU0pnEKHbu6BBU...
 
Description Panel on Fairness in AI at Canada-UK-France Trilateral AI Ethics Workshop, London, June 2019 
Form Of Engagement Activity A talk or presentation
Part Of Official Scheme? No
Geographic Reach International
Primary Audience Professional Practitioners
Results and Impact Participated in a panel held at the Alan Turing Institute to foster trilateral collaboration in ethical AI, which led to further engagement with French and Canadian stakeholders.
Year(s) Of Engagement Activity 2019
URL https://www.turing.ac.uk/events/cifar-ukri-cnrs-ai-society-principles-practice
 
Description Panel on ethical issues around data at "Beyond" conference, Edinburgh, November 2019 
Form Of Engagement Activity A talk or presentation
Part Of Official Scheme? No
Geographic Reach National
Primary Audience Professional Practitioners
Results and Impact Participated in a panel on "Dark Data: Bias, Trust, and Inclusion" at a major creative industries conference. This led to subsequent requests for information and engagement from practitioners in the industry.
Year(s) Of Engagement Activity 2019
URL https://beyondconference.org/agenda
 
Description Panel on ethical issues at RBS DataFest industry event, Edinburgh, November 2019 
Form Of Engagement Activity A talk or presentation
Part Of Official Scheme? No
Geographic Reach Local
Primary Audience Industry/Business
Results and Impact Participated in a panel on Fairness, Data, and Ethics at the annual Royal Bank of Scotland conference, which led to the further development of our collaboration with the bank.
Year(s) Of Engagement Activity 2019
 
Description Panel on failures and biases in AI, We Need to Talk About AI Talks, Edinburgh, November 2019 
Form Of Engagement Activity A talk or presentation
Part Of Official Scheme? No
Geographic Reach Local
Primary Audience Postgraduate students
Results and Impact Participated in an interdisciplinary panel discussion on biases in AI systems as part of a student-led event.
Year(s) Of Engagement Activity 2019
URL https://www.ed.ac.uk/informatics/news-events/public/we-need-to-talk-about-ai/to-err-is-machine
 
Description Participation in the Oxford Idea Festival 2019 
Form Of Engagement Activity Participation in an activity, workshop or similar
Part Of Official Scheme? No
Geographic Reach Local
Primary Audience Public/other audiences
Results and Impact The key goal of the event was to raise public awareness of our research by exhibiting in a local shopping centre in East Oxford, reaching an audience that would otherwise be hard to reach. The exhibition was attended by several hundred people on the day.
Year(s) Of Engagement Activity 2019
URL http://if-oxford.com
 
Description Participation in an Ada Lovelace Panel and Q&A session 
Form Of Engagement Activity A talk or presentation
Part Of Official Scheme? No
Geographic Reach Local
Primary Audience Postgraduate students
Results and Impact Took part in the UoN Computer Science Ada Lovelace Day (27.11.2019); disseminated UnBias Awareness Cards among the audience and established new contacts for the ReEnTrust project.
Year(s) Of Engagement Activity 2019
 
Description Pilot survey for the Algorithm Playground 
Form Of Engagement Activity Participation in an activity, workshop or similar
Part Of Official Scheme? No
Geographic Reach Local
Primary Audience Public/other audiences
Results and Impact We ran a pilot survey prior to the large Algorithm Playground experiment to test the questionnaire's understandability and the time required to complete it. This took place during the Edinburgh Festival, where we attended a venue and asked passers-by at random to complete the questionnaire.
Year(s) Of Engagement Activity 2019
 
Description Presentation at AI Ethics in the Financial Sector Conference, The Alan Turing Institute, London, July 2019 
Form Of Engagement Activity A talk or presentation
Part Of Official Scheme? No
Geographic Reach National
Primary Audience Industry/Business
Results and Impact Gave a talk to around 80 senior representatives of the financial industry, government, and the third sector at a national meeting organised by the Alan Turing Institute. This led to key contacts for future engagement with the industry.
Year(s) Of Engagement Activity 2019
URL https://www.turing.ac.uk/events/ai-ethics-financial-sector
 
Description Presentation at Royal Institution 'Designing the Future' event 
Form Of Engagement Activity A talk or presentation
Part Of Official Scheme? No
Geographic Reach International
Primary Audience Industry/Business
Results and Impact On Thursday 13 February, Designing the Future was hosted at the Royal Institution in London, attended by more than 250 guests and broadcast live. It featured a series of interactive, Christmas Lectures-style talks followed by conversation and debate. Prof Marina Jirotka was one of the guest lecturers and looked at the future of quantum computing.
Year(s) Of Engagement Activity 2020
URL https://hoarelea.com/2020/02/19/designing-the-future-our-national-event/
 
Description Presentation at the Cityforum Policing Summit 2020 
Form Of Engagement Activity A talk or presentation
Part Of Official Scheme? No
Geographic Reach National
Primary Audience Policymakers/politicians
Results and Impact An invited presentation at the Cityforum Policing the Nation Round Table 2020, discussing ethical issues related to data sharing, data access, and data analytics. The event was attended by 250 participants from a range of sectors, including government, academia, and industry.
Year(s) Of Engagement Activity 2020
 
Description Presentation to Institute of Practitioners in Advertising, Edinburgh, December 2018 
Form Of Engagement Activity A talk or presentation
Part Of Official Scheme? No
Geographic Reach Regional
Primary Audience Industry/Business
Results and Impact Gave an introductory presentation titled "AI and you", introducing professionals from the advertising industry to AI and its ethical issues.
Year(s) Of Engagement Activity 2019
 
Description Presentation to Scottish Investment Operations on "Demystifying AI", Edinburgh, October 2019 
Form Of Engagement Activity A talk or presentation
Part Of Official Scheme? No
Geographic Reach Local
Primary Audience Industry/Business
Results and Impact Gave a presentation introducing data science, AI, and ethics concepts to a financial industry audience of around 60. The talk was very well received and established a longer-term collaboration with the organisation.
Year(s) Of Engagement Activity 2019
URL https://www.sio.org.uk/news/sio-event-28-october-demystifying-data
 
Description ReEnTrust Advisory Group 
Form Of Engagement Activity A formal working group, expert panel or dialogue
Part Of Official Scheme? No
Geographic Reach Local
Primary Audience Public/other audiences
Results and Impact We established the ReEnTrust Advisory Group (AG), aimed at the co-creation of research activities and resources within this project.
It comprises twelve people: eight aged between 16 and 25, and four aged 65 and over. In 2019 we held three AG meetings before running the first set (wave 1) of research workshops, which allowed good co-creation with the AG and fine-tuning of the scenarios used in the wave 1 workshops.
Members' contributions have been extremely useful, with participants sharing many relevant points that were all incorporated into the subsequent write-up of the scenarios used in our research workshops. Moreover, one young member offered to produce the mock-ups of the screenshots used in those scenarios.
We are currently planning the wave 2 workshops, have run an initial brainstorming session with the AG, and will be running more co-creation sessions in 2020.
Year(s) Of Engagement Activity 2019
 
Description Responsible Innovation panel AI@Oxford conference 2019 
Form Of Engagement Activity Participation in an activity, workshop or similar
Part Of Official Scheme? No
Geographic Reach International
Primary Audience Industry/Business
Results and Impact Marina Jirotka was invited to talk on an expert panel at the AI@Oxford conference 2019. The panel discussed opportunities to embed responsible innovation into the development of AI supported technologies. Marina Jirotka spoke about the RoboTIPS project as an instance of a project that seeks to foster responsibility in technology across phases of design, development and implementation.
Year(s) Of Engagement Activity 2019
URL https://innovation.ox.ac.uk/innovation-news/events/aioxford-conference/conference-agenda/
 
Description STEM ambassador 
Form Of Engagement Activity Participation in an activity, workshop or similar
Part Of Official Scheme? No
Geographic Reach National
Primary Audience Public/other audiences
Results and Impact I am a STEM ambassador as part of the scheme run by STEM Learning. I take part in STEM engagement events under this scheme.
Year(s) Of Engagement Activity 2018,2019,2020
 
Description The Human Bias in AI. Change Forum: Product and Data, London 
Form Of Engagement Activity Participation in an activity, workshop or similar
Part Of Official Scheme? No
Geographic Reach National
Primary Audience Media (as a channel to the public)
Results and Impact Interactive workshop aimed mainly at media industry stakeholders. The aim of the session was to build understanding and promote discussion around the use of algorithms in online platforms. Members of our team ran activities using the UnBias Awareness Cards, designed to encourage critical and civic thinking about how decisions are made by algorithms and the impact that these decisions may have on our lives and the lives of others. The audience engaged in interesting discussions, and the facilitators from our research team were invited to take part in future related activities (RCUK workshops).
Year(s) Of Engagement Activity 2019
URL https://www.changeforum.co/
 
Description The Internet and You - widening participation 
Form Of Engagement Activity Participation in an open day or visit at my research institution
Part Of Official Scheme? No
Geographic Reach Local
Primary Audience Public/other audiences
Results and Impact This was an opportunity to showcase our work 'The Internet and You' at an event organised by the University of Nottingham; our audience was mainly primary school-aged children.
Year(s) Of Engagement Activity 2019
 
Description Trust breaching workshop with hotel booking scenario 
Form Of Engagement Activity Participation in an activity, workshop or similar
Part Of Official Scheme? No
Geographic Reach Local
Primary Audience Public/other audiences
Results and Impact The first-wave workshops involved a trust-breaching task, whose purpose was to have participants book a hotel using a fake booking website we had implemented and to observe at which point they realised it was fake and lost trust. This experiment helped us identify trust break points and the impact of different commercial policies on them.
Year(s) Of Engagement Activity 2019
 
Description Tutorial on AI Ethics at International Joint Conference on AI (IJCAI 2019), Macau, Aug 2019 
Form Of Engagement Activity A talk or presentation
Part Of Official Scheme? No
Geographic Reach International
Primary Audience Professional Practitioners
Results and Impact Presented a three-hour introductory tutorial at one of the largest international AI conferences to a mostly technical research/professional/student audience. This was attended by around 60 people.
Year(s) Of Engagement Activity 2019
URL https://www.ijcai19.org/tutorials.html
 
Description Tutorial on AI Ethics, Advanced Course on AI (ACAI 2019), Chania, Greece, July 2019 
Form Of Engagement Activity A talk or presentation
Part Of Official Scheme? No
Geographic Reach International
Primary Audience Postgraduate students
Results and Impact Gave four-hour introductory tutorial to an audience of around 40 at international research school.
Year(s) Of Engagement Activity 2019
URL http://acai2019.tuc.gr/?page_id=489
 
Description University of Nottingham Vision magazine - Safe Space 
Form Of Engagement Activity A magazine, newsletter or online publication
Part Of Official Scheme? No
Geographic Reach National
Primary Audience Media (as a channel to the public)
Results and Impact 'Safe Space': researchers are placing the views and experiences of young people at the heart of policy, making the internet a safer place.
This publication disseminates research conducted at the University of Nottingham to a wider audience, both academic and non-academic.
Year(s) Of Engagement Activity 2019
URL https://www.nottingham.ac.uk/vision/safe-space
 
Description Vision - Safe Space 
Form Of Engagement Activity A magazine, newsletter or online publication
Part Of Official Scheme? No
Geographic Reach Regional
Primary Audience Public/other audiences
Results and Impact Interview (by Dr Elvira Perez) and subsequent article for the University of Nottingham Vision magazine, highlighting present and historical research activity on protecting young people and children online (placing the views and experiences of young people at the heart of policy, making the internet a safer place).
Year(s) Of Engagement Activity 2019
URL https://www.nottingham.ac.uk/vision/safe-space
 
Description Visit to DCMS 
Form Of Engagement Activity A formal working group, expert panel or dialogue
Part Of Official Scheme? No
Geographic Reach National
Primary Audience Policymakers/politicians
Results and Impact Researchers at the University of Nottingham, together with CDT students, were invited to DCMS to provide presentations and expert advice on:

Monetising Services - Joseph Hubbard-Bailey, Kate Green
• How do companies providing a 'free' service make money?
• How is data monetised?
• How do cookies and advertising work?

Moderating online services - Ansgar Koene, Elvira Perez Vallejos, Liz Dowthwaite
• How does China prevent access to sites?
• How does AI (and related techniques such as machine learning) work? How is it used to moderate content?
• How does the process of trusted flaggers work?

Encryption - Derek McAuley
• How does encryption, including end-to-end encryption affect the ability to identify harmful content?
• Where are online services going in terms of encryption and what risks does this bring?
• What is DoH (DNS over HTTPS) and how might it impact the ability of companies/regulators to moderate content?
Year(s) Of Engagement Activity 2019
 
Description Workshop for the 3rd Age University 
Form Of Engagement Activity Participation in an activity, workshop or similar
Part Of Official Scheme? No
Geographic Reach Local
Primary Audience Third sector organisations
Results and Impact We provided an overview of Horizon research, including hands-on activities, to approximately 30 members of the 3rd Age University.
Year(s) Of Engagement Activity 2019