ReEnTrust: Rebuilding and Enhancing Trust in Algorithms

Lead Research Organisation: University of Oxford
Department Name: Computer Science

Abstract

As interaction on online Web-based platforms is becoming an essential part of people's everyday lives and data-driven AI algorithms are starting to exert a massive influence on society, we are experiencing significant tensions in user perspectives regarding how these algorithms are used on the Web. These tensions result in a breakdown of trust: users do not know when to trust the outcomes of algorithmic processes and, consequently, the platforms that use them. As trust is a key component of the Digital Economy where algorithmic decisions affect citizens' everyday lives, this is a significant issue that requires addressing.

ReEnTrust explores new technological opportunities for platforms to regain user trust and aims to identify how this may be achieved in ways that are user-driven and responsible. Focusing on AI algorithms and large scale platforms used by the general public, our research questions include: What are user expectations and requirements regarding the rebuilding of trust in algorithmic systems, once that trust has been lost? Is it possible to create technological solutions that rebuild trust by embedding values in recommendation, prediction, and information filtering algorithms and allowing for a productive debate on algorithm design between all stakeholders? To what extent can user trust be regained through technological solutions and what further trust rebuilding mechanisms might be necessary and appropriate, including policy, regulation, and education?

The project will develop an experimental online tool that allows users to evaluate and critique algorithms used by online platforms, and to engage in dialogue and collective reflection with all relevant stakeholders in order to jointly recover from algorithmic behaviour that has caused loss of trust. For this purpose, we will develop novel, advanced AI-driven mediation support techniques that allow all parties to explain their views, and suggest possible compromise solutions. Extensive engagement with users, stakeholders, and platform service providers in the process of developing this online tool will result in an improved understanding of what makes AI algorithms trustable. We will also develop policy recommendations and requirements for technological solutions plus assessment criteria for the inclusion of trust relationships in the development of algorithmically mediated systems and a methodology for deriving a "trust index" for online platforms that allows users to assess the trustability of platforms easily.

The project is led by the University of Oxford in collaboration with the Universities of Edinburgh and Nottingham. Edinburgh develops novel computational techniques to evaluate and critique the values embedded in algorithms, and a prototypical AI-supported platform that enables users to exchange opinions regarding algorithm failures and to jointly agree on how to "fix" the algorithms in question to rebuild trust. The Oxford and Nottingham teams develop methodologies that support the user-centred and responsible development of these tools. This involves studying the processes of trust breakdown and rebuilding in online platforms, and developing a Responsible Research and Innovation approach to understanding trustability and trust rebuilding in practice. A carefully selected set of industrial and other non-academic partners ensures ReEnTrust work is grounded in real-world examples and experiences, and that it embeds balanced, fair representation of all stakeholder groups.

ReEnTrust will advance the state of the art in terms of trust rebuilding technologies for algorithm-driven online platforms by developing the first AI-supported mediation and conflict resolution techniques and a comprehensive user-centred design and Responsible Research and Innovation framework that will promote a shared responsibility approach to the use of algorithms in society, thereby contributing to a flourishing Digital Economy.

Planned Impact

In terms of knowledge, key communities across AI, computer science and the social sciences will benefit from our research. ReEnTrust will develop new technical insights into the capabilities of advanced AI techniques to support the process of rebuilding trust in online platforms by assisting mediation and conflict resolution. By bringing together techniques from different areas of AI, we will produce novel models that are expected both to provide each of these areas with new applications for their methods, and to open up new avenues for research into trust rebuilding methods.

This core technical work will be embedded in an extensive programme of human factors and Responsible Research and Innovation work that will encompass extensive empirical work with users and stakeholder groups and address the policy and education dimensions of improving the trustability of online algorithm-driven platforms. The results of this work will benefit social science disciplines and applied computing research by providing new methodologies for the responsible design of algorithmically driven systems. They will also benefit government, NGOs, professional and regulatory bodies by providing case studies, design principles, and policy guidelines that can be used to raise awareness and shape their future strategies and activities.

In terms of economic impact, trust rebuilding technologies not only provide important opportunities to de-risk future uses of AI-driven online services, but also open up new directions to exploit untapped business opportunities around socially responsible AI and "trustability services". Work on a "trust index" as specified in our programme of work will be an essential part of this, as it provides an accessible way to communicate our work to industrial stakeholders, but may also open up new business opportunities surrounding trust certification. In order to enable business leaders and future entrepreneurs to benefit from these opportunities, we will use Horizon's network of over 200 commercial partners to disseminate findings and encourage participation in relevant events. We will also explore opportunities for engaging in commercial spin-off activities of our research ourselves.

In terms of societal impact, a major outcome of the project will be new insights into how trustable AI-powered systems should be "collectively engineered" in the future. These outcomes of ReEnTrust will not only improve wellbeing by helping people address and resolve trust breakdown situations, but also provide a mechanism for collective reflection about the values and conduct of both platforms and users, fostering a culture of accountability and shared responsibility. Beyond the reported anxiety and uncertainty, feelings of disempowerment, defeatism, and loss of faith in articulating societal demands through regulatory and legal institutions, there is currently a real threat that loss of trust in algorithms will turn into adversarial behaviour toward platforms and their providers. Our research will benefit society at large by offering new solutions to rebuild and enhance trust in AI algorithms that will help prevent the emergence of an "us against them" culture, and thus contribute to a healthy, resilient Digital Economy.
 
Title ReEnTrust: Algorithms and Us 
Description The ReEnTrust project was a follow-up to the UnBias project and Horizon underpinning research (CaSMa) - a collaboration between the Human Centred Computing group at the University of Oxford, the School of Informatics at the University of Edinburgh, and HORIZON Digital Economy Research at the University of Nottingham. It focused on rebuilding and enhancing trust in algorithms. This video is part of the project's dissemination of results, unpacking some of the mystery of the algorithms encountered in everyday life. 
Type Of Art Film/Video/Animation 
Year Produced 2022 
Impact The video has been widely shared to raise awareness and build a greater understanding of 'algorithms' 
URL https://www.youtube.com/watch?v=BBuMsw5E_E0
 
Title UnBias AI for Decision Makers Toolkit 
Description AI for Decision Makers toolkit . AI4DM is published under the same Creative Commons license as before UnBias Toolkit (Attribution, Non Commercial, Share Alike). The "DIY version" is open and free to download and make up with commonly available stationery . There is a physical box set of cards which can be ordered "print on demand" just like the previous set too: https://www.makeplayingcards.com/sell/marketplace/unbias-ai-for-decision-makers.html Dr Ansgar is aiming to talk about the toolkit at various industry orientated conferences during 2020 - TBC - potentials being: International Conference on Robotics, Automation & Artificial intelligence Systems Conference https://icraais.com/ MyData2020 https://online2020.mydata.org/ Digital Leaders Week https://week.digileaders.com/speakers/ansgar-koene/ 
Type Of Art Artefact (including digital) 
Year Produced 2020 
Impact Production of this toolkit was commissioned from Proboscis by Ernst & Young under a CC BY-SA licence and launched by Proboscis: https://www.indiegogo.com/projects/unbias-ai-for-decision-makers/ The toolkit is open source and was launched on Sept 15th with a crowdfunding campaign to raise funds to manufacture a print run and make the kit more affordable than the print-on-demand option (through economies of scale). The campaign also offers the original UnBias Fairness Toolkit, increasing its impact by reaching new audiences who may not have had access to it previously. 
URL https://www.makeplayingcards.com/sell/marketplace/unbias-ai-for-decision-makers.html
 
Description The ReEnTrust project aims to develop new technologies and understandings to enable the rebuilding of user trust on algorithm-driven online platforms. Fundamental to the ReEnTrust project is its emphasis on co-creation with all relevant stakeholders and a responsible approach to trust-enhancing technology innovation. Therefore, throughout the project, we have ensured an open and inclusive development approach so that stakeholder concerns are embedded in the processes and outcomes of our research. Over the last year, we have made great progress on our core research questions, regarding 1) an understanding of user expectations and requirements for rebuilding trust in algorithmic systems; 2) the development of a trust index to track levels of user trust in algorithmic systems; and 3) an understanding of how much user trust can be supported through technological solutions.

To summarise, we have achieved the following key findings over the last year:

(i) Development of two new technological tools: 1) a new version of our algorithm exploration sandbox tool in a new, more critical application context --- online job application decision making; and 2) a new trust mediation tool, co-created with users, that aims to help users develop trust in algorithmic systems in a news feed aggregator application context, based on novel mediation techniques and users' inputs.

(ii) Understanding the effectiveness of our technological tools: based on feedback from stakeholders and our advisory board members, we explored and compared users' perceptions of trust in a more critical decision-making context and a less critical one. We confirmed our previous findings regarding the importance of algorithm explanations for eliciting users' opinions of trust. We also confirmed that, in addition to explanations, users require increased transparency, such as more technical information about the platform (security, user data management, etc.), the reliability of inputs, and developers' experience. However, we also noticed that trust is highly contextual: in the more critical scenario, people showed a stronger preference for algorithms based on their functions.

(iii) Quantifying users' level of trust: There are no existing measures of online wellbeing, despite the internet being a ubiquitous presence in our lives. ReEnTrust has been working to create an online wellbeing scale that covers both psychological and subjective wellbeing. This year we followed up with a larger online study (n=300) using a refined scale and found that, overall, internet users experience more positive wellbeing than negative. We also found some interesting relationships between trust and wellbeing, such that more trusting people experience higher levels of wellbeing, which has important implications. This online wellbeing scale prototype is undergoing further testing.


(iv) The importance of a Responsible Innovation approach: Finally, we finalised our data collection regarding the application of the RRI approach in our project, to ensure that our trust rebuilding systems are socially acceptable and reflect users' values. This has yielded a practical methodology for applying RRI as a bespoke process within a cross-disciplinary research team, and critical lessons for the RRI community regarding its successes and future challenges.
Exploitation Route We have taken the project forward by leading and participating in several new funding applications, such as the recent EPSRC call on Trustworthy Autonomous Systems hubs and nodes. Prof. Marina Jirotka has been awarded an EPSRC Advanced Fellowship to work on responsible and trustworthy robotics for the future, and Senior Researcher on the team Dr Jun Zhao has submitted an EPSRC Early Career Fellowship application on a related topic.
Sectors Digital/Communication/Information Technologies (including Software)

URL https://reentrust.org
 
Description The ReEnTrust project aims to produce tools and best-practice guidelines for the development of trust rebuilding technologies that are societally acceptable. Building upon our strong connections to regulatory organisations and policy development agencies, and our extensive public engagement experience, the ReEnTrust project has produced substantial impact in several areas, even though the global COVID outbreak significantly reduced the variety and quantity of activities we could carry out over the last year. (i) Raising public awareness and wellbeing: Opportunities to engage directly with the general public have been limited this year. However, our team members have continued to take part in various school outreach and department open day virtual events, raising young people's awareness of related issues. For example, we have engaged with over 100 young people to explore the trustworthiness of specific online resources aiming to increase mental health literacy, as well as raising awareness of the impact that social media can have on young people's mental health. Furthermore, participants in our user studies also benefited from increased awareness of algorithms in major areas such as online shopping or critical decision making. Participants described an increased awareness of the function and impact of algorithms that they had not anticipated previously, and pointed out that algorithmic explanations were useful resources for raising their awareness and understanding. The Ethical Hackathon is a format derived from our previous project UnBias, which proved to be a powerful form of engagement for provoking deeper thought about how to develop technologies more responsibly, by encouraging hackathon participants to consider the ethical implications of each of their design choices. In ReEnTrust, we carried out three successful Ethical Hackathons during our first year. 
Although we were planning to continue these productive events, we had to redesign their format due to the lockdown. Driven by demand from several student organisations in Oxford, our team organised the first virtual workshop on Responsible Innovation. Nearly 20 postgraduate students and postdoctoral researchers from a variety of departments at Oxford University participated and actively engaged in the two-hour virtual workshop, discussing the implications of responsible innovation in today's AI-based algorithmic systems. A responsible innovation approach, with regular cross-project workshops, foregrounded societal concerns and interaction between team members of different backgrounds and disciplines; for many of these participants RI was new, so the workshop provided a valuable knowledge transfer opportunity for young researchers. The workshop led to fruitful reflections on the issues and challenges of applying RRI in real applications. In December 2021 we released a video titled Algorithms and Us, aimed at a general public audience, capturing our research findings on how the effects of algorithms are perceived and calling for more support for transparency and user autonomy. To date the video has had over 100 views from the general public and over 1000 impressions on Twitter (and counting). The same video is also hosted on the Department's website to continue engaging with the public. We will continue to seek other opportunities to promote the video. (ii) Engagement with diverse stakeholder groups: This year, we carried out 27 new engagement activities. We continued existing activities such as (virtual) invited seminars to exchange our research findings with other academic institutions. Furthermore, through online events, we also reached new groups of stakeholders, including technical experts, financial sector representatives and NGOs. 
Notably, Prof Rovatsos was invited to present at internal events at HSBC, RBS NatWest and the Financial Conduct Authority, leading to close and direct conversations with financial practitioners and broadening our research impact. Furthermore, Dr Koene was invited to participate in the World Economic Forum's online event on AI and ethics, discussing the future of work for young people and the implications of AI at this critical time of change in our society. Dr Elvira Perez Vallejos was invited by the Royal College of Psychiatrists to report on the impact of the online world on young people's mental health. At this critical time, our project team members have made full use of novel engagement channels, such as podcasts and YouTube broadcasts, to reach global audiences, e.g. Asian Pathfinders, EPRA and UNESCO. (iii) Policy engagement: Our policy outputs have been cited in several publications, including the European Parliament briefing report EU guidelines on ethics in artificial intelligence: Context and Implementation, and a draft EU Committee on Legal Affairs report (April 2020). Furthermore, we continued to contribute actively to critical national and international policy consultations, including UNESCO's Consultation on the Recommendation on the Ethics of Artificial Intelligence (July 2020) and EPSRC's UKRI Artificial Intelligence and Public Engagement workshop. One of our key policy outcomes this year was a virtual policy engagement event in December 2020, held jointly with the All-Party Parliamentary Group on Data Analytics. The event was chaired by the APGDA Chair, Daniel Zeichner MP, and included contributions from several senior researchers from the ReEnTrust team. 
We presented four practical policy recommendations: i) increasing awareness-raising about algorithms; ii) considering diverse users' needs during future technology innovation; iii) explanations and transparency; and iv) supporting user empowerment, each addressing a different aspect of what is needed for better trust development in today's digital society. The event led to several contributions regarding the report and the wider challenges associated with trust in data ethics, including from Lord Wallace of Saltaire, who noted the divergence in approaches to trust between younger and older people concerning the use of data-driven technologies. Participants also noted that data trust issues relate as much to the platform or institution as to the technology used. The success of this event has led to the planning of another joint virtual round table event before the completion of the ReEnTrust project, in conjunction with a more complete policy report. (iv) Standards development activities: Members of the project continued to work actively in various standardisation groups, collaborating closely with various industrial stakeholders, including the IEEE-SA P7003 Standard: Algorithmic Bias Considerations (led by Dr Koene, with Dr Dowthwaite as secretary), the IEEE P2089 Standard for Age Appropriate Terms and Conditions, and the Confederation of British Industry AI working group. The IEEE P2089 group has recently published its first draft standard, with contributions from Dr Zhao and Dr Koene from the ReEnTrust project.
First Year Of Impact 2022
Sector Digital/Communication/Information Technologies (including Software)
Impact Types Societal,Policy & public services

 
Description APGDA and ReEnTrust host webinar on rebuilding and enhancing trust in algorithms
Geographic Reach National 
Policy Influence Type Implementation circular/rapid advice/letter to e.g. Ministry of Health
URL https://www.policyconnect.org.uk/appgda/news/apgda-and-reentrust-host-webinar-rebuilding-and-enhanci...
 
Description AlgoAware workshop. Ansgar Koene, Brussels
Geographic Reach Europe 
Policy Influence Type Participation in an advisory committee
Impact Contribution to a high-level expert discussion with representatives from the European Parliament, the European Commission and civil society. The recent "State of the Art" report on automated decision-making by AlgorithmWatch (https://www.algoaware.eu/state-of-the-art-report/) was launched at this event. The UnBias project is mentioned as an example of an initiative by academic research groups to develop policy and technical tools (page 10), in particular with the UnBias "Fairness Toolkit" cited as a "clear example". The report also includes a summary of the aims of the UnBias project (page 57) and the elements of the "Fairness Toolkit" (with references to the relevant webpages).
URL https://www.algoaware.eu/2018/11/14/29-jan-2019-automating-society-taking-stock-of-automated-decisio...
 
Description Ansgar Koene participated in a round table discussion for the Royal United Services Institute (RUSI) for Defence and Security Studies, for the Briefing Paper "Data Analytics and Algorithmic Bias in Policing"
Geographic Reach National 
Policy Influence Type Participation in an advisory committee
URL https://assets.publishing.service.gov.uk/government/uploads/system/uploads/attachment_data/file/8317...
 
Description CDI Landscape Summary: Bias in Algorithmic Decision-Making cited in RUSI for Defence and Security Studies Briefing Paper
Geographic Reach National 
Policy Influence Type Citation in other policy documents
URL https://assets.publishing.service.gov.uk/government/uploads/system/uploads/attachment_data/file/8317...
 
Description Citation of "A governance framework for algorithmic accountability and transparency" in draft EU Committee on Legal Affairs report
Geographic Reach Europe 
Policy Influence Type Citation in other policy documents
URL https://www.europarl.europa.eu/doceo/document/JURI-PR-650508_EN.pdf
 
Description Comments on the European Data Protection Board's Guidelines 4/2019 on Article 25 Data Protection by Design and by Default
Geographic Reach Europe 
Policy Influence Type Participation in a national consultation
URL https://nottingham-repository.worktribe.com/preview/3774957/comments_on_edpb_guidelines_on_a_25_dpbd...
 
Description Dr Ansgar Koene invited as Chair for IEEE Computer Society AI Standards Committee
Geographic Reach Multiple continents/international 
Policy Influence Type Participation in an advisory committee
URL https://www.computer.org/publications/technews/insider/ai-standards-kick-off
 
Description Dr Ansgar Koene invited by UNESCO to contribute towards the Virtual Regional Consultation on the 1st draft of Recommendation on the Ethics of Artificial Intelligence (AI) on 27-28 July 2020
Geographic Reach Europe 
Policy Influence Type Participation in a national consultation
URL https://events.unesco.org/event?id=2763453844&lang=1033
 
Description Dr Ansgar Koene invited by the Chartered Governance Institute UK & Ireland to participate in online webinar contributing towards CPD
Geographic Reach National 
Policy Influence Type Influenced training of practitioners or researchers
URL https://www.cgi.org.uk/events/networking-and-cpd-events/cpd-events/ai-and-boardroom-technology
 
Description Dr Koene's report for the EU, entitled Governance frameworks for Algorithmic Accountability and Transparency, is cited in the European Parliament briefing report EU guidelines on ethics in artificial intelligence: Context and Implementation
Geographic Reach Europe 
Policy Influence Type Citation in other policy documents
URL https://www.europarl.europa.eu/thinktank/en/document.html?reference=EPRS_BRI(2019)640163
 
 
Description Dr Koene invited to be on the Advisory Committee of the proposal FINDER, WSSC
Geographic Reach Europe 
Policy Influence Type Influenced training of practitioners or researchers
URL https://www.upf.edu/web/wssc/
 
Description Dr Koene was invited to present at an online workshop on December 14th 2020 to Bath University CDT students, entitled AI related standards, for Bath's CDT in Accountable, Responsible and Transparent AI. Dr Koene presented on Algorithmic Bias, as working group chair for the IEEE Standard on Algorithm Bias Considerations (P7003)
Geographic Reach Local/Municipal/Regional 
Policy Influence Type Influenced training of practitioners or researchers
URL https://cdt-art-ai.ac.uk/news/events/the-global-ai-standards-landscape-an-extended-seminar/
 
Description EPSRC - UKRI Artificial Intelligence and Public Engagement workshop
Geographic Reach National 
Policy Influence Type Participation in an advisory committee
 
Description Input of research evidence towards a POSTNote Online Safety Education, UK Parliament
Geographic Reach National 
Policy Influence Type Implementation circular/rapid advice/letter to e.g. Ministry of Health
URL https://researchbriefings.parliament.uk/ResearchBriefing/Summary/POST-PN-0608
 
Description Joint Comments to ICO's Draft Code of Age appropriate design from ReEnTrust Oxford Team
Geographic Reach National 
Policy Influence Type Participation in a national consultation
URL https://ico.org.uk/about-the-ico/ico-and-stakeholder-consultations/age-appropriate-design-a-code-of-...
 
Description Joint letter of response (with 5Rights) to FTC regarding COPPA Rule
Geographic Reach Multiple continents/international 
Policy Influence Type Implementation circular/rapid advice/letter to e.g. Ministry of Health
Impact Children's Online Privacy Protection Act TAGS: Consumer Protection MISSION: Consumer Protection LAW: 15 U.S.C. §§ 6501-6506 LINK: http://uscode.house.gov/view.xhtml?req=granuleid%3AUSC-prelim-title15-section650... This Act protects children's privacy by giving parents tools to control what information is collected from their children online. The Act requires the Commission to promulgate regulations requiring operators of commercial websites and online services directed to children under 13 or knowingly collecting personal information from children under 13 to: (a) notify parents of their information practices; (b) obtain verifiable parental consent for the collection, use, or disclosure of children's personal information; (c) let parents prevent further maintenance or use or future collection of their child's personal information; (d) provide parents access to their child's personal information; (e) not require a child to provide more personal information than is reasonably necessary to participate in an activity; and (f) maintain reasonable procedures to protect the confidentiality, security, and integrity of the personal information. In order to encourage active industry self-regulation, the Act also includes a "safe harbor" provision allowing industry groups and others to request Commission approval of self-regulatory guidelines to govern participating websites' compliance with the Rule. Link to call for public comment by FTC: https://www.regulations.gov/document?D=FTC-2019-0054-0001
URL https://uscode.house.gov/view.xhtml?req=granuleid%3AUSC-prelim-title15-section6501&edition=prelim
 
Description Led work on government commissioned Landscape Summary on Bias in Algorithmic Decision-Making for Centre for Data Ethics and Innovation
Geographic Reach National 
Policy Influence Type Gave evidence to a government review
URL https://www.gov.uk/government/publications/landscape-summaries-commissioned-by-the-centre-for-data-e...
 
Description Member of Steering Committee for the APGDA Policy Connect report Trust, Transparency, Tech on Data and Technology Ethics
Geographic Reach National 
Policy Influence Type Participation in an advisory committee
URL https://www.policyconnect.org.uk/appgda/sites/site_appgda/files/report/454/fieldreportdownload/trust...
 
Description ORBIT RRI training workshop to TAS Hub
Geographic Reach Local/Municipal/Regional 
Policy Influence Type Influenced training of practitioners or researchers
URL https://www.tas.ac.uk/responsible-research-and-innovation/
 
Description ReEnTrust Joint Comments to the ICO and The Turing Institute's Consultation on Explaining AI
Geographic Reach National 
Policy Influence Type Participation in a national consultation
URL https://ico.org.uk/about-the-ico/ico-and-stakeholder-consultations/ico-and-the-turing-consultation-o...
 
Description ReEnTrust project team invited to participate in an online briefing event hosted by the All Party Parliamentary Group on Data Analytics, about their research on rebuilding and enhancing trust in algorithms. "This work informs both policymakers, businesses and economists by suggesting new approaches and technology that will enhance trust in algorithms and therefore enable online platforms to make the most out of such technology during a time when we are heavily reliant on online economy."
Geographic Reach National 
Policy Influence Type Influenced training of practitioners or researchers
URL https://www.policyconnect.org.uk/appgda/events/rebuilding-and-enhancing-trust-algorithms
 
Description Submission to UK Government call for evidence - The right to privacy: digital data
Geographic Reach National 
Policy Influence Type Participation in a national consultation
URL https://nottingham-repository.worktribe.com/output/7466129
 
Description UnBias project cited in policy brief for the All Party Parliamentary Group on Data Analytics: rebuilding and enhancing trust in algorithms
Geographic Reach National 
Policy Influence Type Citation in other policy documents
URL https://www.policyconnect.org.uk/appgda/news/apgda-and-reentrust-host-webinar-rebuilding-and-enhanci...
 
Title Artificial database generated for hotel recommendation algorithm scenario (led by the Edinburgh team) 
Description For the development of our research tool prototype, Algorithm Playground, we needed a large fictional dataset of users, hotels and their booking history. To generate this dataset, we reused data from an existing hotel database containing 100+ hotels in Paris, and generated thousands of fictional user profiles using Mockaroo, a synthetic data generation tool. 
Type Of Material Database/Collection of data 
Year Produced 2019 
Provided To Others? No  
Impact This dataset has been critical for the development of our prototype platform and for exploring the impact of algorithm explanations on users' perception of trust. We do not have any external impact yet. 
 
Title Fictional dataset for understanding the meaningfulness of algorithmic explanations for human participants
Description The dataset is based on one originally published by the Princeton Dialogues on AI and Ethics (https://aiethics.princeton.edu/wp-content/uploads/sites/587/2018/12/Princeton-AI-Ethics-Case-Study-5.pdf). It is a unique free dataset that contains information about 220 features of fictional job seekers who were veterans and their job application outcomes. Due to the extreme sparsity of the dataset, we performed several comprehensive data analytics tasks to gain a better understanding of the distribution of the 220 features in the dataset and designed several optimisation strategies to improve the consistency of the dataset. Furthermore, to create the ideal experimental conditions for our research, we also manipulated the original dataset and created three new datasets that contained particular biases (such as preferences for candidates with certain sports skills) or rationality issues (such as preferring interpersonal skills over technical skills).
Type Of Material Database/Collection of data 
Year Produced 2020 
Provided To Others? No  
Impact The dataset was core to an MSc thesis which explored the extent to which algorithmic explanations can provide "meaningful" information for people seeking jobs online. The findings have been critical to our core project development, in relation to how to mediate users' trust in online recruitment systems.
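The bias-injection step described above can be sketched as follows. This is a minimal, assumption-laden illustration: the field names, the 0–1 skill scores, and the weighting scheme are all hypothetical, standing in for the project's actual 220-feature dataset.

```python
import random

random.seed(0)  # reproducible fictional applicants

def make_applicant(aid):
    """Fictional veteran job applicant; field names are illustrative only."""
    return {
        "id": aid,
        "sports_skills": random.random(),        # 0..1 proxy score
        "interpersonal_skills": random.random(),
        "technical_skills": random.random(),
    }

applicants = [make_applicant(i) for i in range(500)]

def biased_outcome(app, sports_weight=0.7):
    """Inject a deliberate preference for sports skills into hiring outcomes,
    mimicking one of the manipulated dataset variants."""
    score = (sports_weight * app["sports_skills"]
             + (1 - sports_weight) * app["technical_skills"])
    return score > 0.5

hired = [a for a in applicants if biased_outcome(a)]
```

Varying `sports_weight` (or swapping which skills dominate the score) yields dataset variants with different, known biases for participants to detect.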
 
Description Chair of IEEE-SA P7003 Standard: Algorithmic Bias Considerations - Dr Ansgar Koene 
Organisation Institute of Electrical and Electronics Engineers (IEEE)
Country United States 
Sector Learned Society 
PI Contribution I proposed the P7003 standard for Algorithmic Bias Considerations as part of the IEEE Global Initiative for Ethics of Autonomous and Intelligent systems and now chair the IEEE working group for the development of the P7003 Standard for Algorithmic Bias Considerations. The work involves facilitating a monthly conference call, promoting the work to attract working group participants and providing the general outline for the standards documents. Liz Dowthwait from the UnBias/ReEnTrust project is the P7003 secretary, supporting my work by helping with the monthly agenda and conference minutes.
Collaborator Contribution The IEEE Standards Association (IEEE-SA) provides the liaison officer to assist with coordination of the P7003 working group with the wider IEEE Standards activities. IEEE-SA also provides the web-conferencing facilities (Join.me) and online document hosting/collaboration space (iMeet). The IEEE Global Initiative has been facilitating media engagement and policy engagement activities (e.g. the ACM/IEEE panel on Algorithmic Transparency and Accountability in Washington DC, where I was a panel member). From September 30 to October 3rd, IEEE-SA hosted a workshop for the P70xx standards working groups in Berlin (IEEE-SA paid for travel and accommodation).
Impact Dr Ansgar Koene was contacted by the ACM to participate in a panel discussion on Algorithmic Transparency and Accountability in Washington DC on 14th September 2017. https://www.acm.org/public-policy/algorithmic-panel. There was a write-up of the panel discussion, with follow-up interview with Ansgar "IEEE and ACM Collaborations on ATA", published in AI Matters: A Newsletter of ACM SIGAI, on 1 October 2017. Ansgar's role as chair and a summary of the P7003 Standard were mentioned in the article: "The Ethics of Artificial Intelligence for Business Leaders - Should Anyone Care?", in TechEmergence, on 9th December 2017. Ansgar was invited to publish a short paper on the IEEE P7003 activity in IEEE Technology and Society Magazine, "Algorithmic Bias: Addressing Growing Concerns", IEEE Technology and Society Magazine, Volume: 36, Issue: 2, June 2017. [DOI: 10.1109/MTS.2017.2697080]. An interview with Ansgar about P7003 was published in The Institute (The IEEE news source), on 12 September 2017, "Keeping Bias From Creeping Into Code".
Start Year 2017
 
Description Expert Focus Group leader for developing Algorithmic Bias certification criteria for the IEEE ECPAIS - Dr Ansgar Koene 
Organisation Institute of Electrical and Electronics Engineers (IEEE)
Department IEEE Standards Association
Country United States 
Sector Charity/Non Profit 
PI Contribution I was invited by the IEEE Ethics Certification Program for Autonomous and Intelligent Systems (ECPAIS) leadership to take on the role of Expert Focus Group leader for developing Algorithmic Bias certification criteria for the IEEE ECPAIS. Over a period of 8 months I led a team of 8 expert volunteers to develop a list of satisfaction criteria to serve as the basis for the IEEE ECPAIS Algorithmic Bias certification. IEEE is now preparing a service offering under which organisations will be able to request that their governance processes for the development of AI systems are certified on the basis of the satisfaction criteria my working group developed. This service is to be launched in the second half of 2020.
Collaborator Contribution IEEE coordinated the work, raised interest among stakeholders (e.g. the Finnish Ministry of Interior and the city of Vienna), organised face-to-face meetings and paid associated expenses.
Impact https://standards.ieee.org/news/2020/phase-1-ecpais.html
Start Year 2019
 
Description ISOC-UK "User Trust" (2018 - Still Active) 
Organisation Internet Society (ISOC)
Country United States 
Sector Charity/Non Profit 
PI Contribution Our collaboration with ISOC-UK was developed in 2018 during our work on the UnBias research project (EP/N02785X/1), when we ran two workshops hosted by ISOC-UK. The first workshop was a panel discussion on "Multi-Sided Trust in Multi-Sided Platforms" https://unbias.wp.horizon.ac.uk/2018/04/13/isoc-uk-horizon-der-panel-for-multi-sided-trust-on-multi-sided-platforms/ . The second workshop was a demonstration of the UnBias "Fairness Toolkit", with a session using the Awareness Cards https://unbias.wp.horizon.ac.uk/2018/11/23/workshop-on-algorithmic-awareness-building-for-user-trust-in-online-platforms/ . More recently, ISOC has invited the ReEnTrust team to run an ISOC-hosted event (an option being considered for future engagements).
Collaborator Contribution Invitation to ReEnTrust to run an ISOC-hosted event (e.g., workshops).
Impact The collaboration is multi-disciplinary due to the nature of ISOC as a multi-disciplinary membership organisation with members from a wide range of industry, civil-society and academic groups who are interested in an open and free internet.
Start Year 2018
 
Description Participation in CBI AI Working Group - Prof Michael Rovatsos 
Organisation Confederation of British Industry (CBI)
Country United Kingdom 
Sector Private 
PI Contribution Provided input to AI ethics guidelines for CBI membership.
Collaborator Contribution Developed AI ethics guidance for their membership.
Impact Further roundtables with CBI and their members and other stakeholders, e.g. Centre for Data Ethics and Innovation.
Start Year 2019
 
Description Participation in the IEEE-SA P2089: Standards for Age Appropriate Terms and Conditions 
Organisation Institute of Electrical and Electronics Engineers (IEEE)
Country United States 
Sector Learned Society 
PI Contribution I was invited by the 5Rights Foundation in late 2019 to be a member of the IEEE-SA Working Group developing Standards for Age Appropriate Terms and Conditions. Mainly, I am a member of the 10-member subgroup developing an age-appropriate presentation for children and young people. The group involves participants from other research institutions and the industrial and third sectors. So far the work has involved participating in a quarterly conference call to review existing approaches in support of this goal and to identify gaps and new approaches that support the group's goal.
Collaborator Contribution The IEEE Standards Association (IEEE-SA) provides the liaison officer to assist with the kick-off of the working group and coordination with the wider IEEE Standards activities. IEEE-SA also provides the web-conferencing facilities (zoom) and online document hosting/collaboration space (iMeet).
Impact The working group is highly multi-disciplinary, including not only academics from design, computer science, children's development and social sciences, but also legal practitioners, policy and public service providers.
Start Year 2019
 
Description Secretary of IEEE-SA P7003 Standard: Algorithmic Bias Considerations 
Organisation Institute of Electrical and Electronics Engineers (IEEE)
Country United States 
Sector Learned Society 
PI Contribution I support the work of the P7003 standard as part of the IEEE Global Initiative for Ethics of Autonomous and Intelligent Systems by organising and minuting the monthly working group meetings. The group (the IEEE working group for the development of the P7003 Standard for Algorithmic Bias Considerations) is chaired by Ansgar Koene, also a member of the UnBias project.
Collaborator Contribution The IEEE Standards Association (IEEE-SA) provides the liaison officer to assist with coordination of the P7003 working group with the wider IEEE Standards activities. IEEE-SA also provides the web-conferencing facilities (WebEx/join.me) and online document hosting/collaboration space (iMeet). The IEEE Global Initiative has been facilitating media engagement and policy engagement activities.
Impact Minutes of all meetings are available at http://sites.ieee.org/sagroups-7003/
Start Year 2017
 
Title Algorithm Playground 
Description A web-based platform to demonstrate how e-recruitment systems work and to explain how job applicants are ranked by e-recruitment algorithms. The platform was used to examine users' perceptions of e-recruitment systems and the impact of explanations on such perceptions.
Type Of Technology Webtool/Application 
Year Produced 2020 
Impact This software was used as a key research tool in various engagement activities with users to gather quantitative and qualitative data. The experiments and workshops conducted with it have shaped the methodological direction of technology development in the project. We plan to make a future, revised version of the system available online. 
URL http://psandbox.pythonanywhere.com/
 
Title Algorithmic Playground 
Description A web-based platform developed to enable citizens to play with algorithm inputs and observe changes in the results. It focused on a hotel booking scenario using different recommender system algorithms, including content-based and collaborative filtering.
Type Of Technology Webtool/Application 
Year Produced 2019 
Impact This software was used as a key research tool in various engagement activities with users to gather quantitative and qualitative data. The experiments and workshops conducted with it have shaped the methodological direction of technology development in the project. We plan to make a future, revised version of the system available online.
 
Title Fake booking website 
Description Based on a database of 100 hotels, the tool offers the possibility to book a hotel in Paris. However, each step of the booking is disrupted by controversial commercial practices (price timers, price changes, extra options, etc.).
Type Of Technology Webtool/Application 
Year Produced 2019 
Open Source License? Yes  
Impact This webtool has been used during interviews to measure the factors of trust breakdown in a hotel booking scenario. It has helped us to better understand the boundaries of trust in that scenario, enabling us to build an algorithm playground around hotel recommendation.
 
Title Mediation tool (prototype) 
Description A mediation tool which implements different mediation mechanisms that are to be tested in a study.
Type Of Technology Webtool/Application 
Year Produced 2021 
Open Source License? Yes  
Impact The implementation helps us to refine the mediation process. The tool will be used to measure the importance of mediation in rebuilding trust.
URL http://mediationtool.pythonanywhere.com/newsfeed/index.html
 
Description Industry Forum roundtable on 'Restoring trust in digital technology' 
Form Of Engagement Activity A formal working group, expert panel or dialogue
Part Of Official Scheme? No
Geographic Reach National
Primary Audience Industry/Business
Results and Impact Marina Jirotka was invited to participate in a panel discussion to give her views on restoring trust in digital technologies. This was an invitation-only event organised by the Industry Forum, a high-profile group that seeks to promote constructive dialogue between public policy makers, industry operating in the UK, and leading commentators. Details of the panel event can be read here: http://www.industry-forum.org/event/restoring-trust-in-digital-tech/. Marina introduced ideas about what it means to trust digital systems and the capacity for responsible practices that can foster trust. This led to discussion amongst those present about what kinds of practices can be put in place at the levels of design and policy.
Year(s) Of Engagement Activity 2019
URL http://www.industry-forum.org/event/restoring-trust-in-digital-tech/
 
Description "How does trust affect your experience of the Internet?" 
Form Of Engagement Activity Participation in an activity, workshop or similar
Part Of Official Scheme? No
Geographic Reach Local
Primary Audience Public/other audiences
Results and Impact Fifteen participants aged 16-25 took part in this activity, run at Nottingham City Council and open to the general public.
It was a hands-on activity aimed at exploring young people's online experiences when interacting with algorithm-driven platforms. Participants shared their views and experiences and expressed their willingness to take part in related activities (e.g., some have become members of the ReEnTrust advisory group).
Year(s) Of Engagement Activity 2019
 
Description "they don't really listen to people". Young people's concerns and recommendations for improving online experiences. Helen Creswick, Liz Dowthwaite, Ansgar Koene, Elvira Perez Vallejos, Virginia Portillo, Monica Cano and Christopher Woodard 
Form Of Engagement Activity A talk or presentation
Part Of Official Scheme? No
Geographic Reach International
Primary Audience Professional Practitioners
Results and Impact This presentation highlighted young people's (13-17 years old) concerns in relation to online issues of algorithmic bias, in particular users' lack of agency and feelings of disempowerment in young people's internet use, exacerbated by their experiences of online terms and conditions. The audience engaged with interesting questions about the applicability of these findings to the adult population.
Researchers were invited to submit an enhanced version of the research paper to a special issue of the Journal of Information, Communication and Ethics in Society, which has recently been accepted for publication but is not yet available online.
Year(s) Of Engagement Activity 2018
URL https://easychair.org/smart-program/ETHICOMP2018/2018-09-24.html#talk:78107
 
Description 'Adult Education in a Digital World'- Helen Creswick and Elvira Perez Vallejos 
Form Of Engagement Activity A talk or presentation
Part Of Official Scheme? No
Geographic Reach Regional
Primary Audience Public/other audiences
Results and Impact This event aimed to celebrate 100 years of adult education and to question how the digital age will influence the future of adult education. Attendees were aged 65 and over.
Our presentation outlined the challenges that older adults may face in establishing mechanisms that help them to build trust on the Internet. Members of the audience engaged with our research team, asking interesting questions and expressing interest in taking part in future related activities.
Year(s) Of Engagement Activity 2019
 
Description Aardman Workshops 'What's Up with Everyone?' (TrustScapes) 
Form Of Engagement Activity Participation in an activity, workshop or similar
Part Of Official Scheme? No
Geographic Reach National
Primary Audience Study participants or study members
Results and Impact A series of workshops (n=10) was organised to engage with young people and hear their feedback and suggestions for improving a website developed to increase young people's mental health literacy. The webpage contains resources relevant to mental health self-help seeking and five animated movies co-produced with industry partner Aardman Animations. The topic of discussion was the trustworthiness of these online resources.
Year(s) Of Engagement Activity 2021
URL http://www.whatsupwitheveryone.com
 
Description Aardman workshop 'What's Up with Everyone?' (Youth Jury) 
Form Of Engagement Activity Participation in an activity, workshop or similar
Part Of Official Scheme? No
Geographic Reach National
Primary Audience Study participants or study members
Results and Impact A series of workshops (n=10) was organised to engage with young people and hear their feedback and suggestions for improving five animated movies co-produced with industry partner Aardman Animations. ReEnTrust contributed to the development of one of the movies, titled 'Social Media', which explored the impact of the online world on young people's mental health.
Year(s) Of Engagement Activity 2021
URL http://www.whatsupwitheveryone.com
 
Description Algorithm Playground study with hotel booking scenario 
Form Of Engagement Activity Participation in an activity, workshop or similar
Part Of Official Scheme? No
Geographic Reach National
Primary Audience Public/other audiences
Results and Impact We organised a survey of the role of explanations in trust while using a booking website. The experiment used the Algorithm Playground we implemented; browsing activity was logged in a dedicated database, which was analysed alongside the answers to a questionnaire submitted by around 200 participants. Participant responses guided the subsequent development of further technological tools and experimental methodology, in particular with a new emphasis on explanations.
Year(s) Of Engagement Activity 2019
 
Description Algorithm, trust and data regulation 
Form Of Engagement Activity Participation in an open day or visit at my research institution
Part Of Official Scheme? No
Geographic Reach International
Primary Audience Industry/Business
Results and Impact A group of 25 Chinese academic, business and government representatives made a one-day research visit to Oxford. I was invited to present the ethics and AI research carried out in our research team and project. The presentation inspired many follow-up discussions about the process of data regulation and algorithm transparency policy development in the UK and Europe. Participants in the audience mentioned that these are key lessons for them to take on board in relation to corresponding developments in China.
Year(s) Of Engagement Activity 2019
 
Description Algorithmic awareness building for User Trust in online platforms. ISOC-UK England, London 
Form Of Engagement Activity Participation in an activity, workshop or similar
Part Of Official Scheme? No
Geographic Reach National
Primary Audience Third sector organisations
Results and Impact Interactive session aimed at exploring awareness building around the use of algorithms in online platforms, through the use of the UnBias Awareness Cards. Activities resulted in critical and civic thinking for exploring how decisions are made by algorithms, and the impact that these decisions may have on our lives and the lives of others. Participants engaged in interesting discussions and extra decks of cards were requested by some attendees.
Year(s) Of Engagement Activity 2018
URL https://unbias.wp.horizon.ac.uk/2018/11/23/workshop-on-algorithmic-awareness-building-for-user-trust...
 
Description Animation video titled Algorithms and US 
Form Of Engagement Activity Engagement focused website, blog or social media channel
Part Of Official Scheme? No
Geographic Reach International
Primary Audience Public/other audiences
Results and Impact The animation was created to explain the main messages from the ReEnTrust project to a general audience; the animation was considered to be an exemplary outline of the issues raised by AI/algorithms and was given wide public distribution by the company
Year(s) Of Engagement Activity 2021
URL https://youtu.be/BBuMsw5E_E0
 
Description Ansgar Koene featured speaker at Purdue conference: "Policies for Progress" 
Form Of Engagement Activity A talk or presentation
Part Of Official Scheme? No
Geographic Reach International
Primary Audience Policymakers/politicians
Results and Impact The Purdue Policy Research Centre at Purdue University held the "Policies for Progress" conference, exploring ways to bring together policymakers, industry leaders, not-for-profits, and academics to bring their collective expertise to bear on wicked problems. "Policies for Progress" was the capstone event for the Breaking Through: Developing Multidisciplinary Solutions to Global Grand Challenges research project funded by The Andrew W. Mellon Foundation.

Experts from four multidisciplinary teams shared findings and results of their groundbreaking work on the Breaking Through research project. Stakeholders who were integrated into these projects discussed the successes, benefits, as well as challenges in partnering with academia.

In recognition of our extensive work on multi-stakeholder engagement in our past and current projects Ansgar Koene was invited to speak about the work we did on the UnBias project, the associated IEEE P7003 Standard for Algorithmic Bias Considerations development and our current activities for ReEnTrust.
Year(s) Of Engagement Activity 2019
URL https://www.purdue.edu/breaking-through/
 
Description Appearance on "Brainwaves: Love, Life, and Algorithms" radio programme, BBC Scotland, February 2019 
Form Of Engagement Activity A broadcast e.g. TV/radio/film/podcast (other than news/press)
Part Of Official Scheme? No
Geographic Reach National
Primary Audience Public/other audiences
Results and Impact Contributed to radio programme that explored impact of algorithms on daily life on national radio with a wide audience reach.
Year(s) Of Engagement Activity 2019
URL https://www.bbc.co.uk/programmes/m0002b1y
 
Description Article " We may not cooperate with friendly machines" in Nature Machine Intelligence, November 2019 
Form Of Engagement Activity A magazine, newsletter or online publication
Part Of Official Scheme? No
Geographic Reach International
Primary Audience Public/other audiences
Results and Impact Published review of and commentary on ethical and methodological issues in a recently published Nature paper that explored whether humans are likely to collaborate with AI systems depending on whether they are aware their counterpart is human or not. The authors of the original article acknowledged the critique and this will inform their future methodology.
Year(s) Of Engagement Activity 2019
URL https://www.nature.com/articles/s42256-019-0117-1
 
Description Articles in the Oxford University Computer Science Inspired Research Magazine 
Form Of Engagement Activity A magazine, newsletter or online publication
Part Of Official Scheme? No
Geographic Reach International
Primary Audience Public/other audiences
Results and Impact Inspired Research is a twice-yearly newsletter published by the Department of Computer Science, University of Oxford. It is widely circulated to all alumni as well as other general public audiences. The newsletter reports the most impactful research outcomes from the department over a six-month period. Our publication provides a timely summary of our inputs to the ICO Code for Age Appropriate Design and a reflection on the anticipated impact of this new data protection regulation for safeguarding children's online safety, for both academic and non-academic audiences.
Year(s) Of Engagement Activity 2019
URL http://www.cs.ox.ac.uk/inspiredresearch/InspiredResearch-winter2019.pdf
 
Description Artificial Intelligence and its Applications Institute (AIAI) seminar on "Building and Rebuilding Trust in Algorithmic System" at University of Edinburgh, Edinburgh, United Kingdom 
Form Of Engagement Activity A talk or presentation
Part Of Official Scheme? No
Geographic Reach Local
Primary Audience Other audiences
Results and Impact The main focus of the talk was to present the findings of our studies in the ReEnTrust project to other researchers and academics at the AIAI, University of Edinburgh, and to get feedback on the findings.
Year(s) Of Engagement Activity 2021
URL https://web.inf.ed.ac.uk/aiai/events/seminars/18-jan-2021-gideon-ogunniye
 
Description Can we trust what we see online? Futurum online article 
Form Of Engagement Activity A magazine, newsletter or online publication
Part Of Official Scheme? No
Geographic Reach International
Primary Audience Schools
Results and Impact This article was produced by Futurum, a magazine and online platform aimed at inspiring young people to follow a career in the sciences, research and technology. For more information, teaching resources, and course and career guides, see www.futurumcareers.com
Year(s) Of Engagement Activity 2019
URL https://futurumcareers.com/can-we-trust-what-we-see-online
 
Description Contribution to 100+ Brilliant Women Conference chairing youth panel awards, Oxford 16th of Sept 
Form Of Engagement Activity Participation in an activity, workshop or similar
Part Of Official Scheme? No
Geographic Reach International
Primary Audience Undergraduate students
Results and Impact I was delighted to announce the winners from the schools' competition on AI
Year(s) Of Engagement Activity 2019
 
Description Dr Ansgar Koene Podcast interview with PassWOrd 
Form Of Engagement Activity A broadcast e.g. TV/radio/film/podcast (other than news/press)
Part Of Official Scheme? No
Geographic Reach National
Primary Audience Public/other audiences
Results and Impact https://www.mixcloud.com/FI_PassW0rd/pity-the-poor-children-for-they-know-not-what-you-do/
Year(s) Of Engagement Activity 2019
 
Description Dr Ansgar Koene interview by CEO of Zupervise.com 
Form Of Engagement Activity Engagement focused website, blog or social media channel
Part Of Official Scheme? No
Geographic Reach National
Primary Audience Industry/Business
Results and Impact Dr Koene was interviewed for a blog series by the CEO of Zupervise, an AI risk assessment start-up offering a real-time analytics platform, purpose-built for the three lines of defence to analyse, optimise and govern AI risks in the regulated enterprise:

What are your thoughts about the regulatory policy landscape on AI governance?
I think there's been a recognition that AI is being used in so many use-cases and therefore regulators that are focusing on multiple sectors need to gain some kind of understanding around how AI will potentially have an impact on the way in which current regulations are trying to provide safety & good operating practice in their sector.

At the moment we've got a bit of an exploration that's going on including on the question of whether there should be a new regulator focusing on AI exclusively, or should it be a case of different regulators needing to just be up-skilled and be empowered to deal with AI questions as is relevant to them. In-fact, the discussion in policy circles started from the point of view of AI at a conceptual level with the debates around AI as some form of automation of both decision making & human intellectual activity. We absolutely want to make sure that humans maintain agency, we want to ensure that there is human oversight.

The big focus of 2018-2019 was principles but now, in 2021 we're at the stage of how do you transform these principles into practical rules and regulations to deal with the challenges that arise from AI. It also becomes more necessary to start to think about the fact that, not all AI is the same thing - machine learning is different from other forms of AI, computer vision is different from natural language processing, which is different from recommendation systems.

Maybe we don't want to have a single regulator to cover all of these, maybe we actually do need to be focusing on each sector with tailored regulation. I think that conversation is ongoing, it hasn't really been resolved yet. In the UK, for instance, the data protection authority, the ICO has been tasked to a large extent to try to deal with questions to do with AI primarily around data privacy but they're being pushed to go beyond the remit of personal data so it's an ongoing discussion.

What are the biggest challenges in regulating AI?
I think one of the biggest challenges that is becoming visible is the question of how do we regulate the addition of an AI component to an existing process that needs governance. There are few exceptions like autonomous vehicles which may be conceived to be adding something completely new to the transportation mix. For example, with AI-enabled recruitment, we're simply automating part of the hiring process by pre-filtering with AI.

Primarily, in such a case of adding AI to an existing process, it's not really clear how to identify if this AI actually introduces new risks that need to be regulated differently. Let's take for instance the example of AI in hiring. At the core you're not allowed to discriminate based on race or gender or other non relevant factors when it comes to the hiring process and really whether or not you're doing this discrimination through an AI or you're doing it through human decision making is pretty much beside the point. What you are regulating is that there shouldn't be any discrimination and so that raises the question on whether we need to change anything in the regulation there or is it just a case of maybe we need a new process for providing the evidence that you are complying with the existing regulation, by applying the appropriate risk assessments.

Within the enterprise, who cares the most about such AI risks?
I guess it's currently still largely approached from the traditional compliance-led approach - the development teams are primarily focusing on achieving the functional requirements of the system and then we have compliance teams that are assessing whether you are compliant with various regulatory issues. There isn't really a structural introduction yet, I think, of assessing ethical risks or societal-impact kinds of risks. As part of the larger discussion around business ethics and ESG (environmental, social and governance), these kinds of questions need to be answered.

How does one get started with governing AI?
I think you start with a clear AI strategy. When you're creating your requirements set, do you have clear justifications for why you're making certain choices on the selection of the data sets that you're going to use. Further still, have you documented your process for how you've collected the data and how you've chosen which data set to use. To a large extent, this comes down to the ability to document what you've done and to provide justifications for these. I'm focusing on the documentation aspect here because this is the kind of challenge that we see now with the attempts at AI auditing, but then there is insufficient documentary evidence to certify anything.

What is your opinion on AI regulation proposed by academia and its applicability to enterprise AI use cases?
It's a very broad question as there are quite a number of approaches that are being taken in academia. There are some that are trying to get into technical methods within the computer science community. We've had various academics who have been trying to operationalise a definition of AI fairness.

On the one hand, this is something that you can build into a toolkit to introduce into the development cycle. On the other hand, it is also being criticised within academia as an insufficient understanding of the bigger picture on the need to provide transparency on the decision-making process. We have a part of the academic literature that is focusing on ethical concerns to identify application areas where AI is not the appropriate solution approach. These tend to address government and policymakers.

For instance, there is the conversation around when probabilistic, machine-led decision making is fundamentally not appropriate for a certain type of domain. In society, we can think of the criminal justice system, where, I would say, people should be judged based on what they did, not on whether they seem to fall into a population group that statistically has been shown to end up in prison more often. These kinds of discussions address government regulation more than they address enterprise.

Where is the AI regulation discourse leading to?
I think we are heading in a direction where these technologies are having a significant impact on people and on society.

Therefore, they need to be regulated in a similar way to other domains, like vaccines in healthcare or the safety measures we need for self-driving cars in transportation. As a result, this space will be transformed into something more strongly regulated, with certification regimes. It may still take a little time: the countries leading the process of developing AI regulation - Singapore, Europe to a large extent, and also the UAE, which is exploring this space - are currently performing regulatory gap analyses and drafting new regulation proposals.

This is probably still going to take at least a year to crystallise, and then another year or so to be formulated into clear regulations. Within that period we will see the publication of more technical standards in this space. The ISO/IEC joint technical committee on AI, as well as the AI work of the IEEE-SA, has really picked up steam and is likely to start publishing standards in the coming year. We will soon have a bigger body of guidance on what best practice looks like. Also, in the US, we are seeing greater attention to the question of how an assessment of these systems should work and what role benchmarks can play.

I expect that in about three years' time we will be looking at a completely different landscape for AI regulation.
Year(s) Of Engagement Activity 2021
URL https://zupervise.com/the-future-of-ai-regulation
 
Description Dr Ansgar Koene interview with Digital Future Society blog post 
Form Of Engagement Activity Engagement focused website, blog or social media channel
Part Of Official Scheme? No
Geographic Reach National
Primary Audience Public/other audiences
Results and Impact Dr Koene was invited for an interview on the risks and use of black box algorithms in HR by Digital Future Society in Feb 2020. The blog post went live in Aug 2020 and is available to interested parties.
Year(s) Of Engagement Activity 2020
URL https://digitalfuturesociety.com/qanda/ansgar-koene-and-the-risks-of-the-use-of-black-box-algorithms...
 
Description Dr Ansgar Koene invited as a panellist in the European Platform of Regulatory Authorities (EPRA) Plenary 2 Podcast 
Form Of Engagement Activity A broadcast e.g. TV/radio/film/podcast (other than news/press)
Part Of Official Scheme? No
Geographic Reach International
Primary Audience Policymakers/politicians
Results and Impact Dr Ansgar Koene was invited to participate as a panellist in the EPRA Plenary 2 Podcast - Media Plurality in the age of algorithms: Transparency and Trust - the user's perspective in online content navigation. The podcast will be available on the EPRA website from Thursday 12th November.

During the Podcast, EPRA Vice Chairperson Mari Velsand invited panellists to discuss:

the concepts of transparency, trust and critical engagement in the context of journalistic news content increasingly curated and delivered to audiences by means of algorithms, and
the role of regulators, journalists and the tech industry in ensuring that online news consumption better supports pluralistic and engaged democratic discourse.
Year(s) Of Engagement Activity 2020
URL https://shows.acast.com/epra/episodes/transparency-and-trust-the-users-perspective-in-online-conte
 
Description Dr Ansgar Koene invited as a webinar Panel Member by Logically (a fact checking company) 
Form Of Engagement Activity A broadcast e.g. TV/radio/film/podcast (other than news/press)
Part Of Official Scheme? No
Geographic Reach National
Primary Audience Public/other audiences
Results and Impact Dr Ansgar Koene was invited as a panel member by Logically ('The Content Leads') to participate in a webinar broadcast on YouTube:

Fake news has entered the global lexicon in the last four years. Online platforms unite communities from across continents; however, greater interconnectivity has also broadened the scope of mis/disinformation. Following sessions on misinformation in India and the US, this session by Logically looks at global trends and seeks to identify how the confluence of misinformation, journalism and social media may converge or diverge in the coming years. It will seek to cover a broad range of issues, including:

How does fake news influence international relations between countries?

Does fake news predominantly come from a small group of malicious global actors? To what extent is the public responsible for innocently disseminating and pollinating such information?

In using AI to tackle fake news, how problematic are the challenges of inscrutability (the models defy human understanding) and non-intuitiveness (why do the statistical relationships exist as they do?) in applying transparency effectively?

Are certain countries more/less immune to fake news? Is the Finnish model of teaching school pupils to spot slippery information the way to go?
Or is the recent Brazilian Bill passed by the Senate the way to go? What trade-offs does it present for privacy and freedom of expression?

How might fake news evolve and evade identification in the coming years? What is being done to combat this?

Speakers:

- Damian Collins MP, Chair, DCMS Subcommittee on Disinformation
- Marianna Spring, Specialist Disinformation and Social Media Reporter, BBC News
- Viji Alles, Presenter, BBC Radio 4 (Moderator)
- Lyric Jain, Founder and CEO, Logically
- Ansgar Koene, Global AI Ethics and Regulatory Leader, EY
- Hazel Baker, Global Head of UGC Newsgathering, Reuters
Year(s) Of Engagement Activity 2020
URL https://youtu.be/woD2ruvbsPo
 
 
Description Dr Ansgar Koene invited as an expert panel member by Capgemini ELITE management team on a webinar entitled "Ethics in New Normal" 
Form Of Engagement Activity A talk or presentation
Part Of Official Scheme? No
Geographic Reach International
Primary Audience Professional Practitioners
Results and Impact Dr Koene participated as an expert panel member in an online webinar on Ethics in New Normal organised by Capgemini (India) - a global leader in consulting, digital transformation, technology and engineering services: https://www.capgemini.com/. Email feedback from Capgemini:

"I am sending this mail on behalf of the ELITE Engagement team (Hemanth, Shivani and myself) that conducted the 2nd edition of the Management Symposium "La Table Ronde". We are delighted to inform you that this webinar on Ethics in New Normal was a resounding success. We had an audience of more than 3.2k, which is the highest recorded for La Table Ronde yet.

The team would like to thank you for your contribution in the insightful discussion. Your inputs around ethics on the technology front and sentiments of the normal population and organizations in understanding/assessing the ethical scope of AI, were a great value addition and very much liked by the audience.

In these times, when the ecosystem is changing and all the contributors are becoming interconnected, we believe it is crucial to learn from other's experiences. Hence, we started this initiative and wish to expand our reach even further. It will only be possible with the support of generous industry veterans like yourself who are willing to contribute their precious time.

A big thank you. We strongly believe that everyone who attended the event had some valuable insights that they could takeaway with them and ponder about. We hope to have many more events like this on many more contemporary and important topics and that our association continues in making such events happen."
Year(s) Of Engagement Activity 2020
 
Description Dr Ansgar Koene invited by AI for Business 
Form Of Engagement Activity A broadcast e.g. TV/radio/film/podcast (other than news/press)
Part Of Official Scheme? No
Geographic Reach International
Primary Audience Industry/Business
Results and Impact Dr Ansgar Koene was invited by AI for Business to do a long-form interview for their website: https://www.aiforbusiness.net/ai-interviews
Year(s) Of Engagement Activity 2020
URL https://youtu.be/b28Vn-QskHc
 
Description Dr Ansgar Koene invited by AIBE Speaker series as a panel member on webinar entitled AI Ethics Beyond Theory and into practical application 
Form Of Engagement Activity A formal working group, expert panel or dialogue
Part Of Official Scheme? No
Geographic Reach International
Primary Audience Professional Practitioners
Results and Impact Artificial Intelligence in Business and Ethics (AIBE) is the world's largest non-profit AI summit and the annual rendezvous of AI experts, students and professionals. Its mission is to enhance public understanding of artificial intelligence and data-driven technologies.

Dr Ansgar Koene was invited as an expert AI ethics panel member to explore the following topic:

Building trust in AI technologies by taking an ethical approach to AI adoption is a key enabler for businesses to succeed. Scaling the use of AI, however, has led to diverse ethical challenges. For example, autonomous vehicle incident liability, biases in automated recruitment software and substandard AI-based grading algorithms in education.

AI makes existing ethical challenges more urgent and creates new challenges. This panel will discuss how we move beyond the discussions, to begin applying and scaling ethics in AI projects across business, government, society and education.

Discussions will identify the current gap between theory and practice of governance in AI and draw on this panel's experiences of building and implementing ethical & responsible AI approach into teams, governance, and processes.
Year(s) Of Engagement Activity 2020
URL https://www.eventbrite.co.uk/e/ai-ethics-beyond-theory-and-into-practical-application-tickets-122814...
 
Description Dr Ansgar Koene invited by Humans of AI YouTube channel for contribution 
Form Of Engagement Activity A broadcast e.g. TV/radio/film/podcast (other than news/press)
Part Of Official Scheme? No
Geographic Reach International
Primary Audience Public/other audiences
Results and Impact Dr Koene contributed to the Humans of AI series of talks about AI technology with people innovating in that area. Ansgar's presentation - @IEEE @EY - Harnessing AI as a tool, just like we did with electricity
Year(s) Of Engagement Activity 2020
URL https://www.youtube.com/watch?v=t5Y2dwsODUI
 
Description Dr Ansgar Koene invited by UNESCO to participate in the Virtual Regional Consultation on the first draft of the Recommendation on the Ethics of Artificial Intelligence (AI) on 27 - 28 July 2020 
Form Of Engagement Activity A formal working group, expert panel or dialogue
Part Of Official Scheme? No
Geographic Reach International
Primary Audience Policymakers/politicians
Results and Impact Dr Ansgar Koene was invited as below: On behalf of UNESCO, the Kingdom of the Netherlands and the Rathenau Instituut we cordially invite you to participate in the Virtual Regional Consultation on the first draft of the Recommendation on the Ethics of Artificial Intelligence (AI) on 27 - 28 July 2020.

UNESCO has embarked on a two-year process to elaborate the first global standard-setting instrument on the ethics of AI. The first draft of the Recommendation has been produced by an Ad hoc Expert Group (AHEG) appointed by the Director-General of UNESCO. The Virtual Regional Consultation for the Europe Region (group 1) is part of a series of multi-stakeholder consultations worldwide between 7 July and 8 August 2020. Based on the feedback received during the multi-stakeholder consultation process, the Ad hoc Expert Group will revise the Recommendation until September 2020.

The main objective of the consultations is to discuss the first draft text of the Recommendation on the Ethics of AI and to include different cultural values and address various regional concerns, in order to develop an inclusive and pluralistic global instrument. In this regional consultation, we highly welcome representatives from governments, academia, the private sector, civil society and youth in Western European and North American Member States. Your participation will help to enrich the draft Recommendation with your views on how to ensure an ethical use of AI around the world. We would like to ask you to carefully read the guidelines for participants attached to find more information about how to prepare for the consultation as well as guiding principles for our discussions.
Year(s) Of Engagement Activity 2020
URL https://en.unesco.org/artificial-intelligence/ethics
 
Description Dr Ansgar Koene invited by the WEF Global Shapers Brussels Hub to participate in an online event AI Ethics and the future of work 
Form Of Engagement Activity Participation in an activity, workshop or similar
Part Of Official Scheme? No
Geographic Reach International
Primary Audience Professional Practitioners
Results and Impact Dr Ansgar Koene participated in an online webinar organised by the Global Shapers Brussels Hub entitled AI Ethics and the Future of Work (http://globalshapersbrussels.com/) - https://fb.watch/29LNWSlSqk/
Year(s) Of Engagement Activity 2020
 
Description Dr Ansgar Koene invited to participate in Applied AI - Women@EIT (European Institute of Innovation and Technology; Women at EIT: https://women.eitalumni.eu/) 
Form Of Engagement Activity Participation in an activity, workshop or similar
Part Of Official Scheme? No
Geographic Reach International
Primary Audience Professional Practitioners
Results and Impact Women at EIT (https://women.eitalumni.eu/) hosted a workshop at the EIT Alumni Connect event (https://eit.europa.eu/our-communities/eit-alumni/eit-alumni-connect-2020) on the 28th of November at 4.30-5.30 pm CET. The workshop's main aim was to raise awareness of data bias regarding gender equality in AI in relation to responsible production and consumption. The focus was on how we can use the opportunity of AI and data to become more gender equal (implementing SDG 5) and at the same time reach our sustainability goals. Discussions covered what companies can do and think about, and what the consequences are if we don't take this into consideration when entering a new era of AI.

Dr Ansgar Koene participated as a panel member; the panel provided different perspectives on the topic during the one-hour discussion.
Year(s) Of Engagement Activity 2020
URL https://eit.europa.eu/our-communities/eit-alumni/eit-alumni-connect-2020
 
Description Dr Ansgar Koene invited to participate in Dialogues with Asian Pathfinders 
Form Of Engagement Activity A broadcast e.g. TV/radio/film/podcast (other than news/press)
Part Of Official Scheme? No
Geographic Reach International
Primary Audience Professional Practitioners
Results and Impact Dr Ansgar Koene was invited by Asian Pathfinders (LinkedIn: https://www.linkedin.com/company/53193591) as an expert in Ethics and AI to participate in a webinar on 26 Nov 2020 entitled Adaptability of AI - security and challenges:

1) Ethics in AI - Dr. Ansgar Koene
Global AI Ethics & Regulatory Leader at EY, London

2) Military Dimensions of AI - Husanjyot Chahal
Research Analyst at Georgetown's Center for Security and Emerging Technology (CSET), Washington DC

3) Governance & AI - Walter M. Pasquarelli
Consultant, Oxford Insights, London

Charumati Haran, Student of Master of Public Policy at Hertie School in Berlin, Germany, will be moderating the session.

Date and Time:
Thursday, November 26, 2020
05:30PM - 06:45PM(IST) / 12:00PM - 01:15PM(GMT)
Year(s) Of Engagement Activity 2020
URL https://us02web.zoom.us/meeting/register/tZAkcOmuqToiEtLyufetciLDk-t1lLePGKuv
 
Description Dr Ansgar Koene participation in IEEE webinar 
Form Of Engagement Activity A talk or presentation
Part Of Official Scheme? No
Geographic Reach International
Primary Audience Professional Practitioners
Results and Impact Dr Ansgar Koene was invited to participate in an on-demand webinar entitled 'How to design a digital world where children can thrive':

"Our expert panel discusses how the new IEEE Standard for an Age-Appropriate Digital Services Framework Based on the 5Rights Principles for Children (IEEE 2089™-2021) will equip organizations with the tools to put young people's best interests at the heart of the design of digital products and services. By following the practical steps outlined in this new framework, designers can ensure that children are catered for in the digital world"
Year(s) Of Engagement Activity 2022
URL https://engagestandards.ieee.org/Childrens-2089-webinar.html?utm_source=email&utm_medium=eblast&utm_...
 
Description Dr Ansgar Koene invited to moderate a breakout session at the UNICEF Global Forum on AI for Children 
Form Of Engagement Activity A talk or presentation
Part Of Official Scheme? No
Geographic Reach International
Primary Audience Other audiences
Results and Impact Dr Ansgar Koene participated as a panellist in a breakout session entitled "Ensure inclusion of and for children" during the UNICEF Global Forum on AI for Children on the 1st December
Year(s) Of Engagement Activity 2021
URL https://www.unicef.org/globalinsight/featured-projects/ai-children
 
Description Dr Koene approached by RAND Europe, a not-for-profit research organisation that helps to improve policy and decision making through research and analysis 
Form Of Engagement Activity A formal working group, expert panel or dialogue
Part Of Official Scheme? No
Geographic Reach International
Primary Audience Third sector organisations
Results and Impact RAND Europe has been commissioned by Microsoft Belgium to review the landscape, and bring together evidence, on the use of labelling initiatives and codes of conduct for AI applications. The work will focus on identifying relevant examples of AI labelling schemes and initiatives, and codes of conduct, for the ethical and safe development of AI; and carry out a comparative analysis of the different examples to articulate common themes as well as notable divergences.

As part of this work, we are conducting scoping consultations with experts involved in or with knowledge of developments within the wider AI accountability (including labelling, codes of conduct) ecosystem. Given your experience in the field, RAND would greatly appreciate the opportunity to conduct a brief interview with you on this subject. The specific aim of the consultation would be to:

• Gather your general views on the use of AI labelling schemes and initiatives, and codes of conduct, for the ethical and safe development of AI
• Identify potentially relevant literature on this topic that we should consider (this can include academic literature or relevant reports from organisations)
• Identify organisations and individuals who might be able to provide us with more information about AI labelling initiatives and codes of conduct
Year(s) Of Engagement Activity 2021
 
Description Dr Koene chair - panel session: IEEE SSCI 
Form Of Engagement Activity A talk or presentation
Part Of Official Scheme? No
Geographic Reach International
Primary Audience Industry/Business
Results and Impact Dr Koene was invited to participate in the IEEE SSCI 2021 symposium - an event to provide academia and industry with an overview of the current ethical AI landscape and to discuss and debate how to establish sustainably trustworthy and responsible research and innovation.

Discussion themes include (but are not limited to):

The impact of emerging global legislation on data and AI ethics on conducting AI research and innovation.
Responsibility and Accountability in the AI decision making
Algorithm bias in machine learning
Can Industry Self-regulation Deliver 'Ethical AI'?
Sustainable and Responsible AI
Citizen involvement in conceptualisation of AI products and services to build trust.
Year(s) Of Engagement Activity 2021
URL https://attend.ieee.org/ssci-2021/panel-session-using-ai-to-establish-sustainably-trustworthy-and-re...
 
Description Dr Koene invited to speak at Data Governance Day hosted by Ernst & Young 
Form Of Engagement Activity A talk or presentation
Part Of Official Scheme? No
Geographic Reach International
Primary Audience Professional Practitioners
Results and Impact Dr Ansgar Koene presented at a moderated session on online safety and its impact on children, hosted by Ernst & Young (late January). The panel included a representative from Childnet International, who used the occasion to advertise Safer Internet Day to the DPOs and compliance officers attending from various industries.
Year(s) Of Engagement Activity 2021
 
Description Dr Koene presented at a 'Facial Recognition Technology: Challenges for International Collaboration & Governance' workshop run by Utrecht University 
Form Of Engagement Activity A talk or presentation
Part Of Official Scheme? No
Geographic Reach International
Primary Audience Postgraduate students
Results and Impact Dr Ansgar Koene presented 'Developments for AI Governance through Regulation and Standards: Globally Coordinated Deliberative Approaches vs. Reactive Policy Making' in the session 'Global policies on FR technology and related international collaboration' on 17th November. The event was live-streamed online: https://livestream.acsaudiovisual.com/uu20211117
Year(s) Of Engagement Activity 2021
URL https://livestream.acsaudiovisual.com/uu20211117
 
Description Dr Koene provided comments to support an article in Protocol.com 
Form Of Engagement Activity A press release, press conference or response to a media enquiry/interview
Part Of Official Scheme? No
Geographic Reach International
Primary Audience Public/other audiences
Results and Impact Dr Ansgar Koene was interviewed by Protocol - a media company - for a news article related to Google's firing of leading AI ethics researcher Timnit Gebru.
https://www-protocol-com.cdn.ampproject.org/c/s/www.protocol.com/amp/timnit-gebru-fired-ethics-google-2649129371
Year(s) Of Engagement Activity 2020
URL https://www-protocol-com.cdn.ampproject.org/c/s/www.protocol.com/amp/timnit-gebru-fired-ethics-googl...
 
Description ESRC Annual Festival of Social Sciences, Nottingham 2018 
Form Of Engagement Activity A formal working group, expert panel or dialogue
Part Of Official Scheme? No
Geographic Reach Local
Primary Audience Public/other audiences
Results and Impact "Do you trust internet platforms?" Participatory and hands-on activity to help internet users explore their online experiences when interacting with algorithm-driven platforms. We ran two workshops, for people aged 16-25 and those aged 65 and over, in which a total of 17 participants took part.
This activity constituted a pilot study as part of ReEnTrust, an interdisciplinary research project between the Universities of Nottingham, Oxford and Edinburgh that explores new technological opportunities to enhance and rebuild users' trust, and to identify how this may be achieved in ways that are user-driven and responsible. Preliminary data collected from this study has been extremely valuable and has helped refine particular research topics within the project. Participants in this activity also expressed their willingness to contribute to follow-up interviews and to be part of the ReEnTrust advisory group.
Year(s) Of Engagement Activity 2018
URL https://www.horizon.ac.uk/reentrust-call-for-participants/
 
Description Ethical Hackathon 
Form Of Engagement Activity Participation in an activity, workshop or similar
Part Of Official Scheme? No
Geographic Reach Local
Primary Audience Postgraduate students
Results and Impact An 'ethical hackathon' is a novel event developed by the Human Centred Computing group at the University of Oxford. It takes a twist on the traditional hackathon format by challenging groups of participants to identify ways to embed ethical considerations into the processes of technical design and development. The event in October 2020 involved postgraduate students at the Horizon CDT, University of Nottingham, and due to COVID-19 restrictions was run online on the 22nd and 23rd. At the start of the session Dr Liz Dowthwaite (facilitator) gave an introduction to algorithmic controversies and a summary of the key work within UnBias, ReEnTrust and TASHub within Horizon, divided the students into four groups, and invited them to discuss issues connected to algorithmic discrimination, bias and governance. After these activities were completed, students worked in their groups on a design challenge. The following day the teams presented their designs in front of a panel of judges, who gave feedback on the presentations, and prizes were awarded.
Year(s) Of Engagement Activity 2020
 
Description Ethical Hackathon 
Form Of Engagement Activity Participation in an activity, workshop or similar
Part Of Official Scheme? No
Geographic Reach Local
Primary Audience Postgraduate students
Results and Impact An 'ethical hackathon' is a novel event developed by the Human Centred Computing group at the University of Oxford. It takes a twist on the traditional hackathon format by challenging groups of participants to identify ways to embed ethical considerations into the processes of technical design and development. The event in October 2019 involved postgraduate students at the Horizon CDT, University of Nottingham, in a day-long session. At the start of the session leaders Liz Dowthwaite and Menisha Patel introduced key project themes and invited students to discuss issues connected to algorithmic controversies and governance. After the initial themes had been introduced and activities completed, students were put into groups and set a design challenge to work on. At the end of the session the teams presented their designs in front of a panel of judges. The judges gave feedback on the presentations and prizes were awarded for the best designs.
Year(s) Of Engagement Activity 2019
 
Description Ethical Hackathon (two day- long events- Oxford and Nottingham) 
Form Of Engagement Activity Participation in an activity, workshop or similar
Part Of Official Scheme? No
Geographic Reach National
Primary Audience Postgraduate students
Results and Impact An 'ethical hackathon' is a novel event developed by the Human Centred Computing group at the University of Oxford. It takes a twist on the traditional hackathon format by challenging groups of participants to identify ways to embed ethical considerations into the processes of technical design and development. The UnBias team ran ethical hackathons in November 2018 and January 2019 involving postgraduate students at the Horizon CDT, University of Nottingham, and the Cyber Security CDT, University of Oxford. These took the form of day-long sessions. At the start of each session leaders Helena Webb and Menisha Patel introduced key project themes and invited students to discuss issues connected to algorithmic controversies and governance. During the Nottingham ethical hackathon discussion was also facilitated by Liz Dowthwaite. After the initial themes had been introduced and activities completed, students were put into groups and set a design challenge to work on. At the end of each session the teams presented their designs in front of a panel of judges from the UnBias team. The judges gave feedback on the presentations and prizes were awarded for the best designs.
Year(s) Of Engagement Activity 2018,2019
 
Description Ethical Hackathon' 21 
Form Of Engagement Activity Participation in an activity, workshop or similar
Part Of Official Scheme? No
Geographic Reach Local
Primary Audience Postgraduate students
Results and Impact An 'ethical hackathon' is a novel event developed by the Human Centred Computing group at the University of Oxford. It takes a twist on the traditional hackathon format by challenging groups of participants to identify ways to embed ethical considerations into the processes of technical design and development. The event took place on the 20th and 21st of October 2021 and involved postgraduate students at the Horizon CDT, University of Nottingham. It was facilitated by Dr Liz Dowthwaite and Dr Helena Webb (Horizon Institute, UoN), who gave an introduction to algorithmic controversies and a summary of the key work within UnBias, ReEnTrust and TASHub within Horizon, divided the students into four groups, and invited them to discuss issues connected to algorithmic discrimination, bias and governance. After these activities were completed, students worked in their groups on a design challenge. The following day the teams presented their designs in front of a panel of judges: Dr Virginia Portillo (Horizon, UoN), Dr James Sprinks (Earthwatch Europe) and Dr Laurence Brooks (DMU), who gave feedback on the presentations, and prizes were awarded.
Year(s) Of Engagement Activity 2021
 
Description Ethics of AI panel at the Oxford UIDP summit 
Form Of Engagement Activity Participation in an activity, workshop or similar
Part Of Official Scheme? No
Geographic Reach National
Primary Audience Industry/Business
Results and Impact Marina Jirotka was a guest speaker on a panel discussing AI ethics. This was a special panel session run at the 2019 Oxford UIDP summit, which brought together high level staff in industry, academia and policy. As part of the panel she spoke about the RoboTIPS project and opportunities for university-industry-policy collaboration in responsible innovation.
Year(s) Of Engagement Activity 2019
URL https://uidp.org/event/oxford_uidp_summit/
 
Description Festival of Science and Curiosity (FOSAC), Nottingham City Library 
Form Of Engagement Activity Participation in an activity, workshop or similar
Part Of Official Scheme? No
Geographic Reach Local
Primary Audience Public/other audiences
Results and Impact Drop-in sessions to introduce people to the UnBias Awareness Cards and to discuss issues of online fairness. This activity was part of a wider outreach event, and was run as part of the Impact Exploration Grant Award. We engaged with approximately 50 people, mostly families. Three decks of cards were requested by people who worked in the education sector.
Year(s) Of Engagement Activity 2019
URL https://www.horizon.ac.uk/13229-2/
 
Description Hosting Webinar on rebuilding and enhancing trust in algorithms 
Form Of Engagement Activity Participation in an activity, workshop or similar
Part Of Official Scheme? No
Geographic Reach National
Primary Audience Policymakers/politicians
Results and Impact On Wednesday 2nd December 2020, the All-Party Parliamentary Group on Data Analytics and the ReEnTrust project were delighted to host an online briefing on rebuilding and enhancing trust in algorithms. The event was chaired by APGDA Chair, Daniel Zeichner MP, and included contributions from a number of senior researchers from the ReEnTrust project following the publication of their recent policy paper. The event opened up discussions with attendees regarding how data trust issues relate as much to the platform or institution as they do to the technology used. The event was followed up by a formal policy report launched in early 2021.
Year(s) Of Engagement Activity 2020
URL https://www.policyconnect.org.uk/news/apgda-and-reentrust-host-webinar-rebuilding-and-enhancing-trus...
 
Description IEEE P70XX Working Groups writing sessions 
Form Of Engagement Activity A formal working group, expert panel or dialogue
Part Of Official Scheme? No
Geographic Reach International
Primary Audience Professional Practitioners
Results and Impact 3-Day meeting of delegates from all the IEEE P70xx working group members, to interact with different working groups and carry out intensive writing sessions for the standards.
Year(s) Of Engagement Activity 2019
 
Description Institute for Policy and Engagement launch- University of Nottingham. 
Form Of Engagement Activity Participation in an activity, workshop or similar
Part Of Official Scheme? No
Geographic Reach Local
Primary Audience Professional Practitioners
Results and Impact The aim of the event was to raise the profile of the Institute with colleagues, particularly academics. In particular, it was an opportunity for academic colleagues already engaged in policy and public engagement work to share their work with their peers.

Members of the UnBias/ReEnTrust team were invited to be one of the policy impact/public engagement stars of the event, and a slide summarising our main policy impact/public engagement outputs was on display during the event. Our work was also highlighted by Professor Dame Jessica Corner during her presentation, being one of the 4 policy impact/public engagement stars out of the 29 exhibited at the event.
Year(s) Of Engagement Activity 2019
URL http://blogs.nottingham.ac.uk/researchexchange/2019/02/12/institute-for-policy-and-engagement-launch...
 
Description Interview conducted with Dr Ansgar Koene leading to a feature in an article in L'intelligence artificielle à l'heure de la transparence algorithmique | Les Echos | 09/03/2020 
Form Of Engagement Activity A press release, press conference or response to a media enquiry/interview
Part Of Official Scheme? No
Geographic Reach International
Primary Audience Public/other audiences
Results and Impact Dr Koene was approached by a journalist from Les Echos and provided an interview which featured in the article "L'intelligence artificielle à l'heure de la transparence algorithmique", published 9/3/2020.
Year(s) Of Engagement Activity 2020
URL https://www.lesechos.fr/idees-debats/sciences-prospective/lintelligence-artificielle-a-lheure-de-la-...
 
Description Invited departmental seminar on "The Road to Safe and Trusted AI" at Kings College London 
Form Of Engagement Activity A talk or presentation
Part Of Official Scheme? No
Geographic Reach Local
Primary Audience Other audiences
Results and Impact An invited seminar was given to an academic staff and student audience which prompted further involvement in their activities, e.g. invitations to advisory boards in their research projects and as a speaker at their CDT summer school.
Year(s) Of Engagement Activity 2020
 
Description Invited talk Toward Ethical AI (not more AI ethics), Situated Computing and Interaction Lab, University of Muenster, Germany, May 2019 
Form Of Engagement Activity A talk or presentation
Part Of Official Scheme? No
Geographic Reach Local
Primary Audience Postgraduate students
Results and Impact Presented an introductory talk on ethical AI to an interdisciplinary group of around 40 postgraduate students and researchers.
Year(s) Of Engagement Activity 2019
 
Description Invited talk at Edinburgh Napier University, Edinburgh, UK, November 2019 
Form Of Engagement Activity A talk or presentation
Part Of Official Scheme? No
Geographic Reach Local
Primary Audience Professional Practitioners
Results and Impact Gave a talk on Democratic Self-regulation and Fairness in Future Cyber Societies at a School of Computing seminar to an audience of about 20, which led to further discussions on future research collaborations.
Year(s) Of Engagement Activity 2019
 
Description Invited talk at Mishcon de Reya LLP, London 
Form Of Engagement Activity A talk or presentation
Part Of Official Scheme? No
Geographic Reach Regional
Primary Audience Industry/Business
Results and Impact An invited presentation at a London law firm, Mishcon de Reya LLP, talking about algorithms and trust. 30-50 legal practitioners attended the 2-hour event, participating in a practical session and debate about algorithms and trust, particularly from a legal perspective.
Year(s) Of Engagement Activity 2019
 
Description Invited talk at the AI@Oxford 
Form Of Engagement Activity A talk or presentation
Part Of Official Scheme? No
Geographic Reach National
Primary Audience Industry/Business
Results and Impact I was invited to give a presentation in the "Impact of Trust in AI" track at AI@Oxford 2019. The talk was attended by more than 50 participants from various sectors and was followed up by several requests for further information from industrial sectors.
Year(s) Of Engagement Activity 2019
URL https://innovation.ox.ac.uk/innovation-news/events/aioxford-conference/conference-agenda/
 
Description Invited talk on Algorithmic Fairness, BHCC Symposium, Sheffield, UK, October 2019 
Form Of Engagement Activity A talk or presentation
Part Of Official Scheme? No
Geographic Reach International
Primary Audience Professional Practitioners
Results and Impact Gave invited talk in Symposium on Biases in Human Computation & Crowdsourcing to an interdisciplinary audience of around 40 academics and postgraduate students.
Year(s) Of Engagement Activity 2019
URL https://sites.google.com/sheffield.ac.uk/bhcc2019/program
 
Description Invited talk on Ethical Design of AI Systems at Asser Institute Winter Academy AI and International Law, The Hague, The Netherlands 
Form Of Engagement Activity A talk or presentation
Part Of Official Scheme? No
Geographic Reach International
Primary Audience Professional Practitioners
Results and Impact Gave an introductory talk on ethical AI and AI ethics to a legal/public policy audience at an international research school as an invited speaker.
Year(s) Of Engagement Activity 2020
URL https://www.asser.nl/about-the-institute/asser-today/save-the-date-2020-winter-academy-on-artificial...
 
Description Keynote Speaker to Royal College of Psychiatrists Annual Conference. Talk title 'Internet Addiction or Persuasive Design' 
Form Of Engagement Activity A talk or presentation
Part Of Official Scheme? No
Geographic Reach International
Primary Audience Professional Practitioners
Results and Impact After the presentation I was invited to contribute to the RCPsych report titled 'Technology use and the mental health of children and young people'.
Year(s) Of Engagement Activity 2019
 
Description Keynote at Coalesce 21 conference 
Form Of Engagement Activity A talk or presentation
Part Of Official Scheme? No
Geographic Reach International
Primary Audience Other audiences
Results and Impact Dr Ansgar Koene was invited to give a keynote at the Coalesce 21 Conference of the Goa Institute of Management.
Year(s) Of Engagement Activity 2021
URL https://dare2compete.com/o/algorithmic-bias-diversity-and-inclusion-webinar-by-mr-ansgar-koene-ernst...
 
Description Lets Talk About Tech Campaign video 
Form Of Engagement Activity Engagement focused website, blog or social media channel
Part Of Official Scheme? No
Geographic Reach National
Primary Audience Public/other audiences
Results and Impact Aligning with Children's Mental Health Week and in the run-up to Internet Safety Day on 11th February, we ran a campaign to showcase our visionary research around internet safety, algorithms, and the impact on children's mental health and wellbeing. Digital mental health is a growing area of research at Nottingham, and in addition to supporting REF, it is an area in which we are looking to secure more funding. Our research has already influenced policy with the introduction of the Age Appropriate Design Code, and we aim to continue to be influential in this space.
Year(s) Of Engagement Activity 2020
URL https://twitter.com/UoNresearch/status/1224991085656793095?s=20
 
Description Massive Open Online Course on Data Ethics, AI, and Responsible Innovation 
Form Of Engagement Activity Engagement focused website, blog or social media channel
Part Of Official Scheme? No
Geographic Reach International
Primary Audience Public/other audiences
Results and Impact Co-led and contributed to content of a new online course that introduces a broad (non-technical and technical) audience to data ethics, AI, and responsible innovation. The course addresses key application domains where algorithmic systems are making important decisions in healthcare, policing, finance, and smart living scenarios, and also provides a practical introduction to basic ethical concepts and frameworks.
Year(s) Of Engagement Activity 2020
URL https://www.edx.org/course/Data-Ethics-AI-and-Responsible-Innovation?utm_source=Data-Ethics-Twitter-...
 
Description Navigating through Uncertainty and Unawareness - Blog for eNurture Network+ 
Form Of Engagement Activity Engagement focused website, blog or social media channel
Part Of Official Scheme? No
Geographic Reach International
Primary Audience Professional Practitioners
Results and Impact Blog post to mark Internet Safety Day.
Year(s) Of Engagement Activity 2020
URL https://www.enurture.org.uk/blog/2020/2/5/navigating-through-uncertainty-and-unawareness
 
Description Organising/Chairing Panel Discussion at HealTAC conference, Cardiff April 24-25 2019 
Form Of Engagement Activity Participation in an activity, workshop or similar
Part Of Official Scheme? No
Geographic Reach International
Primary Audience Postgraduate students
Results and Impact I was asked to organise a panel discussion at the HealTAC conference titled "Natural language processing in mental health: progress, challenges and opportunities".
Year(s) Of Engagement Activity 2019
 
Description Panel on "Is Consent Broken?" organised by the Digital Marketing Association 
Form Of Engagement Activity A formal working group, expert panel or dialogue
Part Of Official Scheme? No
Geographic Reach National
Primary Audience Industry/Business
Results and Impact Participated as a panelist in an online webinar organised by the DMA for their members.
Year(s) Of Engagement Activity 2020
 
Description Panel on Digital Markets and Consumer Welfare, Competition and Markets Authority Consumer Detriment Symposium, Edinburgh, September 2019 
Form Of Engagement Activity A talk or presentation
Part Of Official Scheme? No
Geographic Reach National
Primary Audience Policymakers/politicians
Results and Impact Participated in a discussion at an event organised by a key national regulator; this led to follow-up events and exploration of further collaboration with the CMA.
Year(s) Of Engagement Activity 2019
 
Description Panel on Ethical Implications of AI, CyberUK 2019 Conference, Glasgow, April 2019 
Form Of Engagement Activity A talk or presentation
Part Of Official Scheme? No
Geographic Reach National
Primary Audience Industry/Business
Results and Impact Contributed to panel at national cybersecurity industry and government event.
Year(s) Of Engagement Activity 2019
URL https://www.google.com/url?sa=t&rct=j&q=&esrc=s&source=web&cd=1&ved=2ahUKEwiQ_LmQ6oroAhU0pnEKHbu6BBU...
 
Description Panel on Fairness in AI at Canada-UK-France Trilateral AI Ethics Workshop, London, June 2019 
Form Of Engagement Activity A talk or presentation
Part Of Official Scheme? No
Geographic Reach International
Primary Audience Professional Practitioners
Results and Impact Participated in panel held at Alan Turing Institute to foster trilateral collaboration in ethical AI, which led to further engagement with French and Canadian stakeholders.
Year(s) Of Engagement Activity 2019
URL https://www.turing.ac.uk/events/cifar-ukri-cnrs-ai-society-principles-practice
 
Description Panel on ethical issues around data at "Beyond" conference, Edinburgh, November 2019 
Form Of Engagement Activity A talk or presentation
Part Of Official Scheme? No
Geographic Reach National
Primary Audience Professional Practitioners
Results and Impact Participated in panel on "Dark Data: Bias, Trust, and Inclusion" in major creative industries conference. This led to future requests for information and engagement from practitioners in the industry.
Year(s) Of Engagement Activity 2019
URL https://beyondconference.org/agenda
 
Description Panel on ethical issues at RBS DataFest industry event, Edinburgh, November 2019 
Form Of Engagement Activity A talk or presentation
Part Of Official Scheme? No
Geographic Reach Local
Primary Audience Industry/Business
Results and Impact Participated in panel on Fairness, Data, and Ethics at annual Royal Bank of Scotland conference, which led to further development of collaboration with them.
Year(s) Of Engagement Activity 2019
 
Description Panel on failures and biases in AI, We Need to Talk About AI Talks, Edinburgh, November 2019 
Form Of Engagement Activity A talk or presentation
Part Of Official Scheme? No
Geographic Reach Local
Primary Audience Postgraduate students
Results and Impact Participated in interdisciplinary panel discussion on biases in AI systems as part of student-led event.
Year(s) Of Engagement Activity 2019
URL https://www.ed.ac.uk/informatics/news-events/public/we-need-to-talk-about-ai/to-err-is-machine
 
Description Participation in the Oxford Idea Festival 2019 
Form Of Engagement Activity Participation in an activity, workshop or similar
Part Of Official Scheme? No
Geographic Reach Local
Primary Audience Public/other audiences
Results and Impact The key goal of the event was to raise general public awareness of these issues by exhibiting in a local shopping centre in East Oxford, reaching an audience that would otherwise be hard to reach. The exhibition was well attended, with several hundred visitors on the day.
Year(s) Of Engagement Activity 2019
URL http://if-oxford.com
 
Description Participation as panelist in online debate on "Operationalising AI Ethics" at The Algo2020 Conference 
Form Of Engagement Activity A talk or presentation
Part Of Official Scheme? No
Geographic Reach International
Primary Audience Industry/Business
Results and Impact Michael Rovatsos participated in a high-profile panel with key experts at online conference.
Year(s) Of Engagement Activity 2020
URL https://www.thealgo.co/
 
Description Participation in an Ada Lovelace Panel and Q&A session 
Form Of Engagement Activity A talk or presentation
Part Of Official Scheme? No
Geographic Reach Local
Primary Audience Postgraduate students
Results and Impact Took part in the UoN Computer Science Ada Lovelace Day (27.11.2019): disseminated UnBias Awareness Cards among the audience and established new contacts for the new (ReEnTrust) project.
Year(s) Of Engagement Activity 2019
 
Description Pilot survey for the Algorithm Playground 
Form Of Engagement Activity Participation in an activity, workshop or similar
Part Of Official Scheme? No
Geographic Reach Local
Primary Audience Public/other audiences
Results and Impact We organised a survey prior to the large-scale Algorithm Playground experiment to test its understandability and the time required to complete it. This took place during the Edinburgh Festival, so we attended a venue and asked random people to complete the survey questionnaire.
Year(s) Of Engagement Activity 2019
 
Description Policy Connect Stakeholder engagement workshop 
Form Of Engagement Activity Participation in an activity, workshop or similar
Part Of Official Scheme? No
Geographic Reach National
Primary Audience Professional Practitioners
Results and Impact Dr Ansgar Koene was invited to attend a closed stakeholder engagement workshop in October chaired by Eve Lugg, Policy Manager for Data at Policy Connect: https://www.policyconnect.org.uk/

The roundtable discussed the recently launched consultation on post-Brexit reforms to the UK's data protection regime, with a specific focus on responsible innovation.

"Following the recent publication of the consultation on reforms to create an ambitious, pro-growth and innovation-friendly data protection regime that underpins the trustworthy use of data, we are hosting a roundtable to discuss this in relation to responsible innovation. Considering the UK's need to introduce agile and adaptable data protection laws that maintain high data protection standards without creating unnecessary barriers to responsible data use, we think that responsible innovation needs to be central to the conversation. Given the above, we would like to invite you to the roundtable to partake in what will be a fascinating discussion. "
Year(s) Of Engagement Activity 2021
URL https://www.policyconnect.org.uk/
 
Description Presentation at AI Ethics in the Financial Sector Conference, The Alan Turing Institute, London, July 2019 
Form Of Engagement Activity A talk or presentation
Part Of Official Scheme? No
Geographic Reach National
Primary Audience Industry/Business
Results and Impact Gave talk to around 80 senior representatives of financial industry, government, and third sector at national meeting organised by the Alan Turing Institute. This led to key contacts for future engagement with the industry.
Year(s) Of Engagement Activity 2019
URL https://www.turing.ac.uk/events/ai-ethics-financial-sector
 
Description Presentation at Royal Institution Designing the Future. 
Form Of Engagement Activity A talk or presentation
Part Of Official Scheme? No
Geographic Reach International
Primary Audience Industry/Business
Results and Impact On Thursday 13 February, Designing the Future was hosted at the Royal Institution in London, attended by more than 250 guests and broadcast live. It featured a series of interactive Christmas-lectures-style lectures followed by conversation and debate. Prof Marina Jirotka was one of the guest lecturers and looked at the future of quantum computing.
Year(s) Of Engagement Activity 2020
URL https://hoarelea.com/2020/02/19/designing-the-future-our-national-event/
 
Description Presentation at the Cityforum Policing Summit 2020 
Form Of Engagement Activity A talk or presentation
Part Of Official Scheme? No
Geographic Reach National
Primary Audience Policymakers/politicians
Results and Impact An invited presentation at the Cityforum Policing the Nation Round Table 2020, to discuss ethical issues related to data sharing, data access and data analytics. The event was attended by 250 delegates from a range of sectors, including government, academia and industry.
Year(s) Of Engagement Activity 2020
 
Description Presentation on "Data and AI Ethics" at HSBC Compliance Week 
Form Of Engagement Activity A talk or presentation
Part Of Official Scheme? No
Geographic Reach International
Primary Audience Industry/Business
Results and Impact Invited to contribute a presentation to HSBC's international week-long "Compliance Week", targeted at compliance teams within the bank.
Year(s) Of Engagement Activity 2020
 
Description Presentation on "Data and AI Ethics" to RBS NatWest audience 
Form Of Engagement Activity A talk or presentation
Part Of Official Scheme? No
Geographic Reach National
Primary Audience Industry/Business
Results and Impact A wide range of marketing and consumer engagement teams from RBS NatWest attended this online presentation which led to an interesting discussion.
Year(s) Of Engagement Activity 2020
 
Description Presentation on "Data and Diversity" at FCA Diversity Day 
Form Of Engagement Activity A talk or presentation
Part Of Official Scheme? No
Geographic Reach National
Primary Audience Industry/Business
Results and Impact Gave online presentation about diversity, algorithms, and ethics to a wide audience from the Financial Conduct Authority.
Year(s) Of Engagement Activity 2020
 
Description Presentation to Institute of Practitioners in Advertising, Edinburgh, December 2018 
Form Of Engagement Activity A talk or presentation
Part Of Official Scheme? No
Geographic Reach Regional
Primary Audience Industry/Business
Results and Impact Gave introductory presentation titled "AI and you" to introduce professionals from the advertisement industry to AI and ethical issues in AI.
Year(s) Of Engagement Activity 2019
 
Description Presentation to Scottish Investment Operations on "Demystifying AI", Edinburgh, October 2019 
Form Of Engagement Activity A talk or presentation
Part Of Official Scheme? No
Geographic Reach Local
Primary Audience Industry/Business
Results and Impact Gave a presentation introducing data science, AI, and ethics concepts to a financial industry audience of around 60. The talk was very well received and established a longer-term collaboration with the organisation.
Year(s) Of Engagement Activity 2019
URL https://www.sio.org.uk/news/sio-event-28-october-demystifying-data
 
Description Project Policy Brief Launch 
Form Of Engagement Activity A formal working group, expert panel or dialogue
Part Of Official Scheme? No
Geographic Reach National
Primary Audience Policymakers/politicians
Results and Impact On the 24th May the APPG on Data Analytics hosted an online roundtable on rebuilding trust in algorithms, in collaboration with the EPSRC ReEnTrust project. Two central questions were discussed at the meeting: 1) what are the effective advocacy methods we have used as a sector so far to ensure best practice in algorithmic transparency? and 2) how do we improve engagement strategies to ensure citizen involvement throughout the lifecycle of policy design? It was chaired by Daniel Zeichner MP, Chair of the APGDA (Labour); Professor Tom Rodden (Chief Scientific Advisor, DCMS), Professor Marina Jirotka (Oxford University) and Jonathan Legh-Smith (Head of Scientific Affairs at BT Group) provided keynotes. It was attended by ~20 academics from different institutions and ~10 policymakers. The launch event sparked deeper conversations regarding the policy changes needed to increase public trust in algorithmic systems.
Year(s) Of Engagement Activity 2021
URL https://www.policyconnect.org.uk/news/rebuilding-trust-algorithms-roundtable-appg-data-analytics
 
Description ReEnTrust Advisory Group 
Form Of Engagement Activity A formal working group, expert panel or dialogue
Part Of Official Scheme? No
Geographic Reach Local
Primary Audience Public/other audiences
Results and Impact We established the ReEnTrust Advisory Group (AG), aimed at the co-creation of research activities and resources within this project.
It is composed of twelve people in total: eight aged between 16 and 25 and four aged 65 and over. In 2019 we held 3 AG meetings before running the first set (wave 1) of research workshops, which allowed good co-creation with the AG and fine-tuning of the details of the scenarios used in the wave 1 workshops.
Members' contributions have been extremely useful, with participants sharing many very relevant points that were all incorporated in the subsequent write-up of the scenarios used in our research workshops. Moreover, one young member offered to produce the mock-ups of the screenshots used in those scenarios.
We are currently planning the wave 2 workshops: we have run an initial brainstorming session with the AG and will be running more co-creation sessions in 2020.
Since March 2020, due to COVID-19 restrictions, our project activities have all been online. We engaged with the Advisory Group during 2020 when running online studies and through the research progress update newsletters we prepared in July and December 2020.
Year(s) Of Engagement Activity 2019
 
Description Responsible Innovation panel AI@Oxford conference 2019 
Form Of Engagement Activity Participation in an activity, workshop or similar
Part Of Official Scheme? No
Geographic Reach International
Primary Audience Industry/Business
Results and Impact Marina Jirotka was invited to talk on an expert panel at the AI@Oxford conference 2019. The panel discussed opportunities to embed responsible innovation into the development of AI supported technologies. Marina Jirotka spoke about the RoboTIPS project as an instance of a project that seeks to foster responsibility in technology across phases of design, development and implementation.
Year(s) Of Engagement Activity 2019
URL https://innovation.ox.ac.uk/innovation-news/events/aioxford-conference/conference-agenda/
 
Description Responsible Technologies Workshop (Virtual) 
Form Of Engagement Activity Participation in an activity, workshop or similar
Part Of Official Scheme? No
Geographic Reach Local
Primary Audience Postgraduate students
Results and Impact The increasing reach and pervasiveness of AI and algorithms into everyday life raises pressing social and ethical issues for individuals and communities. Several recent deliberations, including track-and-trace for the pandemic and the fight for a fairer society, have made critical thinking about our society and technology innovations more urgent than ever. In these circumstances, Oxford researchers and students have expressed a great need for a platform to voice and exchange concerns, and to reflect and react to the urgent need for more inclusive and responsible algorithms and technologies.

The first virtual ReEnTrust Responsible Technology workshop was a direct response to this call to action. With additional support from OxAI and the Oxford Business Network for Technology, we welcomed 18 participants from 5 different Oxford faculties/departments, including Oxford CS, International Development, Law, Geography and, particularly, the Saïd Business School.

The two-hour virtual workshop was opened by two informative presentations by Dr Philip Inglesant (Oxford CS) and Dr Ansgar Koene (EY/Nottingham University). The former laid down the foundation for our core interactive group break-out session by introducing the widely-adopted Responsible Innovation AREA framework, and the latter provided well-rounded insights about the motivations and challenges that the industry is facing for adopting the RI approach.

Led by Dr Menisha Patel (Oxford CS), we enjoyed a productive 45-minute virtual group breakout session to critique and reflect on a case study related to racial biases in algorithms. Participants were encouraged to use the AREA RI framework to anticipate the benefits and implications of the algorithm, reflect thoughtfully on the direct and indirect stakeholders affected by the algorithm and how they should have been engaged to ensure its fairness, and finally envision what a future algorithm and its development could look like.

The final plenary discussions demonstrated participants' shared concerns about various factors in relation to algorithm design and regulation: the lack of engagement with the stakeholders directly affected by these algorithms (such as judges and arrested people), the need for independent review and monitoring of the use of the algorithm, and particularly the data used for training the algorithm. Greater fairness and transparency are strongly demanded; these should be strengthened not only by technical solutions but also by thorough and continuous review and monitoring of use and impact, and by establishing a process for the adoption of such critical technologies in any public or private sector.

The key success of this workshop was not only its insightful collective findings; it was also extremely encouraging to see how participants from diverse backgrounds were able to identify the broader set of issues and challenges associated with responsible innovation.
Year(s) Of Engagement Activity 2020
URL https://reentrust.org/2020/06/18/responsible-technologies-workshop/
 
Description STEM ambassador 
Form Of Engagement Activity Participation in an activity, workshop or similar
Part Of Official Scheme? No
Geographic Reach National
Primary Audience Public/other audiences
Results and Impact I am a STEM ambassador as part of the scheme run by STEM Learning. I take part in STEM engagement events under this scheme.
Year(s) Of Engagement Activity 2018,2019,2020
 
Description The Human Bias in AI. Change Forum: Product and Data, London 
Form Of Engagement Activity Participation in an activity, workshop or similar
Part Of Official Scheme? No
Geographic Reach National
Primary Audience Media (as a channel to the public)
Results and Impact Interactive workshop aimed mainly at media industry stakeholders. The aim of the session was to build understanding and promote discussion around the use of algorithms in online platforms. Members of our team ran activities using the UnBias Awareness Cards, designed to encourage critical and civic thinking about how decisions are made by algorithms and the impact that these decisions may have on our lives and the lives of others. The audience engaged in interesting discussions and the facilitators from our research team were invited to take part in future related activities (RCUK workshops).
Year(s) Of Engagement Activity 2019
URL https://www.changeforum.co/
 
Description The Internet and You - widening participation 
Form Of Engagement Activity Participation in an open day or visit at my research institution
Part Of Official Scheme? No
Geographic Reach Local
Primary Audience Public/other audiences
Results and Impact This was an opportunity to showcase our work 'The Internet and You' at an event organised by the University of Nottingham. Our audience was mainly primary-school-aged children.
Year(s) Of Engagement Activity 2019
 
Description Trust breaching workshop with hotel booking scenario 
Form Of Engagement Activity Participation in an activity, workshop or similar
Part Of Official Scheme? No
Geographic Reach Local
Primary Audience Public/other audiences
Results and Impact The workshops from the first wave involved a trust-breaching task, whose purpose was to have participants book a hotel on a fake booking website we implemented and to observe at which point they realised it was fake, leading them to lose trust. This experiment helped us to identify trust break points and the impact of different commercial policies on them.
Year(s) Of Engagement Activity 2019
 
Description Tutorial on AI Ethics at International Joint Conference on AI (IJCAI 2019), Macau, Aug 2019 
Form Of Engagement Activity A talk or presentation
Part Of Official Scheme? No
Geographic Reach International
Primary Audience Professional Practitioners
Results and Impact Presented a three-hour introductory tutorial at one of the largest international AI conferences to a mostly technical research/professional/student audience. This was attended by around 60 people.
Year(s) Of Engagement Activity 2019
URL https://www.ijcai19.org/tutorials.html
 
Description Tutorial on AI Ethics, Advanced Course on AI (ACAI 2019), Chania, Greece, July 2019 
Form Of Engagement Activity A talk or presentation
Part Of Official Scheme? No
Geographic Reach International
Primary Audience Postgraduate students
Results and Impact Gave a four-hour introductory tutorial to an audience of around 40 at an international research school.
Year(s) Of Engagement Activity 2019
URL http://acai2019.tuc.gr/?page_id=489
 
Description University of Nottingham Vision magazine - Safe Space 
Form Of Engagement Activity A magazine, newsletter or online publication
Part Of Official Scheme? No
Geographic Reach National
Primary Audience Media (as a channel to the public)
Results and Impact Safe Space
Researchers are placing the views and experiences of young people at the heart of policy, making the internet a safer place.

This publication disseminates research conducted at the University of Nottingham to a wider audience, both academic and non-academic.
Year(s) Of Engagement Activity 2019
URL https://www.nottingham.ac.uk/vision/safe-space
 
Description Vision - Safe Space 
Form Of Engagement Activity A magazine, newsletter or online publication
Part Of Official Scheme? No
Geographic Reach Regional
Primary Audience Public/other audiences
Results and Impact Interview (by Dr Elvira Perez) and subsequent article for University of Nottingham Vision Magazine to highlight present and historical research activity addressing the protection of young people and children online (placing the views and experiences of young people at the heart of policy, making the internet a safer place).
Year(s) Of Engagement Activity 2019
URL https://www.nottingham.ac.uk/vision/safe-space
 
Description Visit to DCMS 
Form Of Engagement Activity A formal working group, expert panel or dialogue
Part Of Official Scheme? No
Geographic Reach National
Primary Audience Policymakers/politicians
Results and Impact Researchers at the University of Nottingham were invited, together with CDT students, to DCMS to provide presentations and expert advice on:

Monetising Services - Joseph Hubbard-Bailey, Kate Green
• How do companies providing a 'free' service make money?
• How is data monetised?
• How do cookies and advertising work?

Moderating online services - Ansgar Koene, Elvira Perez Vallejos, Liz Dowthwaite
• How does China prevent access to sites?
• How do AI and related techniques such as machine learning work? How are they used to moderate content?
• How does the process of trusted flaggers work?

Encryption - Derek McAuley
• How does encryption, including end-to-end encryption, affect the ability to identify harmful content?
• Where are online services going in terms of encryption and what risks does this bring?
• What is DNS over HTTPS (DoH) and how might it impact the ability of companies/regulators to moderate content?
Year(s) Of Engagement Activity 2019
 
Description Workshop to 3rd Age University 
Form Of Engagement Activity Participation in an activity, workshop or similar
Part Of Official Scheme? No
Geographic Reach Local
Primary Audience Third sector organisations
Results and Impact We provided an overview of Horizon research, including hands-on activities, to approximately 30 members of the 3rd Age University.
Year(s) Of Engagement Activity 2019