Emotional AI: Comparative Considerations for UK and Japan across Commercial, Political and Security Sectors

Lead Research Organisation: Bangor University
Department Name: Sch of Music & Media


Emotional AI comprises affective computing and AI techniques that sense and respond to human emotional life. Using weak rather than strong AI, machines read and react to emotions via text, images, voice, computer vision and biometric sensing.

Emotional AI is an emergent phenomenon appearing across diverse devices and life contexts. Examples include: city sensing such as ads that analyse faces for emotional reactions; analysis of sentiment on social media (used for policing, political campaigning, marketing); wearables that track moods of workers; and human-machine interaction (eg home assistants). A leading technology analyst, Gartner, predicts that by 2022 personal devices will know more about a person's emotional state than their family.

Emotional AI raises important social questions, especially on how data about online feeling, bodily affects, physiological expression, attention and intention should be used. Harms include: privacy violations, unauthorised uses of data about emotions, visibility of private thoughts, hidden influence, lack of scope to opt out of monitoring of emotional behaviour, and impact on self-conception of emotional identity.

This project aims to bring together, in interdisciplinary fora, 40 UK and Japanese academics, industry representatives, NGOs and regulators to better understand the constitution, and the commercial/civic/security implications, of a world where emotional AI is increasingly central. Through this network, we will identify pressing cross-cultural research projects on emotional AI, sensitive to culture, policy, ethics, technological trends and industrial context.

We will hold workshops in Japan (2 x 2-day) and London (1 x 1-day). Each involves interdisciplinary academics, NGOs, policy-makers, industry and technologists (15-20 high-value participants per workshop, drawn from Japan/UK):
- Japan Workshop 1 (Emotional AI - Commerce & Civic Life) will focus on the emergence of emotional AI, and its commercial and civic dimensions.
- Japan Workshop 2 (Emotional AI - Security & Civic Life) will examine the role of emotional (and wider) AI, in security, predictive policing and surveillance.
- UK Workshop 3 (Emotional AI & Cross-Cultural Considerations) will brief UK academics/stakeholders (incl. start-ups) on what has been learnt from our interactions in Japan, providing insights and cross-cultural considerations of emotional AI and its application to commercial, civic and security issues. This workshop will be held at Digital Catapult, the UK's leading agency for early adoption of advanced digital technologies.

Through the 3 workshops, we will meet stakeholders in Japan/UK who can be brought in on grant ideas.

Following the workshops, we will develop grant bids to UK funders. To achieve this, the UK-based Principal Investigator (PI) and Co-Investigator (Co-I) will spend 26 days in Japan; the UK-based Early Career Researcher (ECR) will spend 21 days in Japan; 2 other UK-based academics will each spend 6 days in Japan; and the Japan-based International Co-I (ICo-I) will spend 19 days in the UK.

- 3 workshops (2 in Japan, 1 in UK): Japan workshops will have simultaneous bilingual translation.
- New grant applications to AHRC, ESRC, EPSRC or Innovate UK with Japan ICo-Is and UK/Japan end users.
- Bilingual website to inform/attract participants/stakeholders.
- Bilingual blog entries written by workshop participants in response to provocations posed by workshop leaders.
- A written report on insights/recommendations.

We will:
- Raise UK/Japanese stakeholder awareness (technology industry, policy-makers, NGOs) of the need to understand, and be involved in, cross-cultural (UK-Japan) interdisciplinary research on emotional AI and its commercial/civic/security implications.
- Establish a deep collaborative UK-Japan interdisciplinary, multi-end user network on emotional AI, building cross-national research capacity.

Planned Impact

This project's key output is new grant applications. In devising these, we will embed co-created RQs, project ideas and impact pathways via end user participation in 3 workshops. The team is experienced in such activities via the PI/Co-Is' AHRC, ESRC, EPSRC and Arts Council grants, and the ICo-I's grants from the Japan Society for the Promotion of Science (JSPS), the Canada Arts Council (CAC) and the Quebec Council for the Arts (QCA).

Ethical growth of the emotional AI sector requires meaningful engagement with culturally specific social, normative, industrial and legal issues to promote beneficial over harmful applications. By raising these issues in our new network and embedding them in our grant applications, this project will positively impact the following stakeholders:
- Diverse companies involved in emotional AI (incl. CEOs, privacy officers, Data Protection Officers). UK engagement is assured by the PI's wide network in this area. UK companies active in emotional AI range from the largest (e.g. IBM) to start-ups; Japanese companies include Honda and Fujitsu.
- Policy makers who regulate data. In the UK this includes: the Information Commissioner's Office (ICO, the national data protection authority), Ofcom (media regulator) and advertising self-regulators (Committee of Advertising Practice, Internet Advertising Bureau). In Japan this includes the Personal Information Protection Commission (its data protection authority).
- Legislatures: From the UK, this includes the House of Lords Select Committee on AI, the Joint Committee on Human Rights, the House of Commons Digital, Culture, Media and Sport Committee (Fake News Inquiry) and the new Centre for Data Ethics & Innovation (set up in 2018 to enable ethical innovation in AI). From Japan this includes the New Energy and Industrial Technology Development Organisation, which created its national AI strategy.
- NGOs: These include privacy-oriented groups. From the UK: Open Rights Group (the PI is an advisor), Privacy International, Doteveryone, Don't Spy on Me and Big Brother Watch. From Japan: Privacy International Japan and local activists. We also include groups tackling disinformation online, e.g. Full Fact (UK) and FactCheck Initiative (Japan).

Japan-based end users from industry, policy-making and NGOs will be engaged as follows:
- The team will identify Japan-based end users in the 6 months before the first workshops, inviting them to two 2-day workshops in Japan: one on 'Emotional AI - Commerce & Civic Life', presenting the big picture, and one on 'Emotional AI - Security & Civic Life', addressing emotional AI applications in security and citizenship. All end users from Japan will be invited to participate by providing position statements for the workshops, to be translated and published on the project's blog, with extracts published in the final report. Attendees' travel and subsistence for 3 nights per workshop will be reimbursed, to encourage participation in hearing research ideas (day 1) and developing research grant ideas (day 2).
- We will employ simultaneous translators for each workshop in Japan, and we will have the project website translated into Japanese to maximise participation and extend its reach among Japan's stakeholders.

UK-based end users from industry, policy-making, NGOs and legislatures will be engaged via the final workshop (one day), held at Digital Catapult, London, to maximise attendance. We will invite diverse UK end users, drawing on the PI/Co-Is' extensive contacts from past research. For example, the PI attracted the biggest stakeholders in emotional AI, including IBM, the ICO and European regulators, to a past workshop at Digital Catapult on emotional AI ethics.

This project will benefit technology standards developers with global reach. The PI has excellent contacts at the standards groups W3C, Internet Society and IEEE (e.g. the PI sits on an IEEE Working Group on values-based design for autonomous computing). The PI will share insights from the project via the final report and Skype meetings.
