EPSRC DTP Studentship: Uncovering the "Instincts" of Deep Generative Models for Fair and Unbiased Visual Content Creation
Lead Research Organisation:
CARDIFF UNIVERSITY
Department Name: Computer Science
Abstract
The COVID-19 pandemic accelerated the growth of the digital economy. Aside from business-critical remote communication software, the entertainment industry built on visual content generation grew surprisingly fast. For example, UK installations of TikTok surged by 34% during the week the lockdown was enforced [1]. Such a surge implies a two-fold contribution of visual content generation in fighting COVID-19:
1) It protects people's mental health during self-isolation, lockdown, and even curfew. Almost all of the 66 million people in the UK are affected by the frequently updated restriction rules [2]. It is therefore critical to protect their mental health and prevent them from becoming "the ignored majority".
2) It creates more "contactless" jobs. "I cannot protect every job," said Rishi Sunak [3], revealing an urgent demand for new job opportunities. Fortunately, this demand can be met in part by becoming a visual content creator who earns a living by publishing content on platforms such as Patreon, YouTube and TikTok.
However, high-quality visual content can be difficult to create, which has motivated Artificial Intelligence (AI) to join the game. Nevertheless, ethical concerns arise because deep neural networks, the backbone of modern AI, suffer from poor interpretability and can be "unconsciously biased". For example, a recent super-resolution method developed at Duke University [4] exhibits a strong racial bias: it converted a low-resolution image of Barack Obama's face into a high-resolution white face [5]. In line with growing social awareness of BAME+ issues, it is therefore critical to tailor deep generative models for fair and unbiased visual content generation.
Rather than ascribing these biases solely to unbalanced training datasets, we seek an outstanding, talented and ambitious PhD student to carry out high-quality research towards unbiased AI. Specifically, this project aims to answer three research questions:
1) How to uncover the biases of pre-trained deep generative models?
2) What biases are implicitly introduced during the training process?
3) How to create fair and unbiased deep generative models?
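To give a flavour of research question 1, a bias audit of a pre-trained generator can be framed as: sample many outputs, predict a sensitive attribute for each, and measure how far the resulting attribute distribution deviates from a balanced reference. The sketch below is purely illustrative: `generate_sample` and `classify_attribute` are hypothetical stand-ins for a real generative model and a real attribute classifier, and the 80/20 skew is an assumed example, not a measured result.

```python
# Hypothetical sketch of a generator bias audit (research question 1).
# generate_sample and classify_attribute are stand-ins, not real models.
import random

def generate_sample(seed):
    # Stand-in for drawing one output from a pre-trained generative model.
    rng = random.Random(seed)
    return rng.random()

def classify_attribute(sample):
    # Stand-in for an attribute classifier; this toy "generator" is
    # assumed to produce group "A" roughly 80% of the time.
    return "A" if sample < 0.8 else "B"

def audit_bias(n_samples, reference=None):
    # Estimate the generator's attribute distribution and report its
    # total variation distance (TVD) from a balanced reference.
    reference = reference or {"A": 0.5, "B": 0.5}
    counts = {"A": 0, "B": 0}
    for i in range(n_samples):
        counts[classify_attribute(generate_sample(i))] += 1
    empirical = {k: v / n_samples for k, v in counts.items()}
    tvd = 0.5 * sum(abs(empirical[k] - reference[k]) for k in reference)
    return empirical, tvd

empirical, tvd = audit_bias(1000)
print(empirical, tvd)
```

A TVD of 0 would indicate a perfectly balanced generator; larger values quantify the skew that questions 2 and 3 then seek to explain and remove.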
Organisations
People | ORCID iD |
---|---|
Yuanbang Liang (Student) | |
Studentship Projects
Project Reference | Relationship | Related To | Start | End | Student Name |
---|---|---|---|---|---|
EP/T517951/1 | | | 30/09/2020 | 29/09/2025 | |
2599521 | Studentship | EP/T517951/1 | 30/09/2021 | 30/03/2025 | Yuanbang Liang |