Fairness in AI for Cardiac Imaging
Lead Research Organisation:
King's College London
Department Name: Imaging & Biomedical Engineering
Abstract
Artificial Intelligence has been shown to reflect biases seen in the wider world. In computer vision, for example, algorithms classifying sex from facial images were shown to perform worse on Black people, particularly Black women. Similar algorithmic biases can be found in models trained on medical imaging datasets such as UK Biobank and CheXpert. For instance, one model's segmentation accuracy on cardiac Magnetic Resonance (CMR) images was reduced for ethnic minorities, a protected group, when the model was trained on the racially imbalanced UK Biobank dataset. These performance differences can be partially attributed to population bias in the datasets, which are often imbalanced and unrepresentative of the populations on which the models will be deployed. Other biases can exacerbate these disparities, such as the use of proxy variables (e.g. postcode) as inputs that are correlated with protected attributes, that is, characteristics such as race and sex against which discrimination is prohibited. This disparate treatment, together with the fact that such models are usually not interpretable, will prevent them from being used in clinical practice, particularly as diagnostic aids. Therefore, my work aims first to quantify sex and gender biases in models used for CMR image classification and segmentation, and then to develop and apply bias mitigation methods to reduce the biases in model performance.
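As a minimal sketch of the kind of bias quantification described above, the snippet below computes a per-group mean Dice overlap for binary segmentation masks and reports the largest gap between groups. The function names and the choice of the max–min gap as a disparity measure are illustrative assumptions, not the project's actual methodology.

```python
import numpy as np

def dice_score(pred, truth):
    """Dice overlap between two binary segmentation masks."""
    pred = np.asarray(pred, dtype=bool)
    truth = np.asarray(truth, dtype=bool)
    intersection = np.logical_and(pred, truth).sum()
    total = pred.sum() + truth.sum()
    # Convention: two empty masks count as a perfect match.
    return 2.0 * intersection / total if total > 0 else 1.0

def group_dice_gap(preds, truths, groups):
    """Mean Dice per protected group, plus the largest pairwise gap.

    `groups` holds one protected-attribute label (e.g. ethnicity) per case.
    """
    per_group = {}
    for g in set(groups):
        scores = [dice_score(p, t)
                  for p, t, gg in zip(preds, truths, groups) if gg == g]
        per_group[g] = sum(scores) / len(scores)
    values = per_group.values()
    return per_group, max(values) - min(values)
```

A large gap between group means would flag the kind of performance disparity the project aims to measure and then mitigate.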
Buolamwini, J. & Gebru, T. Gender Shades: Intersectional Accuracy Disparities in Commercial Gender Classification. (2018)
Puyol-Antón, E. et al. Fairness in Cardiac MR Image Analysis: An Investigation of Bias Due to Data Imbalance in Deep Learning Based Segmentation. (2021)
Mehrabi, N. et al. A Survey on Bias and Fairness in Machine Learning. (2019)
People

Name | Role | ORCID iD |
---|---|---|
Andrew King | Primary Supervisor | |
Tiarna Lee | Student | |
Studentship Projects
Project Reference | Relationship | Related To | Start | End | Student Name |
---|---|---|---|---|---|
EP/T517963/1 | | | 30/09/2020 | 29/09/2025 | |
2604726 | Studentship | EP/T517963/1 | 30/09/2021 | 13/05/2025 | Tiarna Lee |