DeepEdit++: Towards a continually specialised AI-assistive dataset annotation tool.
Lead Research Organisation:
King's College London
Department Name: Imaging & Biomedical Engineering
Abstract
The use of Deep Learning-based segmentation models, trained with medical images, has become increasingly popular to expedite downstream tasks such as disease diagnosis, monitoring and treatment planning. However, the development of these models is typically predicated on large quantities of annotated data, which are not available for every
clinical application. This is primarily due to statutes and policies which can limit data sharing, the laborious nature of manual annotation, and the small pool of annotators available, given the requirement of clinical expertise. In this work, we propose a dataset annotation tool, DeepEdit++, which builds upon DeepEdit, a tool provided by MONAI Label that
is based upon Active Learning and Interactive Segmentation strategies. DeepEdit++ is intended to be robust, efficient and self-configurable across different clinical dataset annotation tasks while remaining practical on modest hardware. In line with this objective, we build upon DeepEdit's interactive
segmentation application and modify its data pre-processing pipeline. Our modifications to the interactive segmentation application improve interaction efficacy, as measured by improved segmentation performance. Meanwhile, our modifications to the data pre-processing pipeline introduce adaptability to differences between imaging modalities. This is demonstrated by substantially improved segmentation performance on previously unsupported modalities.
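To make the interactive component concrete: DeepEdit-style models typically consume annotator clicks as extra guidance channels stacked with the image. The sketch below, in plain NumPy/SciPy, shows one common way of rasterising click coordinates into a smooth guidance map; the function name, the Gaussian smoothing value and the example shapes are illustrative assumptions, not DeepEdit++'s actual implementation.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def clicks_to_guidance(clicks, image_shape, sigma=2.0):
    """Rasterise annotator click coordinates into a soft guidance channel.

    clicks: iterable of (z, y, x) voxel indices supplied by the annotator.
    image_shape: spatial shape of the volume, e.g. (D, H, W).
    sigma: width of the Gaussian placed at each click (illustrative value).
    """
    guidance = np.zeros(image_shape, dtype=np.float32)
    for voxel in clicks:
        guidance[tuple(voxel)] = 1.0
    # Smooth the point annotations so the network receives a soft signal.
    guidance = gaussian_filter(guidance, sigma=sigma)
    if guidance.max() > 0:
        guidance /= guidance.max()
    return guidance

# Example: two foreground clicks on a 64^3 volume, stacked with the image
# as an additional input channel for the segmentation network.
image = np.random.rand(64, 64, 64).astype(np.float32)
fg_clicks = [(32, 30, 28), (35, 36, 40)]
network_input = np.stack([image, clicks_to_guidance(fg_clicks, image.shape)])
print(network_input.shape)  # (2, 64, 64, 64)
```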
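The modality-adaptation claim can likewise be pictured as a pre-processing pipeline that switches its intensity handling per modality. The sketch below uses MONAI transforms; the `build_preprocessing` helper, the `modality` argument and the specific window values are assumptions for illustration rather than the tool's actual configuration.

```python
from monai.transforms import (
    Compose, LoadImaged, EnsureChannelFirstd, Orientationd,
    Spacingd, ScaleIntensityRanged, NormalizeIntensityd,
)

def build_preprocessing(modality: str):
    """Assemble a pre-processing pipeline whose intensity step depends on modality."""
    common = [
        LoadImaged(keys=["image", "label"]),
        EnsureChannelFirstd(keys=["image", "label"]),
        Orientationd(keys=["image", "label"], axcodes="RAS"),
        Spacingd(keys=["image", "label"], pixdim=(1.0, 1.0, 1.0),
                 mode=("bilinear", "nearest")),
    ]
    if modality == "CT":
        # CT intensities are calibrated (Hounsfield units), so a fixed window works.
        intensity = ScaleIntensityRanged(keys=["image"], a_min=-175, a_max=250,
                                         b_min=0.0, b_max=1.0, clip=True)
    else:
        # MR and other uncalibrated modalities vary per scanner and sequence,
        # so normalise each volume by its own non-zero statistics instead.
        intensity = NormalizeIntensityd(keys=["image"], nonzero=True, channel_wise=True)
    return Compose(common + [intensity])

ct_pipeline = build_preprocessing("CT")
mr_pipeline = build_preprocessing("MR")
```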
| People | ORCID iD |
|---|---|
| Parhom Esmaeili (Student) | |
Studentship Projects
| Project Reference | Relationship | Related To | Start | End | Student Name |
|---|---|---|---|---|---|
| EP/Y528572/1 | | | 30/09/2023 | 29/09/2028 | |
| 2888851 | Studentship | EP/Y528572/1 | 30/09/2023 | 13/10/2027 | Parhom Esmaeili |