
A Human-Trustable Self-Improving Machine Learning Framework for Rapid Disaster Responses Using Satellite Sensor Imagery

Lead Research Organisation: University of Surrey
Department Name: Computing Science

Abstract

Due to abrupt changes in Earth's climate and the dramatic global rise of urbanisation, natural disasters have become increasingly unpredictable and have caused great social and economic devastation in recent years. According to one published study, between 2015 and 2019 there were a total of 1,624 reported natural disasters, such as earthquakes, floods and landslides, killing on average 60,000 people each year globally. Although humans cannot prevent natural disasters in most cases, timely responses can play a critical role in disaster relief and life-saving. Rapid and accurate building damage assessment (BDA) is required in humanitarian assistance and disaster response to carry out life-saving efforts. However, current BDA practice relies mostly on manual inspection and documentation, which is time-consuming and labour-intensive.

Although high-resolution satellite sensor images (HRSSIs), such as those captured by GeoEye-1, WorldView-2 and WorldView-3, have become the major source of first-hand information for BDA, these images often present a mosaic of complex geometrical structures and spatial patterns. Automatic information extraction from HRSSIs of disaster-affected areas is imperative in time-critical situations, and has the potential to facilitate post-disaster assessment and speed up life-saving rescue processes. However, this remains an extremely challenging task for state-of-the-art machine learning (ML) algorithms. In practice, human experts have to interpret and examine the captured HRSSIs manually, which involves significant time and labour.

Conventional ML-based BDA methods leverage mainstream classifiers, such as support vector machines and random forests, to generate a damage map from hand-crafted features extracted from pre- and post-disaster images. However, the complexity and heterogeneity of HRSSIs hinder the applicability of conventional methods, making feature extraction extremely difficult. Moreover, individual buildings often occupy only a few pixels, leaving minimal structural information to exploit. Although conventional methods do not require a large volume of training images and are more interpretable, they fail easily on real scenes. On the other hand, deep learning techniques, particularly deep convolutional neural networks (DCNNs), have achieved remarkable success in computer vision and pattern recognition. Some recent studies have explored the capability of DCNNs for BDA and reported promising outcomes under experimental conditions. DCNN-based methods have become increasingly popular and currently represent the state of the art in BDA research. However, DCNNs are often characterised as black boxes, and are computationally intensive and data-hungry. Because their underlying mechanisms differ from human reasoning and are not readily understandable, DCNNs can fail in unfamiliar scenarios due to uncertainties and are often observed to exhibit unexpected behaviours. These disadvantages hinder the practical utility of DCNN-based BDA methods in real-world scenarios. As a result, emergency management services (EMSs), e.g. the International Charter Space and Major Disasters, still rely on visual interpretation of HRSSIs to assess building damage because of its reliability.
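As an illustration of the conventional pipeline described above, the sketch below classifies pre-/post-disaster image patches with a random forest trained on simple hand-crafted change features. The specific features, the synthetic data and the injected damage signal are illustrative assumptions for demonstration only, not any published BDA method.

```python
# Illustrative sketch of a conventional BDA pipeline: hand-crafted
# change features from pre-/post-disaster patches, classified with a
# random forest. All data here are synthetic placeholders.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)

def change_features(pre_patch, post_patch):
    """Simple hand-crafted features: statistics of the per-pixel change."""
    diff = post_patch.astype(float) - pre_patch.astype(float)
    return np.array([diff.mean(), diff.std(),
                     np.abs(diff).mean(), np.abs(diff).max()])

# Synthetic training data: 200 patch pairs of 16x16 3-band imagery;
# "damaged" patches receive an additional brightness/structure change.
X, y = [], []
for i in range(200):
    pre = rng.integers(0, 256, (16, 16, 3))
    post = pre + rng.normal(0, 5, pre.shape)       # sensor noise only
    damaged = i % 2
    if damaged:
        post = post + rng.normal(40, 10, pre.shape)  # simulated damage
    X.append(change_features(pre, post))
    y.append(damaged)

clf = RandomForestClassifier(n_estimators=50, random_state=0)
clf.fit(np.array(X[:150]), np.array(y[:150]))
acc = clf.score(np.array(X[150:]), np.array(y[150:]))
print(f"held-out accuracy: {acc:.2f}")
```

Such a pipeline is interpretable and needs little training data, but, as the paragraph above notes, hand-crafted features of this kind break down on the complex, heterogeneous scenes found in real HRSSIs.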

To make ML-based BDA methods reliable for real-world scenarios, this project aims to catalyse a step change in artificial intelligence by developing highly innovative explainable ML (XML) techniques to automate the BDA process based on post-disaster HRSSIs. The developed XML techniques will act as a framework for scene understanding, building segmentation and damage assessment at both scene level and pixel level in a joint fashion, and will have the capacity to self-adapt to different application scenarios in real time to address real-world uncertainties. By delivering a reliable automated solution to the highly challenging post-disaster BDA task, we ultimately aim to assist EMSs in faster post-disaster assessment, facilitating life-saving processes.

Publications

 
Title An Entropy-Adaptive Fully Convolutional Neural Network with Multi-Scale Multi-Level Information Fusion for Building Segmentation 
Description This model is designed for fully automated building segmentation, enabling precise localization and contour extraction of buildings from very high-resolution satellite sensor imagery. Due to the highly complex geometrical structures and spatial patterns present in satellite sensor images, building segmentation remains a challenging task requiring further research. Existing benchmark datasets for building segmentation vary in resolution, making it difficult for a single neural network model to perform consistently across different datasets. To address this challenge, the proposed segmentation model incorporates a novel entropy-adaptive module that categorizes each image into low-, medium-, and high-entropy regions. It then applies different convolutional kernels accordingly to extract coarse, medium, and fine-grained information. Additionally, multi-scale multi-level information fusion techniques are employed to maximize the information extracted from images. The proposed model outperforms state-of-the-art approaches across multiple benchmark datasets for building segmentation. The project team is currently preparing a research paper to report this method. 
Type Of Material Computer model/algorithm 
Year Produced 2025 
Provided To Others? No  
Impact This model serves as a one-stop, fully automated tool for precise building segmentation from remote sensing imagery. Its high accuracy and adaptability make it particularly valuable for post-disaster building damage assessment, enabling rapid response and recovery efforts. Additionally, it supports urban planning by providing detailed building footprint data and enhances fire monitoring by accurately identifying structures at risk. The model will be released as open source software, benefiting researchers, policymakers, and disaster response teams worldwide. 
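The entropy-based region categorization described in this entry might be sketched as follows: tile the image, score each tile's Shannon entropy, and bucket tiles into low-, medium- and high-entropy regions (which, per the description, then receive different convolutional kernels). The tile size, histogram binning and entropy thresholds here are illustrative assumptions, since the paper reporting the actual method is still in preparation.

```python
# Hedged sketch of entropy-based tile categorization for an image.
# Thresholds (2.0 / 4.0 bits), tile size and binning are assumptions.
import numpy as np

def tile_entropy(tile, bins=32):
    """Shannon entropy (bits) of a tile's grey-level histogram."""
    hist, _ = np.histogram(tile, bins=bins, range=(0, 256))
    p = hist / hist.sum()
    p = p[p > 0]
    return float(-(p * np.log2(p)).sum())

def categorise_tiles(image, tile=8, low=2.0, high=4.0):
    """Label each tile 0 (low), 1 (medium) or 2 (high) entropy."""
    h, w = image.shape
    grid = np.zeros((h // tile, w // tile), dtype=int)
    for i in range(grid.shape[0]):
        for j in range(grid.shape[1]):
            e = tile_entropy(image[i*tile:(i+1)*tile,
                                   j*tile:(j+1)*tile])
            grid[i, j] = 0 if e < low else (1 if e < high else 2)
    return grid

# Toy image: flat (low-entropy) left half, textured right half.
rng = np.random.default_rng(0)
img = np.zeros((32, 32))
img[:, 16:] = rng.integers(0, 256, (32, 16))
labels = categorise_tiles(img)
print(labels)
```

In a segmentation network, a map like `labels` could route each region to a coarse, medium or fine-grained convolutional branch, matching the coarse-to-fine idea the description outlines.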
 
Title Learning Multi-View Supervised and Unsupervised Representations with Transparent Fuzzy Autoencoders for Remote Sensing Scene Classification 
Description The core of this ensemble framework is the transparent fuzzy autoencoder, which offers a highly interpretable approach to learning compressed and more discriminative representations from high-dimensional data. The proposed framework first utilizes multiple commercially available deep neural networks pretrained on ImageNet to transform remote sensing images into high-dimensional numerical representations, capturing multiple views of each image. Next, it trains multiple fuzzy autoencoders to compress and refine these multi-view representations into more descriptive, low-dimensional embeddings in both supervised and unsupervised settings. By fusing these refined representations, the ensemble framework significantly enhances classification accuracy, achieving state-of-the-art performance across a wide range of benchmark datasets. The project team is currently preparing a research paper on this ensemble framework. The code will be released as open-source software following the paper's publication. 
Type Of Material Computer model/algorithm 
Year Produced 2025 
Provided To Others? No  
Impact The ensemble framework is a fully automated, one-stop solution for precise land-use classification from remote sensing imagery. Its high accuracy and adaptability make it particularly valuable for building localization, urban planning, fire monitoring, and nature preservation, enabling more efficient decision-making in disaster response and environmental management.
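The multi-view fusion pattern described in this entry can be sketched at a high level: extract features for each "view", compress each view independently, then fuse the embeddings. Because the transparent fuzzy autoencoder is unpublished, a plain linear compression (PCA via SVD) stands in for each per-view autoencoder below, and the views are synthetic vectors rather than real pretrained-DCNN embeddings; everything here is an assumption for illustration.

```python
# Minimal sketch of multi-view compress-then-fuse. A PCA projection
# stands in for the (unpublished) transparent fuzzy autoencoder.
import numpy as np

rng = np.random.default_rng(0)

def compress(view, k):
    """Project one view's features onto its top-k principal directions."""
    centred = view - view.mean(axis=0)
    _, _, vt = np.linalg.svd(centred, full_matrices=False)
    return centred @ vt[:k].T          # (n_samples, k) embedding

# Two synthetic "views" of 100 images, e.g. features from two
# different pretrained backbones (512-d and 256-d).
view_a = rng.normal(size=(100, 512))
view_b = rng.normal(size=(100, 256))

# Compress each view independently, then fuse by concatenation;
# the fused embedding would feed a downstream scene classifier.
fused = np.concatenate([compress(view_a, 16),
                        compress(view_b, 16)], axis=1)
print(fused.shape)  # (100, 32)
```

The design choice being illustrated is that each view is refined separately before fusion, so a weak or noisy view cannot dominate the shared representation.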