Exploring the modelling of behaviour and context using deep learning under constrained computing platforms with applications to Digital Health

Lead Research Organisation: University of Oxford
Department Name: Computer Science

Abstract

This project falls within the EPSRC Artificial Intelligence Technologies research area.

The central question of the study is the effective modelling of consumer health data to determine user behaviour and context using data-mining methods.
Today's forms of mobile sensing, ranging from phone apps to wearable devices, typically monitor relatively simple dimensions of behaviour and context; for instance, sleep duration and step counts. However, advances in areas like deep learning are demonstrating that computational models are possible for much more complex phenomena (e.g., user emotion, social interactions), at a level of robustness that makes them useful in real-world environments. Simultaneously, advances in the computational power of constrained devices (e.g., low-power GPUs, small-form-factor hardware accelerators) are increasing the sophistication of algorithms that are feasible to execute on these platforms.

Data mining has widespread applications as a useful process for extracting meaningful information from large datasets. In particular, its application in the modelling of health data on mobile devices has generated considerable interest. Such interest is chiefly motivated by breakthroughs in both software and hardware, namely deep learning methods and device computational power.

This research will involve an examination of current models and subsequent software innovation to produce efficient models suited to constrained computing platforms. Current data-mining models often involve a trade-off between performance and efficiency. A prudent research direction is therefore to tackle algorithmic redundancies and develop methods relevant to constrained computing platforms such as wearables.

The main objectives to be achieved through this project include, but are not limited to, the following:
- modelling sensor data from constrained platforms using deep learning principles and algorithms such that the interpretation of user behaviour and context reaches greater breadth and accuracy;
- developing new resource-efficient deep learning methods suited to constrained computing platforms (such as wearable devices and embedded platforms);
- investigating potential efficiency gains in deep learning methods through software/algorithmic innovation or novel hardware/processor directions.

The novelty of the research lies in the potential solutions that might result from experimenting with varying machine-learning architectures.

Finally, this project also aligns with EPSRC's strategy of delivering intelligent technologies and systems. The project also adheres to broader Cross-ICT priorities since it works with real healthcare data, making it ICT-centric but not solely related to ICT.

People

ORCID iD

Eu Tong (Student)

Publications

Kwon H (2020) IMUTube: Automatic Extraction of Virtual On-body Accelerometry from Video for Human Activity Recognition in Proceedings of the ACM on Interactive, Mobile, Wearable and Ubiquitous Technologies

Radu V (2018) Multimodal Deep Learning for Activity and Context Recognition in Proceedings of the ACM on Interactive, Mobile, Wearable and Ubiquitous Technologies

Tong C (2019) Tracking Fatigue and Health State in Multiple Sclerosis Patients Using Connected Wellness Devices in Proceedings of the ACM on Interactive, Mobile, Wearable and Ubiquitous Technologies

Studentship Projects

Project Reference Relationship Related To Start End Student Name
EP/N509711/1 01/10/2016 30/09/2021
1892895 Studentship EP/N509711/1 01/10/2017 29/09/2021 Eu Tong
 
Description As a result of work funded through this award, I have produced 4 refereed conference papers, 1 refereed workshop paper and 1 refereed poster (please refer to the list of publications). In Kwon and Tong et al. (2020), we proposed a novel pipeline that converts video data into virtual accelerometry data as a solution to the data-scarcity problem in sensor-based human activity recognition. In Tong et al. (2020), we investigated whether accelerometers, and by extension other inertial sensors, remain appropriate for activity recognition given the rise of imagers (small embedded image sensors). In Tong et al. (2019), we studied the use of machine learning to model Multiple Sclerosis patients' health states and symptoms using data from connected wellness devices at home. In Tong et al. (2018), we presented a study on the use of machine learning to model users' Big-Five personality traits using large-scale networked mobile and appliance data. Radu et al. (2018) investigated the use of deep learning to model multimodal data for activity and context recognition; Tseng et al. (2018) proposed binary filters for convolutional neural networks as a means of reducing the memory and computation requirements of deploying deep learning.
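The core numerical idea behind converting video into virtual accelerometry can be illustrated in highly simplified form: once a 3D joint trajectory has been estimated from video, acceleration can be approximated as the second time derivative of position. The sketch below (an illustration only, not the published IMUTube pipeline, which additionally handles pose estimation, sensor orientation, gravity, and calibration) uses a central second finite difference; the function name and sampling rate are assumptions.

```python
import numpy as np

def virtual_accelerometry(positions, fs):
    """Approximate acceleration from a 3D joint trajectory.

    positions: (T, 3) array of joint positions in metres.
    fs: sampling rate in Hz.
    Returns a (T-2, 3) array of accelerations in m/s^2 using the
    central second finite difference:
        a[t] ~ (p[t+1] - 2*p[t] + p[t-1]) / dt^2
    """
    dt = 1.0 / fs
    return (positions[2:] - 2 * positions[1:-1] + positions[:-2]) / dt ** 2

# Sanity check: a point under constant acceleration a = 9.81 m/s^2
# along z follows p(t) = 0.5 * a * t^2, which the second finite
# difference recovers exactly for a quadratic trajectory.
fs = 50.0
t = np.arange(0, 1, 1 / fs)
p = np.stack([np.zeros_like(t), np.zeros_like(t), 0.5 * 9.81 * t ** 2], axis=1)
acc = virtual_accelerometry(p, fs)
print(np.allclose(acc[:, 2], 9.81))
```

In practice, pose estimates from video are noisy, so differentiating twice amplifies jitter; a real pipeline would smooth the trajectories before differencing.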

Publications:
(* signifies equal first-authorship)

Conference Papers
Kwon, H.*, Tong, C.*, Haresamudram, H., Gao, Y., Abowd, G. D., Lane, N. D., & Ploetz, T. (2020). IMUTube: Automatic extraction of virtual on-body accelerometry from video for human activity recognition. Proceedings of the ACM on Interactive, Mobile, Wearable and Ubiquitous Technologies, 4(3), 1-29.

Tong, C., Craner, M., Vegreville, M., & Lane, N. D. (2019). Tracking Fatigue and Health State in Multiple Sclerosis Patients Using Connected Wellness Devices. Proceedings of the ACM on Interactive, Mobile, Wearable and Ubiquitous Technologies, 3(3), 1-19.

Tseng, V. S., Bhattacharya, S., Fernández-Marqués, J., Alizadeh, M., Tong, C., & Lane, N. D. (2018, July). Deterministic binary filters for convolutional neural networks. In Proceedings of the International Joint Conference on Artificial Intelligence (IJCAI). International Joint Conferences on Artificial Intelligence Organization.

Radu, V., Tong, C., Bhattacharya, S., Lane, N. D., Mascolo, C., Marina, M. K., & Kawsar, F. (2018). Multimodal deep learning for activity and context recognition. Proceedings of the ACM on Interactive, Mobile, Wearable and Ubiquitous Technologies, 1(4), 1-27.

Workshop Papers
Tong, C., Tailor, S. A., & Lane, N. D. (2020, March). Are Accelerometers for Activity Recognition a Dead-end? In Proceedings of the 21st International Workshop on Mobile Computing Systems and Applications (pp. 39-44).

Posters
Tong, C., Harari, G. M., Chieh, A., Bellahsen, O., Vegreville, M., Roitmann, E., & Lane, N. D. (2018, June). Inference of Big-Five Personality Using Large-scale Networked Mobile and Appliance Data. In Proceedings of the 16th Annual International Conference on Mobile Systems, Applications, and Services (pp. 530-530).
Exploitation Route Our work on the generation of virtual accelerometry data from videos offers a solution to the long-standing problems of data scarcity and expensive collection of sensor data, and may allow a significantly wider range of activity classes and population demographics to be included in sensor-based activity recognition in the future. Our studies applying machine learning to data from connected wellness devices offer a practical perspective on model performance in such domains and might prompt future research into further utilising data generated by these devices (beyond smartphones and smartwatches) for healthcare research. Our work investigating imager-based mobile systems for human activity recognition proposes a new generation of embedded sensors, which might be taken forward by efforts to manufacture ever smaller image sensors; it might also be put to use by researchers interested in the automatic detection of Activities of Daily Living (ADL).
Sectors Digital/Communication/Information Technologies (including Software),Healthcare,Leisure Activities, including Sports, Recreation and Tourism

 
Description BDI wearable camera dataset 
Organisation University of Oxford
Department Big Data Institute
Country United Kingdom 
Sector Academic/University 
PI Contribution I am contributing towards building an activity recognition model using data collected by participants in a wearable camera study conducted by the BDI (CAPTURE-24). My current contribution includes building a preprocessing pipeline for the raw images.
Collaborator Contribution They conducted the study in 2015, recruiting participants to use wearable cameras to record their activities, and compiled a dataset of wearable camera images together with the corresponding activity annotations.
Impact Null.
Start Year 2019
 
Description IMUTube Project with Georgia Tech 
Organisation Georgia Institute of Technology
Country United States 
Sector Academic/University 
PI Contribution We contributed to the design of the IMUTube pipeline, which converts video data to virtual accelerometry data; the design and execution of experiments to robustly test the capabilities and limitations of the pipeline; and the writing of the resultant research paper.
Collaborator Contribution Our partners contributed to the design of the IMUTube pipeline, which converts video data to virtual accelerometry data; the design and execution of experiments to robustly test the capabilities and limitations of the pipeline; and the writing of the resultant research paper.
Impact This collaboration resulted in an IMWUT publication "IMUTube: Automatic extraction of virtual on-body accelerometry from video for human activity recognition" (https://doi.org/10.1145/3411841).
Start Year 2020