BEWARE: Behaviour based Enhancement of Wide-Area Situational Awareness in a Distributed Network of CCTV Cameras

Lead Research Organisation: Queen Mary University of London
Department Name: Computer Science

Abstract

There are now large networks of CCTV cameras collecting colossal amounts of video data. Many of these networks deploy not only fixed but also mobile cameras over wireless connections, and an increasing number of the cameras are either PTZ controllable or embedded smart cameras. A multi-camera system has the potential to gain better viewpoints, resulting in both improved imaging quality and more relevant details being captured. However, more is not necessarily better: such a system can also cause information overflow and confusion if data content is not analysed in real time to drive correct camera selection and capture decisions. Moreover, current PTZ cameras are mostly controlled manually by operators according to ad hoc criteria. There is an urgent need to develop automated systems that monitor the behaviour of people cooperatively across a distributed network of cameras and make on-the-fly decisions for more effective content selection during data capture. To date, there is no system capable of performing such tasks, and fundamental problems need to be tackled. This project will develop novel techniques for video-based people tagging (consistent labelling) and behaviour monitoring across a distributed network of CCTV cameras to enhance global situational awareness over a wide area. More specifically, we will focus on developing three critical underpinning capabilities:

(a) A model for robust detection and tagging of people over wide areas spanning different physical sites captured by a distributed network of cameras, e.g. monitoring the activities of a person travelling through a city or cities.

(b) A model for enhancing global situational awareness by correlating behaviours across a network of cameras located at different physical sites, and for real-time detection of abnormal behaviours in public spaces across camera views. The model must cope with changes in visual context and in the definition of abnormality, e.g. what counts as abnormal needs to be modelled according to the time of day, location and scene context.

(c) A model for automatic selection and control of Pan-Tilt-Zoom (PTZ) and embedded smart cameras (including wireless ones) in a surveillance network to 'zoom into' people based on behaviour analysis using a global situational awareness model, thereby achieving active sampling of higher-quality visual evidence on the fly in a global context. For example, when a car enters a restricted zone, having also been spotted stopping unusually elsewhere, the optimally situated PTZ/embedded smart camera is activated to perform adaptive image content selection and capture higher-resolution imagery of, for instance, the driver's face.
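To make capability (a) concrete, the following is a minimal illustrative sketch (Python/NumPy) of tagging a person across two camera views by ranking candidate detections against a probe appearance descriptor. It is a plain nearest-neighbour baseline under the assumption that appearance descriptors are already extracted; it is not the project's own re-identification model (see the Relative Distance Comparison paper under Selected Publications below).

```python
# Illustrative only: rank candidate detections from camera B against a probe
# person seen in camera A, using cosine distance between appearance descriptors.
# The descriptors (e.g. colour/texture histograms) are assumed precomputed;
# this is a nearest-neighbour baseline, not the project's learned ranking model.
import numpy as np

def rank_gallery(probe_feat, gallery_feats):
    """Return gallery indices ordered from most to least likely match.

    probe_feat:    (d,) descriptor of the person observed in camera A.
    gallery_feats: (n, d) descriptors of candidate detections in camera B.
    """
    probe = probe_feat / (np.linalg.norm(probe_feat) + 1e-12)
    gallery = gallery_feats / (np.linalg.norm(gallery_feats, axis=1, keepdims=True) + 1e-12)
    dists = 1.0 - gallery @ probe  # cosine distance: smaller means more similar
    return np.argsort(dists)

# Hypothetical usage: propagate the probe's identity tag to the top-ranked candidate.
rng = np.random.default_rng(0)
ranking = rank_gallery(rng.random(64), rng.random((10, 64)))
print("best matching gallery index:", ranking[0])
```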

Publications


Fu Y (2014) Learning Multimodal Latent Attributes in IEEE Transactions on Pattern Analysis and Machine Intelligence

Gong S (2014) Person Re-Identification

Hospedales T (2013) Finding Rare Classes: Active Learning with Generative and Discriminative Models in IEEE Transactions on Knowledge and Data Engineering

Hospedales T (2011) Video Behaviour Mining Using a Dynamic Topic Model in International Journal of Computer Vision

 
Description Developed mathematical models and computer systems for automatic person re-identification in public spaces over distributed networks of cameras, global situational correlation of human behaviours observed across a camera network, and abnormal behaviour recognition in crowded public spaces.
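As an illustration of the abnormal-behaviour recognition component, the sketch below (Python/NumPy) flags clips whose activity descriptors are unlikely under a simple Gaussian model fitted to normal footage. This is only a hypothetical baseline for orientation; the feature extraction step is assumed, and the models actually developed in the project (e.g. the dynamic topic and weakly supervised joint topic models cited below) are considerably richer.

```python
# Illustrative only: score clips by their squared Mahalanobis distance to a
# Gaussian model of "normal" activity; larger scores indicate more abnormal clips.
import numpy as np

class GaussianAnomalyDetector:
    def fit(self, normal_feats):
        # normal_feats: (n, d) descriptors extracted from clips of normal activity.
        self.mean = normal_feats.mean(axis=0)
        cov = np.cov(normal_feats, rowvar=False)
        # A small ridge term keeps the covariance matrix invertible.
        self.cov_inv = np.linalg.inv(cov + 1e-6 * np.eye(cov.shape[0]))
        return self

    def score(self, feats):
        # Squared Mahalanobis distance of each descriptor to the normal model.
        diff = feats - self.mean
        return np.einsum("nd,de,ne->n", diff, self.cov_inv, diff)

# Hypothetical usage: clips scoring above a chosen threshold are flagged for review.
rng = np.random.default_rng(1)
detector = GaussianAnomalyDetector().fit(rng.normal(size=(500, 8)))
scores = detector.score(np.vstack([rng.normal(size=(3, 8)),
                                   rng.normal(loc=6.0, size=(1, 8))]))
print(scores)  # the last, shifted clip should score far higher than the first three
```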



Selected Publications:



S. Gong and T. Xiang. Visual Analysis of Behaviour: From Pixels to Semantics, 376 pages, Springer, May 2011.



W. Zheng, S. Gong and T. Xiang. Re-identification by Relative Distance Comparison. IEEE Transactions on Pattern Analysis and Machine Intelligence, Vol. 35, No. 3, pp. 653-668, March 2013.



J. Li, S. Gong and T. Xiang. Learning Behavioural Context. International Journal of Computer Vision, Vol. 97, No. 3, pp. 276-304, May 2012.



T. Hospedales, J. Li, S. Gong and T. Xiang. Identifying Rare and Subtle Behaviours: A Weakly Supervised Joint Topic Model. IEEE Transactions on Pattern Analysis and Machine Intelligence, Vol. 33, No. 12, pp. 2451-2464, December 2011.



C.C. Loy, T. Xiang and S. Gong. Time-Delayed Correlation Analysis for Multi-Camera Activity Understanding. International Journal of Computer Vision, Vol. 90, No. 1, pp. 106-129, October 2010.
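The time-delayed correlation idea behind the last publication above can be illustrated with a small sketch (Python/NumPy): given per-frame activity signals from two camera regions (e.g. foreground motion counts, assumed precomputed), search for the frame lag that maximises their correlation. This is a simplified stand-in for exposition only, not the published model.

```python
# Illustrative only: estimate the typical travel delay between two camera regions
# by finding the frame lag that maximises the Pearson correlation of their
# activity signals.
import numpy as np

def best_time_delay(activity_a, activity_b, max_lag):
    """Return (lag, correlation): the shift of signal B relative to signal A,
    in frames, giving the highest Pearson correlation."""
    n = min(len(activity_a), len(activity_b))
    a_sig, b_sig = np.asarray(activity_a[:n]), np.asarray(activity_b[:n])
    best_lag, best_r = 0, -np.inf
    for lag in range(-max_lag, max_lag + 1):
        if lag >= 0:
            a, b = a_sig[:n - lag], b_sig[lag:]
        else:
            a, b = a_sig[-lag:], b_sig[:n + lag]
        if len(a) < 2 or a.std() == 0 or b.std() == 0:
            continue
        r = np.corrcoef(a, b)[0, 1]
        if r > best_r:
            best_lag, best_r = lag, r
    return best_lag, best_r

# Hypothetical usage: camera B sees the same flow of people about 40 frames later.
rng = np.random.default_rng(2)
flow = rng.random(1000)
lag, r = best_time_delay(flow, np.roll(flow, 40), max_lag=100)
print(lag, round(r, 3))  # expect a lag close to 40 with correlation near 1
```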
Exploitation Route Public security and safety; Infrastructure protection; University spin-out company
Sectors Digital/Communication/Information Technologies (including Software), Security and Diplomacy, Transport

URL http://www.eecs.qmul.ac.uk/~sgg/BEWARE/
 
Description Spin-out company Vision Semantics; DSTL and MOD development contracts; US DOD development contracts; patent licensing.
Sector Digital/Communication/Information Technologies (including Software), Security and Diplomacy
Impact Types Economic

 
Description BAE Systems
Amount £412,000 (GBP)
Organisation BAE Systems 
Sector Private
Country United Kingdom
Start 06/2010 
End 05/2013
 
Description British Airports Authority (BAA)
Amount £12,000 (GBP)
Organisation Heathrow Airport Holdings 
Sector Private
Country United Kingdom
Start 01/2011 
End 11/2011
 
Description DSTL
Amount £108,000 (GBP)
Organisation Defence Science & Technology Laboratory (DSTL) 
Sector Public
Country United Kingdom
Start 07/2011 
End 03/2012
 
Description EU FP7 Security Programme
Amount €413,000 (EUR)
Organisation European Commission 
Department Seventh Framework Programme (FP7)
Sector Public
Country European Union (EU)
Start 03/2014 
End 05/2016
 
Description EU FP7 Security Programme
Amount €546,000 (EUR)
Organisation European Commission 
Department Seventh Framework Programme (FP7)
Sector Public
Country European Union (EU)
Start 01/2014 
End 07/2017
 
Description MOD CDE
Amount £12,000 (GBP)
Organisation Defence Science & Technology Laboratory (DSTL) 
Department Centre for Defence Enterprise
Sector Public
Country United Kingdom
Start 01/2012 
End 04/2012
 
Description Ministry of Defence
Amount £120,000 (GBP)
Organisation Ministry of Defence (MOD) 
Sector Public
Country United Kingdom
Start 10/2011 
End 09/2015
 
Description Ministry of Defence
Amount £69,500 (GBP)
Organisation Ministry of Defence (MOD) 
Sector Public
Country United Kingdom
Start 09/2011 
End 03/2015
 
Description Royal Society Newton Advanced Fellowship
Amount £111,000 (GBP)
Organisation The Royal Society 
Sector Charity/Non Profit
Country United Kingdom
Start 03/2016 
End 02/2019
 
Description US Army Research Lab
Amount $120,000 (USD)
Organisation US Army Research Lab 
Sector Public
Country United States
Start 09/2008 
End 12/2010
 
Company Name Vision Semantics Ltd 
Description Vision Semantics Ltd is a spin-out company of Queen Mary University of London, developing self-configuring video analysis and dynamic scene understanding tools & applications that employ innovative learning and statistical methods. 
Year Established 2007 
Impact Five patents granted and pending, a joint-venture start-up in the Far East (2012), and a worldwide licensing agreement to set up another start-up (2014).