Facial Deformable Models of Animals

Lead Research Organisation: University of Nottingham
Department Name: School of Computer Science

Abstract

Although the automatic monitoring of animals and their behaviour is of great importance to the field of animal health and welfare, developing computational tools for this purpose has received little attention from the scientific community. Aside from the emotional value that they may have to people, animals are also important to society and the economy, and developing such tools will be a big, transformative step with direct impact on all these areas. Towards this end, F.D.M.A. will develop novel tools for detecting and tracking animal facial behaviour, and in particular, for learning and fitting facial deformable models of animals to unconstrained images/video. Although algorithms for detecting and tracking human faces have recently been shown capable of coping to some extent with unseen variations (e.g. pose, expression, illumination, background and occlusion), there is much more variability in animal faces than current solutions have addressed. F.D.M.A. sets out to challenge the current state-of-the-art methods in face alignment and tracking, and to develop learning and fitting algorithms that can deal with the very large shape and appearance variations typically encountered in animal faces. To the best of our knowledge, this problem has never been explored by the Computer Vision community. It is different from, and significantly more difficult than, prior work on human faces, as animal faces exhibit a much larger degree of variability in shape and appearance as well as in pose and expression.

The tools to be developed by F.D.M.A. will enable the automatic analysis and understanding of animal facial behaviour, which is of growing importance to animal health and welfare. The potential benefits of enhanced animal health and welfare are great: for animals, their owners, society, public health and the economy. Cats and dogs, the two species chosen by F.D.M.A., are the most popular companion animals worldwide and are of enormous societal and economic importance. To the best of our knowledge, there is no prior work in computer vision on detecting and tracking the facial deformable shape and motion of animals in images and videos. F.D.M.A. sets out to develop such tools, which will enable automatic understanding of animal facial behaviour. Aside from animal health and welfare, the Computer Vision tools to be developed by F.D.M.A. can be used to facilitate research in other scientific disciplines, such as Animal Behaviour, Vision and Robotics.

Planned Impact

The proposed research finds application in promoting animal health and welfare, and opens up new directions in basic research, for example in animal behaviour, vision, and robotics.

Animal health and welfare. Changes in behaviour are often an early sign of potential welfare problems, and it has been recognised since the time of Darwin that the face is a focus for the expression of emotions in animals and man. Thus the face provides a single potential focus for identifying a range of states of concern, such as pain, fear and frustration. However, exploiting this potential requires addressing the technical challenges outlined in this proposal. If the F.D.M.A. tools can be developed, they will open up an entirely new way for computers to assess animal welfare.

Cats and dogs. F.D.M.A. will focus on dogs and cats, arguably the most popular companion animals in the UK and worldwide. Aside from the emotional value that companion animals have to people, there is also an associated multi-billion-pound pet industry.

Animal behaviour. Developing computer vision tools for detecting and tracking animal facial behaviour opens up tremendous potential to objectively measure indicators of animal emotion that until now could only be assessed subjectively. As a proof of concept, the tools to be developed by F.D.M.A. will be applied to the problem of detecting pain in cats from videos.

Vision. A common practice in research on the primate visual system is to fix the head of the animal (e.g. a macaque) to make it focus on the stimuli and allow for recording. This setting causes discomfort to the animal. F.D.M.A. aims to develop tools for the automatic detection and tracking of animal facial behaviour (including gaze), and hence will assist research on primate vision. Dissemination to the Vision community will be achieved through our link with the Pack Lab, a world-leading group on Visual Neurophysiology at McGill University.

Robotics. A significant portion of robotics research has concentrated on human-robot interaction and, in particular, on developing robots capable of understanding human behaviour. However, little research has been conducted on how robots can interact with animals in domestic and working environments, and collaborate with them to assist their owners (e.g. elderly or disabled people). Dissemination to the Robotics community will be achieved through the Lincoln Centre for Autonomous Systems, and in particular through contacts provided by Dr. Marc Hanheide and Dr. Nicola Bellotto, who are experts on human-robot interaction.

Commercial exploitation. We believe that the technology developed in this project has high potential for commercialisation. A number of face detection and tracking technologies have recently found their way to market, many of them developed in the UK, which is a world leader in computer vision research (e.g. Image Metrics, Vision Metric, Aurora, OmniPerception). However, to the best of our knowledge, no company offers face detection and tracking technologies for animal faces. With this in mind, we will exploit opportunities for commercialisation of the developed technology. To ensure the potential for commercial exploitation, we will protect the developed IP where appropriate. To this end, we will seek advice from Enterprise@Lincoln.
 
Description Face alignment is the process of localising a set of facial landmarks, such as the tip of the nose, the corners of the eyes and the pupils. Face alignment is a key component of face analysis technologies such as face identification, facial expression analysis (e.g. happiness, surprise) and facial behaviour analysis (e.g. pain). While a large amount of work has been devoted to human face alignment, there has been very little work on animal face alignment, a problem which is significantly more challenging due to the large appearance variations of animal faces (as opposed to human faces).

In this project we explored a number of techniques for face alignment under the large appearance variations typically encountered in animal faces, and identified Deep Neural Networks as the most promising and impactful approach for solving the problem. We were the first to apply a family of state-of-the-art Deep Learning techniques to this problem in 2016, and one of our papers won first prize in the 3D Face Alignment in the Wild Challenge. We continued our work in this direction in 2017, publishing 3 related papers at ICCV 2017, one of the two premier conferences in Computer Vision research. One of these papers is already considered the most comprehensive work on face alignment using Deep Learning. The code we provide on GitHub has been downloaded by thousands of researchers (starred more than 1,000 times). Another of the papers is the first to explore how these methods can be run on devices with limited computational resources (e.g. a Raspberry Pi), and for its contributions was selected as an oral presentation at ICCV (45 out of 2,143 submissions).
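For illustration, deep networks of the kind referenced above are commonly trained to regress one heatmap per landmark, with the landmark's image location recovered from the peak of each map. The following is a minimal NumPy sketch of that decoding step only (the function name and toy data are hypothetical, not taken from the project's released code):

```python
import numpy as np

def decode_landmarks(heatmaps):
    """Recover (x, y) landmark coordinates from per-landmark heatmaps.

    heatmaps: array of shape (num_landmarks, H, W); the peak of each
    map marks the predicted location of the corresponding landmark.
    """
    coords = []
    for hm in heatmaps:
        # argmax over the flattened map, then convert back to row/column
        y, x = np.unravel_index(np.argmax(hm), hm.shape)
        coords.append((x, y))
    return np.array(coords)

# Toy example: two synthetic 8x8 heatmaps with known peaks.
hm = np.zeros((2, 8, 8))
hm[0, 3, 5] = 1.0   # landmark 0 peaks at x=5, y=3
hm[1, 6, 2] = 1.0   # landmark 1 peaks at x=2, y=6
print(decode_landmarks(hm))  # [[5 3]
                             #  [2 6]]
```

In practice the peak is usually refined to sub-pixel accuracy and mapped back from heatmap resolution to image resolution, but the argmax step above is the core of how coordinates are read out of heatmap-based models.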

In 2020 we published a paper at CVPR 2020 and released the largest dataset of animal faces to date. The dataset is one of the major outcomes of this project and is made available at https://fdmaproject.wordpress.com/. We were able to collect this dataset thanks to the Zooniverse platform: https://www.zooniverse.org/.
Exploitation Route We have been collaborating with Prof. Daniel Mills to apply this technology to the detection of visible signs of feline pain. More recently, we established a collaboration with Dr. Matthew Bell to pursue extensions of the developed technologies for monitoring livestock; to this end, we were able to secure funding for two PhD scholarships. Our results and techniques are also general enough to be applied to other domains and structured objects beyond faces: our methods have produced state-of-the-art results for the problem of localising the joints of the human body. We will be looking for opportunities to apply our methods to the medical domain, too.
Sectors Agriculture, Food and Drink; Creative Economy; Digital/Communication/Information Technologies (including Software); Healthcare

URL https://fdmaproject.wordpress.com
 
Description UCPPROJECT
Amount £55,000 (GBP)
Organisation Feline Friends 
Sector Charity/Non Profit
Country United Kingdom
Start 10/2015 
End 09/2016
 
Description Monitoring livestock using Deep Learning 
Organisation University of Nottingham
Department School of Biosciences
Country United Kingdom 
Sector Academic/University 
PI Contribution This is a collaboration I established with Dr. Matthew Bell, an Assistant Professor in the School of Biosciences at the University of Nottingham. Dr. Bell is very keen to apply state-of-the-art computer vision techniques to monitoring livestock animals, and my team and I will be providing the related expertise in Computer Vision. So far, we have secured funding for two PhD scholarships: one co-funded (50%-50%) by the Bomford Trust and the University of Nottingham, and one from the EPSRC Industrial Strategy. Both scholarships will be on applying Deep Learning techniques to the detailed understanding of animal (cow) behaviour from video streams.
Collaborator Contribution Dr. Bell's research investigates agricultural systems and sustainable food production. Within the context of our collaboration, Dr. Bell provides the necessary expertise in Animal Behaviour analysis and coordinates the data collection and annotation process, including setting up and maintaining the camera system for animal monitoring at the Sutton Bonington Campus (University of Nottingham).
Impact This collaboration is truly multidisciplinary, with me providing expertise in Computer Vision and Dr. Bell in Animal Behaviour.
Start Year 2017
 
Description Prof. Daniel Mills 
Organisation University of Lincoln
Department School of Life Sciences
Country United Kingdom 
Sector Academic/University 
PI Contribution I collaborate with Prof. Mills on UCCProject, which is related to EP/M02153X/1 "Facial Deformable Models of Animals".
Collaborator Contribution Prof. Mills is an expert on Animal Behaviour and an advisor for EP/M02153X/1 "Facial Deformable Models of Animals".
Impact Work in progress; no outputs to report yet. This is a multidisciplinary collaboration: Prof. Mills is an expert on Animal Behaviour and I work on Computer Vision.
Start Year 2014