
Pig ID: developing a deep learning machine vision system to track pigs using individual biometrics

Lead Research Organisation: University of the West of England
Department Name: Bristol Robotics Laboratory

Abstract

Abstracts are not currently available in GtR for all funded research. This is normally because the abstract was not required at the time of proposal submission, but may be because it included sensitive information such as personal details.

Technical Summary

We aim to develop a new technological tool that uses deep learning machine vision to biometrically identify and track individual pigs. By facilitating behaviour monitoring for health and welfare, it enables a step-change in 'precision livestock farming', and the work has strong industry support.

Training CNNs and validating their performance requires time-consuming manual labelling of ground-truth images. Obj 1 produces this labelled data while increasing data volume by: a) using a semi-supervised approach with a modified Mask-RCNN for instance segmentation, and b) developing a new approach for identity validation: pigs are individually marked with invisible sun-cream, which appears black under UV, and are recorded simultaneously with a visible-light video camera paired with a modified UV-detecting camera. Our labelled, paired videos will be a valuable public resource for other researchers.
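The UV-paired recording idea can be sketched as a simple ground-truth labelling step: for each pig detection in the visible-light frame, read the sun-cream mark from the co-registered UV frame. The mark-encoding scheme below (identity given by a count of marked stripes), the threshold, and the function name are illustrative assumptions, not the project's actual protocol.

```python
import numpy as np

def label_ids_from_uv(uv_frame, boxes, dark_thresh=40):
    """Assign a ground-truth ID to each detected pig by reading the
    sun-cream mark in the paired UV frame (marks appear near-black).

    uv_frame : 2-D uint8 greyscale image from the UV-detecting camera
    boxes    : list of (x0, y0, x1, y1) pig bounding boxes, assumed to be
               co-registered with the visible-light frame
    Returns one integer per box; here the hypothetical encoding is
    'number of marked stripes on the pig's back'.
    """
    ids = []
    for (x0, y0, x1, y1) in boxes:
        patch = uv_frame[y0:y1, x0:x1]
        dark = patch < dark_thresh            # pixels covered by sun-cream
        row_marked = dark.any(axis=1)         # rows crossing a stripe
        # Count rising edges of the marked-row signal = stripe count
        stripes = int(np.diff(row_marked.astype(int), prepend=0).clip(min=0).sum())
        ids.append(stripes)
    return ids
```

In a real pipeline the boxes would come from the Mask-RCNN detections on the visible-light frame, so each automatically segmented pig inherits a validated identity label without manual annotation.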
Commercial growing pigs are not ID-marked. Building on our existing machine vision technology, which identifies pigs from their faces, a convolutional neural network (CNN) will be trained to recognise the biometrics of individual pigs viewed from above in a group (Obj 2; we have shown feasibility in previous work).
Next (Obj 3), we use a refined Mask-RCNN to extract individual pigs from their surroundings and apply our trained CNN, enabling spatial tracking. Our innovative 'tracking by recognition' eliminates lost IDs and track-swaps caused by close proximity or occlusion, by re-establishing biometric ID. We will retrain the CNN to achieve long-term tracking over weeks as pigs grow and change appearance.
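The 'tracking by recognition' idea above can be illustrated in a few lines: rather than linking detections by motion continuity (where crossings and occlusions cause swaps), each detection carries a biometric embedding and is assigned every frame to the nearest enrolled identity. The class, the embedding vectors, and the nearest-centroid rule are a minimal sketch standing in for the project's trained CNN, not its actual implementation.

```python
import numpy as np

class TrackerByRecognition:
    """Minimal sketch: identity is re-established per frame from the
    biometric embedding, so tracks cannot swap after close proximity
    or occlusion. Embeddings are assumed to come from a trained CNN."""

    def __init__(self, enrolled):
        # enrolled: {pig_id: centroid embedding} from an enrolment phase
        self.ids = list(enrolled)
        self.centroids = np.stack([enrolled[i] for i in self.ids])
        self.tracks = {i: [] for i in self.ids}   # pig_id -> positions

    def update(self, detections):
        """detections: list of (position, embedding) for one frame."""
        for pos, emb in detections:
            d = np.linalg.norm(self.centroids - emb, axis=1)
            self.tracks[self.ids[int(d.argmin())]].append(pos)
```

Because assignment depends only on appearance, two pigs that cross paths keep their correct track histories; a motion-based tracker would risk swapping them at the crossing point.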
We will develop 'open set recognition' (Obj 4), avoiding extensive manual re-training for each new group. Our previous work showed that a latent feature space can be optimised so that images which appear similar (because they are from the same pig) cluster together, separated from images of other pigs.
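The open-set decision described above reduces to a distance test in the latent space: a query embedding close to an enrolled cluster centroid is recognised, while one far from every centroid is treated as a new, unseen pig and can be enrolled automatically. The function name, Euclidean metric, and threshold value below are illustrative assumptions.

```python
import numpy as np

def open_set_identify(emb, centroids, ids, novel_thresh=0.5):
    """Open-set sketch: return the enrolled ID whose latent-space
    centroid is nearest to `emb`, or None if every centroid is farther
    than `novel_thresh` (i.e. the query is a novel pig to be enrolled).
    Metric and threshold are assumptions for illustration."""
    if not ids:
        return None                       # nothing enrolled yet
    d = np.linalg.norm(np.stack(centroids) - emb, axis=1)
    k = int(d.argmin())
    return ids[k] if d[k] <= novel_thresh else None
```

A `None` result would trigger enrolment: the new embedding seeds a fresh centroid, so a new group of pigs is absorbed without manual re-training.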
Obj 5 integrates Objs 3 and 4, refining a state-of-the-art system to enrol and track individual pigs over weeks as they grow.

Publications

 
Description: Interview for Reuters
Form Of Engagement Activity: A broadcast e.g. TV/radio/film/podcast (other than news/press)
Part Of Official Scheme?: No
Geographic Reach: International
Primary Audience: Media (as a channel to the public)
Results and Impact: Melvyn Smith undertook a recorded video interview for Reuters concerning the use of AI in animal biometric recognition and emotion detection.
Year(s) Of Engagement Activity: 2025
URL: https://www.reuters.com/video/technology/