Machine learning-based vision for "green-on-green" spraying

Lead Research Organisation: University of Lincoln
Department Name: School of Computer Science

Abstract

The goal of intelligent spraying is to target herbicides more precisely, reducing waste and benefiting the environment. A key step in such spraying is identifying weeds. Typical approaches use computer vision, usually methods based on machine learning, operating on images taken by cameras that view weeds and crops from above.

Current vision technology has proved able to handle "green-on-brown" scenarios, detecting weeds accurately when plants stand out clearly against a distinctly coloured background such as bare soil. This is sufficient in the early stages of growth, when crops and weeds are small. However, in later stages of growth, when the canopies of crops and weeds begin to overlap, accurately and efficiently detecting weeds becomes much harder. This "green-on-green" scenario is currently beyond what can feasibly be handled. Solving the "green-on-green" weed detection problem is the focus of this PhD.

The reason that "green-on-green" is hard is that we cannot rely on simple colour segmentation. In the "green-on-brown" case, segmenting images into green and brown areas yields green regions with distinctive shapes that can easily be classified. This is what happens in existing detectors, whether they are based on classical machine vision or on more modern deep learning approaches. When plants overlap, the green regions no longer contain such distinctive shapes, or such large areas of them, and existing approaches to detection struggle as a result. The answer is to build detectors that look for things other than just colour. This PhD will pursue two lines of inquiry within the framework of deep learning-based vision: adding additional dimensions to the image data, and building detectors that focus on different features.
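To make this concrete, the following minimal sketch (Python with OpenCV and NumPy) shows the kind of colour segmentation that works in the "green-on-brown" case. It uses the common excess-green index rather than any method developed in this project, and the image file name and threshold are illustrative assumptions.

    import cv2
    import numpy as np

    def vegetation_mask(bgr_image, threshold=20.0):
        # Excess-green index ExG = 2G - R - B: a common baseline for
        # separating green vegetation from a brown soil background.
        b, g, r = cv2.split(bgr_image.astype(np.float32))
        exg = 2.0 * g - r - b
        # Pixels with a strong excess-green response are treated as vegetation.
        return (exg > threshold).astype(np.uint8) * 255

    # Hypothetical usage: the resulting green blobs have distinctive shapes
    # that a downstream classifier can label as crop or weed.
    image = cv2.imread("field_plot.png")  # illustrative file name
    mask = vegetation_mask(image)
    num_labels, labels = cv2.connectedComponents(mask)
    print(f"{num_labels - 1} candidate vegetation regions")

Once canopies overlap, the mask collapses into one large connected green region, which is precisely why this kind of segmentation stops being useful in the "green-on-green" case.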

1) Adding dimensions. Conventional imaging uses RGB cameras, which report intensity in three wide visual bands (the familiar red, green and blue). We will investigate the use of additional layers of data. One possibility we will examine is depth data generated by RGB-D cameras. These cameras report the distance to the scene at each pixel, giving, in effect, information about the contours of the canopy. We will also look at multispectral cameras, which provide an additional spectral band, the "red edge". This band lies between the usual red band of RGB cameras and the near infra-red, and is a region in which the reflectance of vegetation (the quantity a camera measures) changes sharply, so the "red edge" band tends to be very informative.
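As a sketch of what adding dimensions might look like in practice, the minimal NumPy example below computes a normalized difference red-edge index (one common choice, not necessarily the one the project will adopt) and stacks it with RGB and depth into a single multi-channel array that a detector could consume. The synthetic band arrays and the assumption that all bands are co-registered to the same pixel grid are illustrative.

    import numpy as np

    def ndre(nir, red_edge, eps=1e-6):
        # Normalized Difference Red Edge index: (NIR - RE) / (NIR + RE).
        # Vegetation reflectance rises sharply across the red edge, so this
        # index tends to separate plant material from background.
        nir = nir.astype(np.float32)
        red_edge = red_edge.astype(np.float32)
        return (nir - red_edge) / (nir + red_edge + eps)

    def stack_channels(rgb, depth, nir, red_edge):
        # Stack RGB, depth and a red-edge index into one H x W x 5 array,
        # assuming all bands share the same pixel grid.
        index = ndre(nir, red_edge)
        return np.dstack([rgb.astype(np.float32) / 255.0,
                          depth.astype(np.float32)[..., None],
                          index[..., None]])

    # Illustrative shapes only: a 480 x 640 frame with synthetic bands.
    h, w = 480, 640
    rgb = np.random.randint(0, 256, (h, w, 3), dtype=np.uint8)
    depth = np.random.rand(h, w)
    nir = np.random.rand(h, w)
    red_edge = np.random.rand(h, w)
    print(stack_channels(rgb, depth, nir, red_edge).shape)  # (480, 640, 5)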

2) Focusing on different features. Recent research in machine learning-based vision has begun to focus on detectors that are, in effect, ensembles in which each member looks for different elements of an image. A typical example is to use ensemble members trained on objects of different sizes, which produces detectors that are somewhat scale invariant. We will focus on identifying features that are relevant for weed detection, training detectors to identify just those features, and then combining these detectors.
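The sketch below illustrates the general idea of combining detectors in this way, assuming two hypothetical ensemble members that each return bounding boxes and confidence scores; it uses torchvision's standard non-maximum suppression to merge overlapping detections, and is not a description of the detectors that will actually be built.

    import torch
    from torchvision.ops import nms

    def combine_detections(detector_outputs, iou_threshold=0.5):
        # Merge weed detections from several feature- or scale-specific
        # detectors. Each element of detector_outputs is a (boxes, scores)
        # pair, with boxes as an N x 4 tensor in (x1, y1, x2, y2) pixels.
        boxes = torch.cat([b for b, _ in detector_outputs], dim=0)
        scores = torch.cat([s for _, s in detector_outputs], dim=0)
        # Suppress overlapping boxes proposed by different ensemble members.
        keep = nms(boxes, scores, iou_threshold)
        return boxes[keep], scores[keep]

    # Hypothetical outputs from a small-weed and a large-weed detector.
    small_weed = (torch.tensor([[10., 10., 40., 40.]]), torch.tensor([0.9]))
    large_weed = (torch.tensor([[12., 8., 45., 42.],
                                [200., 150., 320., 300.]]),
                  torch.tensor([0.7, 0.8]))
    boxes, scores = combine_detections([small_weed, large_weed])
    print(boxes, scores)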

Accuracy of detection, while important, is not the only consideration. The methods we develop also need to run fast enough to be deployed on a sprayer moving at 15 km/h. In general, when using machine learning-based approaches, there is a trade-off between the accuracy that can be obtained and the speed with which images can be processed. This trade-off can be managed along a number of dimensions, such as image size, the depth of the image-processing backbone, and the complexity of the detection head. We will examine this trade-off for the various approaches that we consider. In doing so, we will take into account the processing options available as part of the setup at Riseholme: processing on-board the sprayer, edge processing using the 5G setup, cloud processing, and other options such as FPGA acceleration.
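A back-of-the-envelope sketch of what the 15 km/h requirement implies for per-frame processing time is given below; the one-metre camera footprint and 20% frame overlap are illustrative assumptions, not measured parameters of the Riseholme setup.

    def frame_budget_ms(speed_kmh=15.0, footprint_m=1.0, overlap=0.2):
        # Rough per-frame processing budget for an on-board detector.
        # The sprayer advances at speed_kmh; each frame covers footprint_m
        # of ground along the direction of travel, and consecutive frames
        # overlap by the given fraction so no ground is missed.
        speed_ms = speed_kmh / 3.6                    # km/h -> m/s
        advance_per_frame = footprint_m * (1.0 - overlap)
        frames_per_second = speed_ms / advance_per_frame
        return 1000.0 / frames_per_second             # milliseconds per frame

    # At 15 km/h with a 1 m footprint and 20% overlap this gives roughly
    # 190 ms per frame, i.e. about 5 frames per second.
    print(f"{frame_budget_ms():.0f} ms per frame")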

Planned Impact

The proposed CDT provides a unique vision of advanced RAS technologies embedded throughout the food supply chain, training the next generation of specialists and leaders in agri-food robotics and providing the underpinning research for the next generation of food production systems. These systems in turn will support the sustainable intensification of food production, the national agri-food industry, the environment, food quality and health.

RAS technologies are transforming global industries, creating new business opportunities and driving productivity across multiple sectors. The Agri-Food sector is the largest manufacturing sector in both the UK and the global economy. The UK food chain has a GVA of £108bn and employs 3.6m people. It is fundamentally challenged by global population growth, demographic changes, political pressures affecting migration, and environmental impacts. In addition, agriculture has the lowest productivity of all industrial sectors (ONS, 2017). However, many RAS technologies are in their infancy; developing them within the agri-food sector will deliver impact but also provide a challenging environment that will significantly push the state of the art in the underpinning RAS science. Although the opportunity for RAS is widely acknowledged, a shortage of trained engineers and specialists has limited the delivery of impact. This CDT directly addresses that need and will produce the largest global cohort of RAS specialists in Agri-Food.

The impacts are multiple and include:

1) Impact on RAS technology. The Agri-Food sector provides an ideal test bed to develop multiple technologies that will have application in many industrial sectors and research domains. These include new approaches to autonomy and navigation in field environments; complex picking, grasping and manipulation; and novel applications of machine learning and AI in critical and essential sectors of the world economy.

2) Economic Impact. In the UK alone, the Made Smarter Review (2017) estimates that automation and RAS will create £183bn of GVA over the next decade, £58bn of which will come from increased technology exports and the reshoring of manufacturing. Expected impacts within Agri-Food are demonstrated by the £3.0M of industry support, including the world's largest agricultural engineering company (John Deere), the multinational Syngenta, one of the world's largest robotics manufacturers (ABB), the UK's largest farming company owned by James Dyson (one of the largest private investors in robotics), the UK's largest salads and fruit producer, plus multiple SME RAS companies. These partners recognise the potential and need for RAS (see NFU and IAgrE Letters of Support).

3) Societal impact. Following the EU referendum, there is significant uncertainty over whether the seasonal labour employed in the sector will remain available, while the demographics of an ageing population further limit the supply of manual labour. We see robotic automation as a means of performing onerous and difficult jobs in adverse environments, while advancing the UK skills base, enabling human jobs to move up the value chain and attracting skilled workers and graduates to Agri-Food.

4) Diversity impact. Gender under-representation is also a concern across the computer science, engineering and technology sectors, with only 15% of undergraduates being female. Through engagement with the EPSRC ASPIRE (Advanced Strategic Platform for Inclusive Research Environments) programme, AgriFoRwArdS will become an exemplar CDT with an EDI impact framework that is transferable to other CDTs.

5) Environmental Impact. The Agri-food sector accounts for 13% of UK carbon emissions and 70% of fresh water use, while diffuse pollution from fertilisers and pesticides causes environmental damage. RAS technology, such as robotic weeders and field robots with advanced sensors, will enable a paradigm shift in precision agriculture that will sustainably intensify production while minimising environmental impacts.

Publications


Studentship Projects

Project Reference   Relationship   Related To     Start        End          Student Name
EP/S023917/1                                      01/04/2019   30/09/2031
2457969             Studentship    EP/S023917/1   01/10/2020   30/04/2025   David Churchill