Monitoring coastal environments using imaging sonars and machine learning

Lead Research Organisation: University of East Anglia
Department Name: Environmental Sciences

Abstract

Imaging sonars are now capable of producing video-like imagery at typical frame rates of 8-30 frames per second in the underwater marine environment. Such systems work well in the turbid coastal and estuarine environments where low-light video systems provide no useful imagery. Imaging sonars therefore offer new remote sensing tools for studying previously intractable problems of importance to industry and to marine managers, including the detection of organisms that can clog power station water intakes and the study of fish behaviour around coastal structures.
However, the volume of data that such systems generate (terabytes per day) creates a real barrier to their routine deployment, because of the staff time required to analyse the images and the associated costs and delays. Recent advances in the capability of machine vision modules now make them practical components of underwater remote sensing systems, which typically face severe constraints on available power and on the bandwidth of communications links to onshore data processing facilities. This project aims to develop automated machine learning to detect and classify targets of interest in near real time, thereby dramatically reducing image analysis costs and opening up the use of such systems in autonomous remote sensing applications.

Traditional image processing techniques for detecting and classifying features in imaging sonar data rely on pixel-based supervised classification. However, these become ineffective when large quantities of data are available, increasing costs and delaying data production. Moreover, despite the large data volume, the sonar footage may contain few occurrences of relevant objects over long periods of time. While imaging sonar footage is expensive to capture and later annotate, the appearance of objects such as fish and jellyfish often resembles that acquired with conventional RGB cameras. Consequently, the research will concentrate on developing machine learning algorithms that aid the processing of sonar images (the target domain) by also learning from other imaging domains (source domains), such as conventional RGB imagery. The developed algorithms will belong to the family of 'deep learning' methods, which have recently delivered a step change in a number of computer vision applications. This will require a large dataset of annotated imagery for training and expert knowledge of image appearance, both of which are available at Cefas. The student will also contribute to the deployment of Cefas sonars as the research progresses.
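The idea of reusing a model learned on a plentiful source domain and fine-tuning it on scarce target-domain data can be illustrated with a minimal, entirely synthetic sketch. Everything here is a stand-in: the toy 2-D data, the fixed nonlinear embedding (playing the role of a backbone pretrained on RGB imagery), and the logistic-regression head are illustrative choices, not the project's actual models or data.

```python
import numpy as np

rng = np.random.default_rng(0)

def make_domain(n, shift):
    # Toy two-class data; `shift` mimics the appearance gap between
    # the source domain (RGB-like) and the target domain (sonar-like).
    y = rng.integers(0, 2, n)
    X = rng.normal(scale=0.5, size=(n, 2))
    X[:, 0] += 2.0 * y - 1.0   # class signal shared by both domains
    X[:, 1] += shift           # domain-specific offset
    return X, y

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def train_head(H, y, w=None, b=0.0, epochs=300, lr=0.5):
    # Logistic-regression "classification head" trained by gradient
    # descent on frozen features H; can resume from (w, b).
    w = np.zeros(H.shape[1]) if w is None else w.copy()
    for _ in range(epochs):
        p = sigmoid(H @ w + b)
        g = p - y
        w -= lr * (H.T @ g) / len(y)
        b -= lr * g.mean()
    return w, b

# A fixed nonlinear embedding stands in for a feature extractor
# pretrained on the plentiful source domain; only the head is tuned.
W1 = rng.normal(size=(2, 16))
features = lambda X: np.tanh(X @ W1)

Xs, ys = make_domain(2000, shift=0.0)   # abundant source-domain data
Xt, yt = make_domain(40, shift=2.0)     # scarce annotated target data
Xv, yv = make_domain(500, shift=2.0)    # held-out target data

w, b = train_head(features(Xs), ys)                  # pretrain on source
w, b = train_head(features(Xt), yt, w, b, epochs=100)  # fine-tune on target
acc = ((sigmoid(features(Xv) @ w + b) > 0.5) == yv).mean()
print(f"target-domain accuracy: {acc:.2f}")
```

The design point mirrors the proposal: annotated sonar frames are expensive, so most of the model's capacity is fitted where labels are cheap, and only a small part is adapted to the sonar domain.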

The NEXUSS CDT provides state-of-the-art, highly experiential training in the application and development of cutting-edge Smart and Autonomous Observing Systems for the environmental sciences, alongside comprehensive personal and professional development. There will be extensive opportunities for students to expand their multi-disciplinary outlook through interactions with a wide network of academic, research and industrial / government / policy partners. The student will be registered at the University of East Anglia, hosted at the School of Computing Sciences in the Graphics, Vision and Speech laboratory. The student will receive training in all areas relevant to the project, including computer vision and machine learning, as well as Matlab and Python programming. The student will spend periods of time at Cefas in Lowestoft and at the University of Southampton to become familiar with the images and the ecological aspects of the project.
