Evaluation and parameterisation of individual-based models of animal populations

Lead Research Organisation: University of Bristol
Department Name: Mathematics

Abstract

Ecosystems are populated by autonomous, adaptive individuals, each finding its own way of achieving its goals. It is a widely shared hope that the general principles governing such complex systems will eventually be understood through the analysis of computer simulations known collectively as individual-based models (IBMs). IBMs are dynamical systems containing many autonomous interacting agents. They are used where, broadly, the factors influencing the behaviour of individual agents are known, but interest centres on what happens at the population level. Will the population increase or decrease? How fast will the response be? Where practical management of ecosystems is required, many consider that it can be carried out realistically only with IBMs. Examples include conservation management of nature reserves and shell fisheries, assessment of the environmental impacts of building proposals including wind farms and highways, management of fish stocks, and assessment of the effects on non-target organisms of new chemicals for the control of agricultural pests. Articles in scientific journals have suggested that IBMs are the only realistic way forward in fields as diverse as economic analysis, where it has been argued that the recent global 'credit crunch' might have been avoided had such models been used. IBMs are thus the only practicable method of modelling many complex systems in which prediction is of vital importance.
Despite the widely appreciated importance of IBMs, the evaluation of these very complex systems still leaves much to be desired. The purpose of a model is to explain the world that we see around us; from a statistical point of view, we wish to 'fit' the model to data. How can we do this? Recent advances in statistical theory, known as Approximate Bayesian Computation (ABC), suggest how it might be done. Implementing ABC requires the development of practical methods that will allow users to fit their IBMs to real data efficiently. This Bayesian approach should allow the calculation of distributions of possible parameter values in IBMs, given observations, and the evaluation of whether one model is better than another. In this project we will devise practical methods that allow all makers of IBMs to validate their models properly by reference to relevant data. Provision of such methods is essential if we are to have robust and reliable bases for making crucial decisions about environmental impacts, nature conservation, and the licensing of new chemicals for the control of agricultural pests.
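To illustrate the idea behind ABC, here is a minimal sketch of its simplest (rejection) form in Python. The sketch is not part of the proposal: the toy birth-death `simulate` function stands in for a real IBM, and the `summarise` function and all parameter values are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def rejection_abc(simulate, summarise, observed, prior_sample,
                  n_draws=20_000, keep_fraction=0.01):
    """Rejection ABC: draw parameters from the prior, simulate the model,
    and keep the draws whose summary statistics fall closest to the data."""
    s_obs = summarise(observed)
    thetas = np.array([prior_sample() for _ in range(n_draws)])
    dists = np.array([np.linalg.norm(summarise(simulate(t)) - s_obs)
                      for t in thetas])
    cutoff = np.quantile(dists, keep_fraction)  # adaptive tolerance
    return thetas[dists <= cutoff]              # approximate posterior sample

def simulate(birth_rate, n_steps=20, death_rate=0.02):
    """Toy stand-in for an IBM: a stochastic birth-death process."""
    pop, trajectory = 10.0, []
    for _ in range(n_steps):
        births = rng.poisson(birth_rate * pop * 0.05)
        deaths = rng.poisson(death_rate * pop)
        pop = max(pop + births - deaths, 0.0)
        trajectory.append(pop)
    return np.array(trajectory)

def summarise(trajectory):
    # Two crude summary statistics: final and mean population size.
    return np.array([trajectory[-1], trajectory.mean()])

observed = simulate(0.5)  # pretend field data generated at birth_rate = 0.5
posterior = rejection_abc(simulate, summarise, observed,
                          prior_sample=lambda: rng.uniform(0.0, 1.0))
print(f"posterior mean {posterior.mean():.2f}, sd {posterior.std():.2f}")
```

The key point is that the model is never evaluated through a likelihood; closeness of simulated to observed summary statistics is used instead, which is what makes the approach applicable to simulation-only models such as IBMs.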

Planned Impact

All those making individual-based models (IBMs) and using them for planning purposes will benefit. This includes planners and managers who use IBMs for conservation management of nature reserves and shell fisheries, assessment of the environmental impacts of building proposals including wind farms and highways, management of fish stocks, and assessment of the effects on non-target organisms of new chemicals for the control of agricultural pests. Examples of organisations that use IBMs are the UK Chemicals Regulation Directorate, Syngenta, Bayer, Dow Agrochemicals, and the Environment Agency. Other areas may eventually benefit: over the past 30 years ecology has produced over a thousand IBMs, and individual-based modelling is the only modelling method that can be used where explicit spatial landscapes affect population dynamics. As suggested in recent commentaries in Nature, there is a view that current economic modelling methods are unsuitable and that individual-based modelling is the way forward.
All IBM modellers need to validate their models to show that they are reliable. The principal obstacle to validation is the lack of a sound statistical methodology. Currently modellers rely on Pattern-Oriented Modelling, but this has many drawbacks (no standard errors, ambiguity about which model is best). Modellers would prefer methods of evaluation like those used elsewhere in statistics, where statistical tests are available to guide the choice between models, statistics such as R² or deviance can be used to assess overall goodness of fit, and parameters can be calibrated from data, with confidence intervals reported, using least-squares or maximum-likelihood procedures.
The proposal has been designed to meet these needs. Our research programme addresses two key areas of concern in IBMs: parameterisation and model validation. By providing methods that solve these problems, ecology as a whole will benefit through more accurate modelling of ecosystems, with potential for improved forecasting, management and environmental risk assessment. In the longer term the methods developed may also allow more accurate prediction of the future performance of the economy and of geographical systems. We will create and maintain a website detailing algorithms, scripts, programs and manuals to allow widespread implementation of our methodology by IBM modellers.
We hope the benefits described here will be fully realised within four years. Our research will contribute by implementing and demonstrating practical methods for parameterising and validating IBMs. This will facilitate the production of credible and trustworthy models for use throughout environmental planning. Environmental planning contributes to the nation's health by regulating the chemicals used in agriculture, and to the nation's culture and quality of life through improved conservation and management of nature reserves and sites of special scientific interest.

Publications

Pagel M (2019) Dominant words rise to the top by positive frequency-dependent selection. in Proceedings of the National Academy of Sciences of the United States of America

 
Description

This is a project studentship associated with the original award (the lead institution is Reading).
The main findings of this studentship to date are:
1) We have developed a novel algorithm for improving the accuracy of approximate Bayesian computation (ABC). The algorithm is based on the sequential ABC algorithm of Del Moral et al. (Statistics and Computing, 2012), but we also incorporate improvements due to Fearnhead and Prangle (JRSS B, 2012), which we update in a sequential way. We demonstrate the improvements arising from this new algorithm through extensive simulation tests under a number of different scenarios. (A generic sketch of this family of algorithms is given in the first code example below.)
2) We have also developed an iterative algorithm based on synthetic likelihood (Wood, Nature, 2010), and we demonstrate that it works on toy examples. (See the second code example below.)
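Since the manuscript on the sequential algorithm is still in preparation, the algorithm itself is not reproduced here. What follows is a generic sketch of the sequential (SMC) form of ABC in Python, in the spirit of Del Moral et al. (2012), for a single scalar parameter. The semi-automatic, regression-trained summary statistics of Fearnhead and Prangle, which the project updates sequentially, are omitted: in this sketch `summarise` is fixed, where the project's algorithm would refit it at each round. All function names are illustrative.

```python
import numpy as np

def abc_smc(simulate, summarise, observed, prior_sample, prior_pdf,
            n_particles=500, n_rounds=5, keep=0.5, rng=None):
    """Sequential ABC sketch for one scalar parameter: push a particle
    population through a shrinking sequence of tolerances, resampling,
    perturbing and reweighting at each round."""
    rng = rng or np.random.default_rng()
    s_obs = summarise(observed)

    # Round 0: plain rejection sampling from the prior.
    particles = np.array([prior_sample() for _ in range(n_particles)])
    dists = np.array([np.linalg.norm(summarise(simulate(t)) - s_obs)
                      for t in particles])
    weights = np.full(n_particles, 1.0 / n_particles)

    for _ in range(n_rounds):
        eps = np.quantile(dists, keep)  # shrink the tolerance adaptively
        sigma = 2.0 * np.sqrt(np.cov(particles, aweights=weights))
        new_p, new_d, new_w = [], [], []
        # NB: a real implementation would cap attempts per round.
        while len(new_p) < n_particles:
            idx = rng.choice(n_particles, p=weights)           # resample
            theta = particles[idx] + rng.normal(0.0, sigma)    # perturb
            if prior_pdf(theta) == 0.0:
                continue
            d = np.linalg.norm(summarise(simulate(theta)) - s_obs)
            if d > eps:
                continue
            # Importance weight: prior density over the kernel mixture.
            kernel = np.exp(-0.5 * ((theta - particles) / sigma) ** 2)
            new_w.append(prior_pdf(theta) / np.sum(weights * kernel))
            new_p.append(theta)
            new_d.append(d)
        particles, dists = np.array(new_p), np.array(new_d)
        weights = np.array(new_w)
        weights /= weights.sum()
    return particles, weights
```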
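For the second finding, here is a minimal sketch of the synthetic likelihood of Wood (2010): the summary statistics are treated as approximately Gaussian, with their mean and covariance estimated by repeated simulation at each candidate parameter value. The iterative scheme developed in the studentship is not published, so only the core likelihood evaluation is shown; the function names and the regularisation constant are assumptions.

```python
import numpy as np

def synthetic_loglik(theta, simulate, summarise, s_obs, n_sims=200):
    """Synthetic log-likelihood (Wood 2010), up to an additive constant:
    fit a Gaussian to the summary statistics of repeated simulations at
    theta, then evaluate the observed summaries under that Gaussian."""
    S = np.array([summarise(simulate(theta)) for _ in range(n_sims)])
    mu = S.mean(axis=0)
    cov = np.cov(S, rowvar=False) + 1e-8 * np.eye(S.shape[1])  # regularise
    diff = s_obs - mu
    _, logdet = np.linalg.slogdet(cov)
    return -0.5 * (diff @ np.linalg.solve(cov, diff) + logdet)
```

The returned value can be maximised over theta or embedded in a standard MCMC sampler, which is how synthetic likelihood is typically used in practice.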
Exploitation Route

Both algorithms that we have developed can be taken forward. In particular, the synthetic likelihood algorithm is still at the toy stage and could well form the basis for a future project. We are aiming to write a manuscript on the new sequential ABC algorithm.
Sectors

Agriculture, Food and Drink; Environment