Advancing Machine Learning Methodology for New Classes of Prediction Problems

Lead Research Organisation: University of Glasgow
Department Name: School of Computing Science

Abstract

The last few decades have seen enormous progress in the development of machine learning and pattern recognition algorithms for data classification. This has resulted in considerable advances in a number of applied fields, with some of these algorithms forming the core of ubiquitous deployed technologies. However, there remain many important applications, for example in biomedicine, that are highly non-standard prediction problems, and there is an urgent need to develop appropriate and effective classification techniques for them. For example, at NIPS 2006 Girolami and Zhong reported state-of-the-art prediction accuracy for a protein fold classification problem that stands at a modest 62%. While this may partly be due to overlap between classes of fold, it is also clear that some of the fundamental assumptions made by most classification algorithms are not valid in this application. In particular, most algorithms assume a structure in the data that is not met in reality: that training and test data are drawn independently and identically distributed (i.i.d.) from the same distribution, that the labels are unbiased (i.e. the relative proportions of positive and negative examples are approximately balanced), and that noise, both on the input data and on the labels, can be largely ignored. Recent advances in machine learning, such as kernel-based methods and the availability of efficient computational methods for Bayesian inference, hold great promise that classification problems in such non-standard situations can be addressed in a principled way.

The development of effective classification tools is all the more urgent given the daunting pace at which technological advances are producing novel data sets. This is particularly true in the life sciences, where advances in molecular biology and proteomics are producing vast amounts of data and necessitating methods for high-throughput automated analysis. Improving classification accuracy may remove what is currently the bottleneck in the analysis of this type of data, with real impact on biomedical research and on the quality of life of millions of people.

At present, most classifiers used in life sciences applications, especially those deployed as bioinformatics web services, adopt and adapt traditional machine learning approaches, quite often in an ad hoc manner, e.g. employing Artificial Neural Networks and Support Vector Machines. In reality, however, many of these applications are highly non-standard classification problems, in the sense that a number of the fundamental underlying assumptions of pattern classification and decision theory (e.g. identical sampling distributions for 'training' and 'test' data, perfect noiseless labeling in the discrete case, object representations that can be embedded in a common feature space) are violated, and this has a direct and potentially highly negative impact on achievable performance. To make much-needed and significant progress on a wide range of important applications, there is an urgent requirement to systematically address the associated methodological issues within a common framework, and this is what motivates the current proposal.
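To make one of the violated assumptions above concrete, the sketch below illustrates what "addressing label noise in a principled way" can look like in the simplest binary setting. It is not taken from the proposal or its publications: it is a minimal illustration, assuming an ordinary logistic-regression classifier whose likelihood includes a label-flip noise model, with made-up flip rates and synthetic data. The point is that the model itself accounts for the possibility that an observed training label was flipped, rather than forcing the classifier to fit mislabelled points exactly.

```python
# Purely illustrative sketch (not from the proposal): logistic regression with
# an explicit label-flip noise model. All names, flip rates and the synthetic
# data are assumptions made for this example.
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def fit_noise_aware(X, y, rho10=0.2, rho01=0.05, n_steps=500, lr=0.5):
    """Gradient ascent on the mean log-likelihood under a label-flip model:
    P(y_obs = 1 | x) = (1 - rho10) * p_clean + rho01 * (1 - p_clean),
    where rho10 = P(true 1 observed as 0) and rho01 = P(true 0 observed as 1)."""
    w = np.zeros(X.shape[1])
    c = 1.0 - rho10 - rho01
    for _ in range(n_steps):
        p_clean = sigmoid(X @ w)
        p_obs = (1.0 - rho10) * p_clean + rho01 * (1.0 - p_clean)
        resid = y / p_obs - (1.0 - y) / (1.0 - p_obs)
        grad = X.T @ (resid * c * p_clean * (1.0 - p_clean)) / len(y)
        w += lr * grad
    return w

# Toy data: two overlapping Gaussian classes, with the training labels
# corrupted at the same flip rates the model assumes.
n = 200
X = np.vstack([rng.normal(-1.0, 1.0, (n, 2)), rng.normal(1.0, 1.0, (n, 2))])
X = np.hstack([X, np.ones((2 * n, 1))])          # bias column
y_true = np.concatenate([np.zeros(n), np.ones(n)])
flip_prob = np.where(y_true == 1, 0.2, 0.05)
y_obs = np.where(rng.random(2 * n) < flip_prob, 1.0 - y_true, y_true)

w = fit_noise_aware(X, y_obs)
accuracy = np.mean((sigmoid(X @ w) > 0.5) == y_true)
print(f"training accuracy measured against the clean labels: {accuracy:.2f}")
```

The same pattern, stating the violated assumption explicitly in the probabilistic model instead of ignoring it, is what the abstract has in mind for the other non-standard settings (differing training and test distributions, class imbalance, and objects that do not share a common feature space).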

Publications

Betancourt M (2011) The Geometry of Hamiltonian Monte Carlo in arXiv e-prints

Damoulas T (2009) Combining feature spaces for classification in Pattern Recognition

Damoulas T (2009) Pattern recognition with a Bayesian kernel combination machine in Pattern Recognition Letters

Girolami M (2011) Riemann Manifold Langevin and Hamiltonian Monte Carlo Methods in Journal of the Royal Statistical Society Series B: Statistical Methodology

Girolami M (2011) A First Course in Machine Learning

 
Description As set out in the abstract above.
Exploitation Route A number of novel machine learning methodologies have emerged from this work, as described in the publications listed above.
Sectors Healthcare