LAMBDA: Learning Algorithms for Modularity in Broad and Deep Architectures
Lead Research Organisation:
University of Manchester
Department Name: Computer Science
Abstract
Do you know how Facebook recognises faces in images? Do you know how your iPhone understands your speech? The secret behind each of these is a technology called "Deep Learning", which uses biologically-inspired algorithms called "neural networks".
Over the next decade, society will become ever more reliant on this technology. But these algorithms require an immense amount of computing power, and therefore electricity; for example, they can take many weeks to learn a given task.
The LAMBDA project explores an approach to deep learning which is not just deep, but also broad - hence "Learning Algorithms for Broad and Deep Architectures". We aim to (1) make "broad" models that are faster and easier to learn, and as a consequence (2) reduce their energy consumption. Our approach builds upon previous award-winning research by the PI in exactly this area. If successful, we will be able to reproduce the same abilities as current deep neural networks, but with significantly reduced energy consumption, while making such architectures significantly easier for scientists to train.
Planned Impact
Who might benefit from this research and how?
Social Impact
=========
Machine Learning is an "enabling" technology --- it underlies and glues together numerous services that the public use every day. ML is used in search engines, in social networking sites, in mobile phones, in banks and more. As these systems become more ubiquitous, they become more a part of our everyday existence. This project is aimed at fundamental enhancements to the underlying technology, making it easier for these services to be deployed in everyday life, most especially on small mobile devices such as phones, smart-watches or even tiny embedded devices in clothing. We will develop new types of computational machine learning methods which both learn faster and consume less electricity in the process. Further details of our strategies for this can be found in our Pathways to Impact document.
Economic Impact
==========
Large companies such as Google and Facebook are known to site their data centres in very cold regions, or near hydroelectric dams, in order to reduce their overall electricity bill. If this sort of technology is to be available to smaller companies, not just large conglomerates who can afford such measures, we need to find ways to reduce energy consumption by modifying the algorithms themselves. LAMBDA moves us in this direction. The results of LAMBDA will be new algorithms, frameworks and experiments showing how much more efficient and effective the training of learning systems can be when we consider modular (broad) architectures. Our potential impact is enhanced by partnership with the SpiNNaker project (part of the €1 billion Human Brain Project), and with the world's leading low-energy chip designer, ARM Ltd. Further details of our strategies for this can be found in our Pathways to Impact document.
Publications
Sechidis K
(2018)
Distinguishing prognostic and predictive biomarkers: an information theoretic approach.
in Bioinformatics (Oxford, England)
Tousi A
(2022)
Comparative Analysis of Machine Learning Models for Performance Prediction of the SPEC Benchmarks
in IEEE Access
Bolón-Canedo V
(2019)
Insights into distributed feature ranking
in Information Sciences
Sechidis K
(2017)
Dealing with under-reported variables: An information theoretic solution
in International Journal of Approximate Reasoning
Nogueira S.
(2018)
On the stability of feature selection algorithms
in Journal of Machine Learning Research
Wood Danny
(2023)
A Unified Theory of Diversity in Ensemble Learning
in Journal of Machine Learning Research
Sechidis K
(2018)
Simple strategies for semi-supervised feature selection.
in Machine learning
Sechidis K
(2019)
Efficient feature selection using shrinkage estimators
in Machine Learning
Nikolaou N
(2016)
Cost-sensitive boosting algorithms: Do we really need them?
in Machine Learning
Ozer E
(2020)
A hardwired machine learning processing engine fabricated with submicron metal-oxide thin-film transistors on a flexible substrate
in Nature Electronics
Reeve H
(2018)
Diversity and degrees of freedom in regression ensembles
in Neurocomputing
Christou V
(2019)
Hybrid extreme learning machine approach for heterogeneous neural networks
in Neurocomputing
Reeve H.W.J.
(2018)
The k-Nearest Neighbour UCB Algorithm for Multi-Armed Bandits with Covariates
in Proceedings of Machine Learning Research
Turner E
(2019)
Dashing Hopes? the Predictive Accuracy of Domestic Abuse Risk Assessment by Police
in The British Journal of Criminology
Marshall JAR
(2017)
Individual Confidence-Weighting and Group Decision-Making.
in Trends in ecology & evolution
Iordanou K
(2023)
Tiny Classifier Circuits: Evolving Accelerators for Tabular Data
Danny Wood
(2022)
Bias-Variance Decompositions for Margin Losses
Callaghan G
(2020)
Optimising dynamic binary modification across 64-bit Arm microarchitectures
Description | Ensembles are a widely used technique in Machine Learning, forming committees of models. Inspired by an analogy to human committees and the "wisdom of the crowds", practitioners often construct "diverse" committees, encouraging the models to hold differing opinions on a question. This anthropomorphism is appealing, but the metaphor has limitations: a precise mathematical formulation of this "diversity" phenomenon has been a challenging open question in ML for over 30 years, sometimes referred to as the "holy grail" of the field. We have presented a new mathematical theory which arguably resolves this question in a very wide range of learning scenarios. |
Exploitation Route | The work will be useful to ML theorists and practitioners, who use ensemble methods. In turn, this will benefit consumers in downstream applications. |
Sectors | Other |
URL | https://arxiv.org/abs/2301.03962 |
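A minimal sketch of the kind of diversity decomposition this line of work generalises: for a squared-loss regression ensemble that averages its members, the classical "ambiguity decomposition" (Krogh & Vedelsby, 1995) states exactly that ensemble error = average member error minus diversity, where diversity is the members' spread around the ensemble prediction. This is an illustrative special case, not the project's code; the data and member models below are synthetic assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
y = rng.normal(size=200)                            # synthetic regression targets
preds = y + rng.normal(scale=0.5, size=(5, 200))    # 5 noisy "member" predictions

ensemble = preds.mean(axis=0)                       # simple averaging ensemble

ensemble_err = np.mean((ensemble - y) ** 2)         # squared error of the ensemble
avg_member_err = np.mean((preds - y) ** 2)          # average member squared error
diversity = np.mean((preds - ensemble) ** 2)        # member disagreement term

# The ambiguity decomposition holds exactly for squared loss:
# ensemble error = average member error - diversity.
assert np.isclose(ensemble_err, avg_member_err - diversity)
```

The identity shows why disagreement among members is beneficial for averaging ensembles under squared loss; the cited 2023 paper extends such decompositions to a much broader family of losses.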
Description | EPSRC Doctoral Prize Fellowship |
Amount | £25,000 (GBP) |
Organisation | Engineering and Physical Sciences Research Council (EPSRC) |
Sector | Public |
Country | United Kingdom |
Start | 12/2016 |
End | 11/2017 |