# Kernel methods for approximation and learning theory

Lead Research Organisation:
University of Leicester

Department Name: Mathematics

### Abstract

In many different situations we are given information at some unstructured set of points and asked to deduce something concrete from the data. In approximation theory we typically try to develop an algorithm which will recover the information at all points in a certain set, given a finite set of information. We then provide estimates for the error of approximation.

We can also ask other questions. Given a certain amount of information and a given set of functions, what is the best we can expect to do in terms of error using a linear process (one in which, if we approximate a combination of functions, we obtain the same result as if we had approximated each component individually and combined the approximations in the same proportions)? Such considerations lie in the field of n-widths, and it is clearly important to know whether any particular algorithm realises the best possible error. Closely related to the notion of n-width is that of entropy, which concerns the number of sets of a certain size it takes to cover a fixed set (for instance, how many circles of radius 1 does it take to cover a circle of radius r?).

In machine learning we are often interested in the probability of obtaining a good estimate. It may be that there are only a very small number of bad functions in the set, and the n-width reflects the worst-case scenario. The average case might be much better, so an estimate of the probability of a reasonable answer is useful information.

We intend to investigate these issues when the information we have lies on some sort of smooth manifold, such as a sphere. We would also like to write a research monograph which, in a self-contained way, will allow people without much experience in this area to reach the stage where they could do research. This will require collecting theory from a wide range of subjects in mathematics.
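For readers unfamiliar with the two notions mentioned above, the standard definitions (a sketch of the usual textbook formulations, not taken from the grant text) can be stated for a subset $A$ of a normed space $X$:

```latex
% Kolmogorov n-width: the best worst-case error achievable over A
% by any linear (n-dimensional subspace) approximation scheme.
d_n(A, X) = \inf_{\dim X_n \le n} \; \sup_{f \in A} \; \inf_{g \in X_n} \| f - g \|_X .

% Covering number: the minimal number of balls of radius \varepsilon
% needed to cover A; its logarithm is the metric entropy of A.
N(\varepsilon, A) = \min \Bigl\{ m : A \subseteq \bigcup_{i=1}^{m} B(x_i, \varepsilon) \Bigr\},
\qquad
H(\varepsilon, A) = \log N(\varepsilon, A).
```

In the circle example from the abstract, $A$ is the disc of radius $r$, the covering sets are discs of radius 1, and an area comparison shows the covering number grows on the order of $r^2$.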

### Planned Impact

This research will:

1. Provide theoretical underpinnings for the development of optimal algorithms in approximation. Such theory is important because it allows a practitioner to know whether they could, in principle, do better than their current algorithm.
2. Provide theoretical justification for the use of certain estimates, in terms of the probability of achieving a given accuracy. In many situations this expected error is a more useful measure than the worst-case error above.

Both of these points matter whenever it is important to gauge the error committed in a given procedure. In machine learning it is crucial to understand how well one can expect a process to perform, and this research will help to reassure users that their particular system is reliable.

Potential beneficiaries of this work in the industrial sector include financial institutions, where high-dimensional approximation and integration are becoming more important as efforts are made to model higher-dimensional, more complex data sets. Similarly, bioinformatics provides a rich supply of problems (for instance, data mining of genetic material) which require the sort of technology we are developing in this grant.

We will also start work on a textbook which we hope will become a standard text in this area, and which we intend to make accessible to practitioners who wish to understand the theoretical underpinnings of their algorithms. We will disseminate information via a workshop which is part of Algorithms for Approximation VI, in Autumn 2009. We have invited the Smith Institute to help us develop links with industry as part of this conference, and we have communicated with EPSRC (Anita Howman) regarding help with drafting the Economic Impact sections of grant applications we hope will arise from it.
We intend to hold a second workshop towards the end of the grant period, and will invite colleagues from industry (hopefully arising from the work done at Algorithms for Approximation VI) to come and share in the results of our activities.

## People

Jeremy Levesley (Principal Investigator)

### Publications

- Alexander Kushpel (Co-Author), 2012, *Approximation on the complex sphere*
- Jeremy Levesley (Co-Author), 2012, *A Multiplier Version of the Bernstein Inequality on the*
- Jeremy Levesley (Co-Author), 2012, *Entropy and n-widths of Sobolev classes on*

Description | We have developed an analysis of approximation methods, but have not yet been able to apply it in learning theory. |

Exploitation Route | The work can be used by researchers in approximation theory. |

Sectors | Aerospace, Defence and Marine; Electronics; Energy; Environment; Financial Services, and Management Consultancy |

Description | Yes, citations of outputs by another author. |

First Year Of Impact | 2014 |

Sector | Education |