Sparse linear models: Their existence and stability

Lead Research Organisation: University of Sheffield
Department Name: Computer Science

Abstract

Large quantities of data are gathered in many domains, including the medical, financial, retail, industrial and social media domains. These data must be analysed so that their properties can be extracted and the underlying system understood. It is necessary to distinguish between the quantity of data, which may be large, and the information contained in the data, which may allow the data to be represented, with acceptable accuracy, by a simple model. This simple model captures the fundamental properties of the system, such that it can be used to determine the response of the system to new (unseen) data. An example of a simple model is a low degree polynomial, but this proposal considers another example of a simple model, a sparse model. A sparse model of a system is a model in which only the dominant input variables (predictors) that determine the output, rather than all the input variables, are identified. Genomics provides an example of a sparse model because there are about 30,000 genes in the human body, but not all genes are associated directly with cancer. It is therefore desirable to identify the genes that are most directly associated with cancer, so that treatment is focused on the dominant contributory factors, rather than on factors whose role in the cause of cancer is minor.

Sparsity of the solution x of the linear algebraic equation Ax=b is imposed by regularisation in the 1-norm (the lasso). This differs from regularisation in the 2-norm (Tikhonov regularisation), which imposes stability on x. The lasso is not as well understood as Tikhonov regularisation because there is no 1-norm matrix decomposition, but fundamental properties of a regularised solution of Ax=b are independent of the norm in which the regularisation is imposed. For example, a regularised solution in either norm must be stable and the error between it and the exact solution must be small. This proposal considers these properties of a regularised solution when regularisation by the lasso is used.
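
To make the distinction concrete, the following short Python sketch (illustrative only; the matrix A, the sparse vector x and the regularisation parameter alpha are assumed values, not data from the proposal) compares the lasso and Tikhonov (ridge) solutions of Ax=b computed with scikit-learn. The 1-norm penalty drives many components of x to zero, whereas the 2-norm penalty stabilises x but leaves it dense.

import numpy as np
from sklearn.linear_model import Lasso, Ridge

rng = np.random.default_rng(0)
m, n = 100, 50
A = rng.standard_normal((m, n))

# A sparse ground truth: only 5 of the 50 components of x are non-zero.
x_true = np.zeros(n)
x_true[:5] = rng.standard_normal(5)
b = A @ x_true + 0.01 * rng.standard_normal(m)

# Regularisation in the 1-norm (the lasso): many components of x are set to zero.
x_lasso = Lasso(alpha=0.05, fit_intercept=False).fit(A, b).coef_

# Regularisation in the 2-norm (Tikhonov/ridge): x is stabilised but remains dense.
x_ridge = Ridge(alpha=0.05, fit_intercept=False).fit(A, b).coef_

print("non-zero components, lasso:", np.count_nonzero(x_lasso))
print("non-zero components, ridge:", np.count_nonzero(x_ridge))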

Computations on sparse models are, in general, simpler and faster than computations on exact dense models, which is advantageous, but important theoretical issues must be addressed before a sparse model can be used with confidence in its correctness, and these issues are considered in this proposal. A sparse model is an approximation of an exact dense model, and there is therefore an error associated with a sparse model. A good sparse model is one in which this error is small, and this small error is accepted because it is balanced by the greater physical insight that a sparse model allows. Furthermore, a sparse model must be computationally reliable, such that results derived from it are numerically stable, and thus a good sparse model must have a small error and be stable. It cannot, however, be assumed that all inputs yield an approximate input-output relationship that is sparse, stable and has a small error, and it is therefore necessary to establish the class of inputs for which these properties are, and are not, satisfied. This proposal addresses these issues and includes both theoretical results and computational experiments.
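
As an illustration of the error associated with a sparse model, the following sketch (an assumed set-up, not taken from the proposal) computes the dense least-squares solution of Ax=b, replaces it by a sparse approximation that keeps only its k largest components, and measures the resulting errors. For a general right-hand side b these errors need not be small, which reflects the point above that not all inputs admit a sparse, stable and accurate approximation.

import numpy as np

rng = np.random.default_rng(1)
m, n = 80, 40
A = rng.standard_normal((m, n))
b = rng.standard_normal(m)

# Dense least-squares solution of Ax = b.
x_dense, *_ = np.linalg.lstsq(A, b, rcond=None)

# Sparse approximation: keep only the k components of largest magnitude.
k = 8
x_sparse = np.zeros(n)
largest = np.argsort(np.abs(x_dense))[-k:]
x_sparse[largest] = x_dense[largest]

# Relative errors in the solution and in the residual of Ax = b.
print("relative solution error:", np.linalg.norm(x_dense - x_sparse) / np.linalg.norm(x_dense))
print("relative residual error:", np.linalg.norm(A @ x_sparse - b) / np.linalg.norm(b))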

The benefits of the proposed research extend to the many areas in which a sparse model is used to model an input-output relationship. These applications include the medical, financial, retail, industrial and social media domains (as stated above). Apart from the computational advantages of a sparse model (stated above), the desirability of a sparse model follows from its simplicity, which makes it easier to obtain a physical understanding of the input-output relationship of the system.

Publications

Lakshmi M (2025) Numerical properties of solutions of LASSO regression in Applied Numerical Mathematics


 
Description One of the main objectives of the grant was the derivation of a mathematical criterion to determine if the solution of a linear system, that is, a system represented by the equation Ax=b, has a good sparse approximation, that is, an approximation in which many of the components of x are zero. The main difficulty is that the answer to this question requires the solution of a non-linear equation, but a simple assumption enables insight into the answer to be obtained. Further work then showed that the answer to the original question is no: not all solutions of the equation Ax=b have a good sparse approximation. This result has been extended and quantified.

Another issue of concern is the stability of the computed solution. A refined measure of stability has been developed, and it has been shown theoretically and verified computationally that the stability of the solution of some, but not all, problems cannot be estimated if the data are corrupted by noise. The property of the given data that allows a distinction to be made between these two situations has been established. The practical implication of this result is severe because it suggests that the stability of the solution cannot necessarily be estimated when the data are corrupted by noise.
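
The kind of computational experiment referred to above can be sketched as follows (illustrative assumptions only: the problem data, the noise level and the regularisation parameter alpha are not taken from the research). The lasso solution of Ax=b is recomputed after the data b are perturbed by noise, and the relative change in the solution gives a crude indication of its stability.

import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(2)
m, n = 100, 50
A = rng.standard_normal((m, n))

x_true = np.zeros(n)
x_true[:5] = 1.0
b = A @ x_true

def lasso_solution(rhs, alpha=0.05):
    # Solve the lasso problem min ||A x - rhs||_2^2 / (2 m) + alpha ||x||_1.
    return Lasso(alpha=alpha, fit_intercept=False, max_iter=10000).fit(A, rhs).coef_

# Solutions for the exact data and for data corrupted by noise.
x_exact = lasso_solution(b)
x_noisy = lasso_solution(b + 1e-3 * rng.standard_normal(m))

print("relative change in the solution:",
      np.linalg.norm(x_noisy - x_exact) / np.linalg.norm(x_exact))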
Exploitation Route The results of this work have a bearing on current work in machine learning because it was shown in the research that not all systems have a sparse representation. This is a new result because it has been assumed hitherto that all systems have a sparse representation.
Sectors Digital/Communication/Information Technologies (including Software)

Healthcare

Manufacturing, including Industrial Biotechnology

 
Description Mathematical issues in neural networks 
Organisation Agency for Science, Technology and Research (A*STAR)
Department Bioinformatics Institute (BII)
Country Singapore 
Sector Academic/University 
PI Contribution I visited the Bioinformatics Institute (BII) in Singapore and gave two seminars on the research results from the grant. My host in Singapore visited me in Sheffield in June 2023, and he would like to invite me to his laboratory again for further collaborative work.
Collaborator Contribution This collaboration allowed my collaborator to see my work. He was very interested in it and wishes to pursue it further, with particular reference to its application to unresolved issues in deep neural networks.
Impact The main output has been a stronger working relationship with my colleague in Singapore. In particular, he has submitted a research proposal and intends to invite me to Singapore if it is awarded. We will also soon discuss a joint research proposal.
Start Year 2023
 
Description Visit to research institution in Singapore 
Form Of Engagement Activity A talk or presentation
Part Of Official Scheme? No
Geographic Reach International
Primary Audience Postgraduate students
Results and Impact I gave two seminars at the Bioinformatics Institute (BII) in Singapore. This has led to a discussion on joint research, and a project has been identified. Preparatory work on this proposal is now underway.
Year(s) Of Engagement Activity 2023