New Frontiers in Parameterizing Away from Triviality

Lead Research Organisation: University of Warwick
Department Name: Computer Science

Abstract

Graphs are a commonly used model of real-world systems. A graph has two types of elements: vertices and edges. The vertices represent individual components of a system and the edges represent links or relationships between these components. The ubiquity of graph models across various domains has led to the design and analysis of graph algorithms growing into a rich subarea of Computer Science and Discrete Mathematics. Unfortunately, many relevant graph problems that occur in practice are NP-hard in theory (essentially implying that they are unlikely to have efficient algorithms). This motivates the search for special structure that makes the problem easy to solve on those inputs possessing such structure. In fact, there is a vast amount of research devoted to identifying specific families of graphs on which certain NP-hard graph problems can be solved efficiently. Thus, if the given model happens to fall into any of these families (called graph classes), then one can solve various problems on it efficiently. Some well-known graph classes are: complete graphs (every pair of vertices is linked by an edge), bounded-degree graphs (every vertex is linked to only a small number of other vertices), and low-diameter graphs (every vertex can be reached from any other vertex by following only a few edges).

Unfortunately, although networks observed in the real world have structural characteristics that seem exploitable, they usually do not fall cleanly into the rigidly defined graph classes obtained from theoretical models. But what if an observed network is not contained in a specific graph class, yet resembles the graphs in that class except for a few differences? Could one lift efficient algorithms that work on this graph class to efficient algorithms for graphs that are not contained in the class but are still reasonably "close" to it? For example, suppose that we want to query for a smallest set of vertices that together touch all the edges in our graph. If our graph is a complete graph, then this query can be answered efficiently. But what if our graph is merely "close" to complete, that is, except for a few pairs of vertices, every pair of vertices is linked by an edge? Could the same query still be answered efficiently? The answer turns out to be yes!
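The query described here is the classical Minimum Vertex Cover problem, and the "close to complete" case admits a simple exact algorithm: the complement of a vertex cover is an independent set, and in a near-complete graph any independent set with at least two vertices consists entirely of endpoints of the few missing edges. The following is a minimal sketch of this idea; the function name and interface are illustrative, not taken from the project itself:

```python
from itertools import combinations

def min_vertex_cover_near_complete(n, non_edges):
    # G is the complete graph on vertices 0..n-1 minus the given non_edges.
    # S is a vertex cover iff V \ S is an independent set, and any independent
    # set with >= 2 vertices uses only endpoints of non-edges, so brute-forcing
    # over subsets of those <= 2k endpoints suffices -- efficient when the
    # number of missing edges k is small (i.e., FPT in the parameter k).
    missing = {frozenset(e) for e in non_edges}
    endpoints = sorted({v for e in non_edges for v in e})
    best = 1 if n >= 1 else 0          # any single vertex is independent
    for size in range(2, len(endpoints) + 1):
        for subset in combinations(endpoints, size):
            if all(frozenset(pair) in missing for pair in combinations(subset, 2)):
                best = max(best, size)
    return n - best                    # cover size = n - max independent set
```

On a complete graph (no missing edges) this returns n - 1, and the running time depends exponentially only on the number of missing edges, not on the size of the graph, which is exactly the kind of "distance from a well-behaved class" parameterization the project studies.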

Thus, in the last decade, there has been a systematic effort to lift efficient algorithms from graph classes to graphs that are close to those classes. A key challenge in this effort is to understand what "close" means. A common way of mathematically modelling the proximity of a graph to a graph class is via various notions of graph edit distance: the number of vertices and/or edges one needs to add or remove (called edit operations) to transform the given graph into some graph in the class. These notions of graph edit distance have been instrumental in the design of efficient algorithms, especially in the active subarea of algorithmics called Parameterized Complexity.
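As a concrete instance of such a distance, the edge-edit distance from a graph to the class of complete graphs (the class used in the example above) is simply the number of vertex pairs not yet joined by an edge. A toy illustration, with an interface chosen only for this sketch:

```python
from itertools import combinations

def edits_to_complete(n, edges):
    # Edge-edit distance from G (on vertices 0..n-1) to the class of
    # complete graphs: count the vertex pairs missing an edge.
    present = {frozenset(e) for e in edges}
    return sum(1 for pair in combinations(range(n), 2)
               if frozenset(pair) not in present)
```

This count is precisely the parameter k used in the vertex cover example: the smaller it is, the "closer" the input is to the class, and the faster the lifted algorithm runs.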

This project aims to take a significant stride forward in this area by formally introducing new and more complex notions of graph edit distance and studying their impact on the efficient solvability of NP-hard problems. These notions of edit distance will depend not on the number of edit operations, as they traditionally have, but on their inherent structure. The success of parameterized complexity has shown that studying the structure of relevant objects, as opposed to only considering their size, can have a powerful impact on the efficient solvability of computational problems. In summary, this project will lay the foundations for an advanced algorithmic theory built on "structural edit distances" between graphs, develop new algorithms for fundamental problems, and uncover hitherto unknown efficiently solvable instances. The project addresses a fundamental research topic in algorithmics and will contribute towards major advances in this subarea of theoretical computer science.

Description In the last two decades, there has been an extensive research effort dedicated to the design of graph algorithms that are guaranteed to perform efficiently as long as the graph modelling the input is sufficiently "close" to a well-behaved graph class. That is, the input contains a small amount of noise, the removal of which yields a graph with structural properties that can be algorithmically exploited.

In the project, we have made a key advance in the notion of "closeness" that still allows for the design of efficient graph algorithms. Specifically, we proved a general result stating that if one can design an efficient algorithm for inputs that are close to a well-behaved graph class in the sense described above, then that algorithm can immediately be lifted to run efficiently on inputs that were considered intractable (i.e., not efficiently solvable) under the previous state of the art. This is because we were able to extend the current set of techniques to inputs that can now be considered "close" under more complex notions of distance between graphs.
Exploitation Route The project's outputs so far primarily have academic impact.

The scientific output presented at high-impact venues in the field, collaborative research efforts, and dissemination activities have contributed towards strengthening UK research in algorithms and complexity.
Sectors Other