Algebraic complexity theory via the algebraic geometry and representation theory of generalised continued fractions

Lead Research Organisation: University of Liverpool
Department Name: Computer Science

Abstract

High-end computers can solve extremely difficult tasks, but even the best computers have limits in what they can do. These limits are studied in the field of computational complexity theory. The most famous question in computational complexity theory is the P vs NP conjecture, for which a prize of $1,000,000 will be awarded to the person or group who resolves it. The question can be phrased in terms of Boolean circuits and algorithms as follows. A Boolean circuit is a description of a finite sequence of logical operations that is performed on a list of input values, much like a computer program, but with no loops. Given a Boolean circuit, we want to know whether there exists an input that makes it output TRUE. The P vs NP conjecture states that there is no efficient algorithm for this task. More precisely, the computation time required to answer this question for a circuit that is described using n symbols grows superpolynomially in n, that is, faster than any polynomial in n, regardless of the algorithm that is used to answer the question. It is generally assumed among researchers, though, that we are very far from resolving the P vs NP conjecture.

Instead of studying Boolean circuits, one can observe that many algorithmic breakthroughs are based on algebraic insights, for example the Euclidean algorithm, the effective Chinese Remainder Theorem, Gaussian elimination, Newton iteration, fast matrix multiplication, fast polynomial multiplication, the fast Fourier transform, elliptic curve cryptography, Reed-Solomon codes, and solving systems of polynomial equations via Gröbner bases. One more important example is Horner's method, a way of evaluating polynomials in a single variable efficiently. In 1966 Victor Pan proved that Horner's method is optimal if we count the number of arithmetic operations instead of Boolean operations.

Using this way of measuring the complexity of a problem, in 1979 Leslie Valiant FRS defined algebraic analogues of the P vs NP problem. In this proposal we focus on Valiant's VF vs VNP conjecture. This conjecture says that the size of arithmetic formulas required to count the number of Hamiltonian cycles in directed graphs, a problem from combinatorics, grows superpolynomially. This is closely related to the P vs NP conjecture: if P equals NP, then the so-called polynomial hierarchy collapses, which most complexity theorists think does not happen. If one proves that this collapse does not happen, then this also implies that VF is not equal to VNP, by a result of Karp and Lipton from 1980, so in a sense the VF vs VNP conjecture has to be resolved in order to make progress.
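Horner's method, mentioned above, is easy to state concretely. The following is a minimal sketch in Python (the function name and example are ours, for illustration only): a polynomial of degree n is evaluated with exactly n multiplications and n additions, the arithmetic operation count that Pan proved optimal in 1966.

```python
def horner(coeffs, x):
    """Evaluate a polynomial at x via Horner's method.

    coeffs lists the coefficients from the highest degree down, so
    coeffs = [a_n, ..., a_1, a_0] represents a_n*x**n + ... + a_1*x + a_0.
    """
    result = 0
    for c in coeffs:
        result = result * x + c  # one multiplication and one addition per step
    return result

# Example: 2x^3 - 6x^2 + 2x - 1 evaluated at x = 3 gives 5.
assert horner([2, -6, 2, -1], 3) == 5
```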

We hope to make progress on the VF vs VNP conjecture, because a recent paper (Bringmann, Ikenmeyer, Zuiddam 2017) shows a deep connection between arithmetic formula size and a multivariate (i.e., several variables) version of Horner's method. This means that separating VF and VNP boils down to understanding the multivariate Horner's method at a deep level. It has connections to continued fractions and the continuant polynomial, which was recently studied by Charles Conley and Valentin Ovsienko (2018). Other than their work, only preliminary algebraic, geometric, and algorithmic results are known so far (obtained very recently by Shpilka and Medini), and we aim to provide the missing foundations. We aim to connect our new insights in several ways with Geometric Complexity Theory, a mathematical approach to complexity lower bounds that uses algebraic geometry and representation theory and that was founded in 2001 by Ketan Mulmuley and Milind Sohoni. This will lead to new ways of proving lower bounds on arithmetic formula size, which can then ultimately help to separate VF and VNP.
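The classical continuant polynomial mentioned above can also be sketched briefly. The following Python illustration is ours and shows only the classical single-sequence object, not the multivariate variants studied by Conley and Ovsienko: the continuant K satisfies K() = 1, K(x1) = x1, and K(x1, ..., xn) = xn*K(x1, ..., x_{n-1}) + K(x1, ..., x_{n-2}), and it gives the numerator and denominator of a finite continued fraction.

```python
from fractions import Fraction

def continuant(xs):
    """Classical continuant K(x1, ..., xn), evaluated at the numbers in xs.

    Uses the recurrence K(x1, ..., xn) = xn*K(..., x_{n-1}) + K(..., x_{n-2}),
    with K() = 1 for the empty sequence.
    """
    k_prev, k_cur = 0, 1  # conventions: K(one-too-few) = 0 and K() = 1
    for x in xs:
        k_prev, k_cur = k_cur, x * k_cur + k_prev
    return k_cur

# The finite continued fraction x1 + 1/(x2 + 1/(... + 1/xn)) equals
# K(x1, ..., xn) / K(x2, ..., xn).  Example: 1 + 1/(2 + 1/3) = 10/7.
xs = [1, 2, 3]
assert Fraction(continuant(xs), continuant(xs[1:])) == Fraction(10, 7)
```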
 
Description In this reporting period we made great progress on task T2.3, "Power and Limitations of Representation Theory".
A central open question in the geometric complexity theory approach is whether the representation-theoretic multiplicities behave as nicely as the well-known Littlewood-Richardson coefficients. These coefficients count the number of Littlewood-Richardson tableaux of certain types. To determine the multiplicities, one would usually want a non-negative combinatorial description, as exists for the Littlewood-Richardson coefficients. We formalised this as membership of the coefficient function in the complexity class #P. In a series of papers we were able to show (1) general methods for proving non-membership in #P, and (2) an explicit example from representation theory of a function that is not in #P unless the polynomial hierarchy collapses, namely the squares of the characters of the symmetric group.
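To make the notion of "counting objects of certain types" concrete, here is a small brute-force sketch in Python (our own illustration, not taken from the papers) that computes a Littlewood-Richardson coefficient by directly enumerating the Littlewood-Richardson tableaux it counts; membership in #P formalises exactly this kind of count-the-witnesses description.

```python
from itertools import product

def lr_coefficient(lam, mu, nu):
    """Brute-force Littlewood-Richardson coefficient c^lam_{mu, nu}.

    Counts semistandard fillings of the skew shape lam/mu with content nu
    whose reverse reading word is a lattice word.  Exponential time; for
    tiny partitions only.
    """
    cells = [(r, c) for r, row_len in enumerate(lam)
             for c in range(mu[r] if r < len(mu) else 0, row_len)]
    count = 0
    for values in product(range(1, len(nu) + 1), repeat=len(cells)):
        # the filling must have content nu, i.e. nu[i] entries equal to i+1
        if any(values.count(i + 1) != nu[i] for i in range(len(nu))):
            continue
        t = dict(zip(cells, values))
        # rows weakly increase left to right, columns strictly increase downwards
        if any((r, c - 1) in t and t[(r, c - 1)] > v for (r, c), v in t.items()):
            continue
        if any((r - 1, c) in t and t[(r - 1, c)] >= v for (r, c), v in t.items()):
            continue
        # the reverse reading word (rows right to left, top to bottom) must be
        # a lattice word: every prefix has at least as many i's as (i+1)'s
        word = [t[(r, c)] for r in range(len(lam))
                for c in sorted((cc for rr, cc in cells if rr == r), reverse=True)]
        seen = [0] * (len(nu) + 1)
        ok = True
        for v in word:
            seen[v] += 1
            if v > 1 and seen[v] > seen[v - 1]:
                ok = False
                break
        count += ok
    return count

# s_(1) * s_(1,1) = s_(2,1) + s_(1,1,1), so c^{(2,1)}_{(1),(1,1)} = 1:
assert lr_coefficient((2, 1), (1,), (1, 1)) == 1
assert lr_coefficient((2, 1), (1,), (2,)) == 1
```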

This way of formalising the notion of a "non-negative combinatorial interpretation" as membership in #P and using tools from computational complexity to study it seems to be a new and very fruitful research direction. The general idea of formalising questions via P, FP, and #P is versatile and can be applied to other notions in algebraic combinatorics as well. There is significant potential for future research collaborations between researchers in computational complexity theory and in algebraic combinatorics.

We also made progress on WP2 "Lower Bounds via Geometry and Representation Theory" by refining lower bound techniques to take into account strength decompositions of polynomials. This lower bound method is based on geometric properties of the hypersurface defined by the polynomial under consideration.
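For context, the following LaTeX snippet records the commonly used definition of strength together with a tiny example (our illustration; the precise variant used in the refined technique may differ):

```latex
% The strength of a homogeneous polynomial f is the least r such that
\[
  f \;=\; \sum_{i=1}^{r} g_i h_i ,
\]
% where each g_i and h_i is homogeneous of degree at least 1 (hence at most
% \deg f - 1).  Example: f = x_1 x_2 + x_3 x_4 has strength 2.  It is a sum
% of two products of linear forms, but not a single product g h, since that
% would make the hypersurface f = 0 a reducible union of two hyperplanes,
% whereas this quadric is irreducible.
```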

The grant is still ongoing, but all researchers have moved from the University of Liverpool to the University of Warwick; see EP/W014882/2.
Exploitation Route The grant is still ongoing; see EP/W014882/2. We will have a clear picture at the end of EP/W014882/2.
Sectors Digital/Communication/Information Technologies (including Software)