Algebraic complexity theory via the algebraic geometry and representation theory of generalised continued fractions

Lead Research Organisation: University of Liverpool
Department Name: Computer Science

Abstract

High-end computers can solve extremely difficult tasks, but even the best computers have limits in what they can do. These limits are studied in the field of computational complexity theory. The most famous question in computational complexity theory is the P vs NP conjecture, for which a prize of $1,000,000 will be awarded to the person or group who resolves it. The question can be phrased in terms of Boolean circuits and algorithms as follows. A Boolean circuit is a description of a finite sequence of logical operations that is performed on a list of input values, much like a computer program, but with no loops. Given a Boolean circuit, we want to know whether there exists an input that makes it output TRUE. The P vs NP conjecture states that there is no efficient algorithm for this task. More precisely, the computation time required to answer this question for a circuit that is described using n symbols grows faster than any polynomial in n: superpolynomially fast. This is independent of the algorithm that is used to answer the question. However, it is generally assumed among researchers that we are very far from resolving the P vs NP conjecture.

Instead of studying Boolean circuits, many algorithmic breakthroughs are based on algebraic insights, for example the Euclidean algorithm, the effective Chinese Remainder Theorem, Gaussian elimination, Newton iteration, fast matrix multiplication, fast polynomial multiplication, the fast Fourier transform, elliptic curve cryptography, Reed-Solomon codes, and solving systems of polynomial equations via Gröbner bases. One more important example is Horner's method, a way of evaluating polynomials in a single variable efficiently. In 1966 Victor Pan proved that Horner's method is optimal if we count the number of arithmetic operations instead of Boolean operations. Using this way of measuring the complexity of a problem, in 1979 Leslie Valiant FRS defined algebraic analogues of the P vs NP problem.
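To illustrate the single-variable Horner's method mentioned above, here is a minimal sketch in Python (the function name and the coefficient ordering are our own illustrative choices, not part of the proposal):

```python
def horner(coeffs, x):
    """Evaluate a polynomial at x using Horner's method.

    coeffs lists the coefficients from the highest degree down,
    so [2, -3, 1] represents 2*x**2 - 3*x + 1.  A degree-n polynomial
    takes only n multiplications and n additions this way, and Pan
    (1966) proved this operation count is optimal.
    """
    result = 0
    for c in coeffs:
        result = result * x + c
    return result

print(horner([2, -3, 1], 4))  # 2*16 - 3*4 + 1 = 21
```

The rewriting behind the loop is the nested form 2*x**2 - 3*x + 1 = ((2)*x - 3)*x + 1, which avoids computing powers of x separately.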
In this proposal we focus on Valiant's VF vs VNP conjecture. This conjecture says that the size of arithmetic formulas required to count the number of Hamiltonian cycles in directed graphs, a problem from combinatorics, grows superpolynomially. It is closely related to the P vs NP conjecture: if P equals NP, then the so-called polynomial hierarchy collapses, which most complexity theorists believe does not happen. If one proves that this collapse does not happen, then by a result of Karp and Lipton from 1980 this also implies that VF is not equal to VNP, so in a sense the VF vs VNP conjecture has to be resolved in order to make progress.

We hope to make progress on the VF vs VNP conjecture, because a recent paper (Bringmann, Ikenmeyer, Zuiddam 2017) shows a deep connection between arithmetic formula size and a multivariate (i.e., several-variable) version of Horner's method. This means that separating VF and VNP boils down to understanding the multivariate Horner's method at a deep level. It has connections to continued fractions and the continuant polynomial, which were recently studied by Charles Conley and Valentin Ovsienko (2018). Other than their work, only preliminary algebraic, geometric, and algorithmic results are known so far (very recently, by Shpilka and Medini), and we aim to provide the missing foundations. We aim to connect our new insights in several ways with Geometric Complexity Theory, a mathematical approach to complexity lower bounds that uses algebraic geometry and representation theory and that was founded in 2001 by Ketan Mulmuley and Milind Sohoni. This will lead to new ways of proving lower bounds on arithmetic formula size, which can ultimately help to separate VF and VNP.
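For readers unfamiliar with continuants, the classical (non-generalised) continuant K satisfies K() = 1, K(a_1) = a_1, and K(a_1,...,a_n) = a_n·K(a_1,...,a_{n-1}) + K(a_1,...,a_{n-2}), and a finite continued fraction equals a ratio of two continuants. The following minimal Python sketch illustrates this standard definition only (function names are our own; the generalised continuants studied by Conley and Ovsienko go beyond it):

```python
from fractions import Fraction

def continuant(a):
    """Classical continuant K(a_1, ..., a_n), computed via the
    recurrence K_n = a_n * K_{n-1} + K_{n-2} with K_0 = 1, K_{-1} = 0."""
    k_prev, k = 0, 1
    for ai in a:
        k_prev, k = k, ai * k + k_prev
    return k

# The continued fraction a_1 + 1/(a_2 + 1/(... + 1/a_n)) equals
# K(a_1,...,a_n) / K(a_2,...,a_n).  Check this for [2, 3, 4]:
a = [2, 3, 4]
cf = Fraction(a[-1])
for ai in reversed(a[:-1]):
    cf = ai + 1 / cf
assert cf == Fraction(continuant(a), continuant(a[1:]))  # both are 30/13
```

The nested-division structure of the continued fraction here is the single-variable shadow of the multivariate Horner-style nesting that the proposal connects to arithmetic formula size.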
