Additive Combinatorics

Lead Research Organisation: University of Oxford
Department Name: Mathematical Institute

Abstract

Over the course of my DPhil, I intend to focus on combinatorial problems with an additive or number-theoretic flavour. One problem I plan to spend some time thinking about initially is motivated by a recent result of Fox and Lovász (refining a theorem of Green) stating that a subset of a large finite vector space with few additive triples x + y = z can be reduced in size by a small amount so as to remove all additive triples. Green's original result gave weak bounds for the amount by which the subset must be reduced, and Fox and Lovász improved these bounds dramatically. Green's original paper applied to arbitrary abelian groups, whereas Fox and Lovász's work required the group to be a vector space. The problem, then, is to see whether any progress can be made in applying these ideas to arbitrary abelian groups. In particular, the method Fox and Lovász used involved a result of Kleinberg, which gave an upper bound for the size of sets X, Y and Z with the property that each element of one of those sets lies in exactly one additive triple x + y = z with x taken from X, y from Y and z from Z. Kleinberg's result came with polynomial bounds, leading to polynomial bounds in the removal lemma. Such a result is provably too strong for arbitrary abelian groups, but a potential approach would be to see how a modification of Kleinberg's result would affect the result on removing additive triples in general groups.
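
Schematically, and in notation of my own rather than that of the papers above, the removal lemma in question takes the following form: if G is an abelian group of size N and A is a subset of G, then
\[
\#\{(x, y, z) \in A^3 : x + y = z\} \le \delta(\epsilon) N^2 \quad \Longrightarrow \quad \exists\, B \subseteq A,\ |B| \le \epsilon N,\ \text{such that } A \setminus B \text{ contains no triples } x + y = z.
\]
The difference between the two results lies in how small $\delta(\epsilon)$ must be taken: Green's argument requires $\delta$ to shrink extremely rapidly (roughly of tower type) as $\epsilon$ decreases, whereas in vector spaces Fox and Lovász show that one may take $\delta$ to be polynomial in $\epsilon$.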

A second problem I intend to think about relates to so-called generalised von Neumann theorems. These results say that if the average of a product of functions, evaluated at tuples of numbers satisfying a linear relation, is large, then one of the functions must have a large Gowers norm. The natural next step is to ask what happens when, instead of tuples satisfying a linear relation, we consider tuples satisfying a quadratic form, and, more specifically, what the correct analogue of the Gowers norm should be in that setting. A particularly pertinent example is to take each function to be the indicator function of the set of primes; in that case, bounds for this problem would transfer to statements about when the quadratic form has solutions in which all of the coordinates are prime.
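
To give the flavour in the simplest linear case, and stated roughly rather than as a precise claim about this project: for functions $f_0, f_1, f_2$ on $\mathbb{Z}/N\mathbb{Z}$ (with $N$ odd) bounded in absolute value by 1, the generalised von Neumann theorem for three-term arithmetic progressions asserts
\[
\Big| \mathbb{E}_{x, d \in \mathbb{Z}/N\mathbb{Z}} \, f_0(x)\, f_1(x + d)\, f_2(x + 2d) \Big| \le \min_{0 \le i \le 2} \| f_i \|_{U^2},
\]
so a large average over progressions forces one of the functions to have a large Gowers $U^2$ norm. The question above asks what should stand on the right-hand side when the progression $(x, x+d, x+2d)$ is replaced by the solutions of a quadratic form.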

These results all fall under the general umbrella of additive combinatorics, which has strong links to analytic number theory. In particular, they can tell us facts about prime numbers, which are of great importance to fields such as cryptography and computer science, as well as being interesting in their own right.


This project falls within the EPSRC Number Theory research area.

Studentship Projects

Project Reference    Relationship    Related To      Start         End           Student Name
EP/N509711/1                                         01/10/2016    30/09/2021
1789896              Studentship     EP/N509711/1    01/10/2016    31/03/2020    James Aaronson
 
Description

Firstly, in my original research statement, I said that I would think about the relationship between matchings and removal in cyclic groups. The idea is that we have a set A of elements in a cyclic group, and we assume that there are not many triples (a, b, c) of elements of A which sum to zero. A result due to Green says that, in this situation, we don't have to remove too many elements of our set in order to destroy all such triples. A related problem concerns so-called additive matchings, which are essentially three sets X, Y and Z in which each element lies in exactly one triple satisfying x + y + z = 0, with x in X, y in Y and z in Z. In finite vector spaces, strong bounds on the largest additive matching are known, and Fox and Lovász found an argument that uses this to give good bounds for the triangle removal problem described above. Their argument doesn't obviously apply to cyclic groups, where comparably strong bounds on additive matchings are not known. However, I managed to find a way to make their argument work in cyclic groups; in other words, if the best known upper bounds on the size of an additive matching in a cyclic group could be improved even modestly, this would lead to a substantial improvement in the bounds for the triangle removal problem.
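
To fix the definition used above (again in my own notation): an additive matching of size $m$ in an abelian group consists of sets $X = \{x_1, \dots, x_m\}$, $Y = \{y_1, \dots, y_m\}$ and $Z = \{z_1, \dots, z_m\}$ such that
\[
x_i + y_j + z_k = 0 \quad \text{if and only if} \quad i = j = k,
\]
so that each element lies in exactly one zero-sum triple. The reduction described above turns an upper bound on $m$ in terms of the size of the group into a bound for the removal problem.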

Secondly, I've considered a problem relating to solutions of linear equations in three variables within sets of integers. The idea is that we are given an equation, say x + 2y = 3z, and we want to find a set of integers with as many solutions to that equation as possible. For the equation x + 2y = 3z, we might try the numbers from 1 up to N, for some large N; this gives roughly (N^2)/3 solutions. However, this gets worse as the coefficients of the equation get larger: for the equation 123x + 67y = 190z, the numbers from 1 up to N give only about (N^2)/190 solutions, which is quite small. My first result in this area is that for any such equation there is a set of N integers with about (N^2)/12 solutions, which is considerably larger when the coefficients are large, and my second is that the constant 1/12 is optimal; in other words, there is an equation for which no set of N integers has more than about (1/12 + 0.00001)N^2 solutions.
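
The counts quoted above are asymptotic, but they are easy to check numerically. The following sketch (Python, with the cut-off N = 300 chosen purely for illustration) counts solutions by brute force and recovers figures close to (N^2)/3 and (N^2)/190 respectively:

    # Count solutions (x, y, z) in {1, ..., N}^3 to a*x + b*y = c*z,
    # to compare with the (N^2)/3 and (N^2)/190 figures quoted above.
    def count_solutions(a, b, c, N):
        count = 0
        for x in range(1, N + 1):
            for y in range(1, N + 1):
                total = a * x + b * y
                # z is determined by x and y; it must be an integer in {1, ..., N}
                if total % c == 0 and 1 <= total // c <= N:
                    count += 1
        return count

    N = 300
    for (a, b, c) in [(1, 2, 3), (123, 67, 190)]:
        sols = count_solutions(a, b, c, N)
        print(f"{a}x + {b}y = {c}z on [1, {N}]: {sols} solutions; N^2/{c} is about {N * N / c:.0f}")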

Exploitation Route

The most obvious way in which these results could be useful is in the problem of obtaining good bounds for the removal lemma. In addition, some of the lemmas used in the argument for the linear equations problem could be useful in related problems; for example, it is not known whether there exists a set with more solutions to the equation x + 2y = 3z than the numbers from 1 up to N, and some of these lemmas might help to settle that question.
Sectors

Other