But more salient than that particular gap was the overall behavior of the two formulas. Plot the fraction of the numbers between 1 and N that each formula represents, and you’ll see Behrend’s fraction rapidly shrink to zero as N grows. Roth’s fraction, on the other hand, slides toward zero, but slowly and gently. The two curves have very different shapes, and the true maximum proportion of elements in a set without arithmetic progressions could, so far as mathematicians knew, lie anywhere between them.
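To make the contrast concrete, here is a minimal sketch, assuming the standard shapes of the two bounds: Behrend’s fraction shrinks like exp(−√(ln N)), Roth’s like 1/ln ln N, with all constants set to 1 purely for illustration.

```python
import math

# Representative shapes of the two bounds (all constants set to 1):
#   Behrend-style fraction: exp(-sqrt(ln N))  -- plunges toward zero
#   Roth-style fraction:    1 / ln(ln N)      -- drifts down very slowly
for k in range(2, 12, 2):
    N = 10 ** k
    behrend = math.exp(-math.sqrt(math.log(N)))
    roth = 1 / math.log(math.log(N))
    print(f"N = 10^{k:<2}  Behrend-style: {behrend:.5f}  Roth-style: {roth:.5f}")
```

Even at N = 10^10, the Roth-style curve is still above 0.3, while the Behrend-style curve has already dropped below 0.01.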
Beginning in the 1980s, “there was a long sequence of, in hindsight, fairly incremental improvements by a large number of really famous mathematicians,” Green said. Every once in a while, someone would nudge Roth’s upper limit down by a hair or two, and eventually it got considerably lower. Behrend’s lower bound, by contrast, didn’t budge for decades. Mathematicians began to think that Behrend might not have been far from the true answer, Bloom said.
Until Kelley and Meka’s paper arrived in early 2023, the maximum size of a progression-free set was penned in from below by Behrend’s formula, and from above by Bloom and Sisask’s. Bloom and Sisask’s paper from July 2020 had crossed the critical “logarithmic” threshold by showing that a progression-free set must have substantially fewer than N/(log N) elements. But their result still sat high above Behrend’s. Kelley and Meka’s new upper bound is drastically closer to the floor set by Behrend.
“Meka and Kelley have sort of leapfrogged all this incremental progress,” said Terence Tao, a prominent mathematician at UCLA.
Their formula is almost the same as Behrend’s, with only a few parameters tweaked. As N approaches infinity, a plot of Kelley and Meka’s formula will eventually settle into a curve that resembles the Behrend curve. “Any bound of that shape just seemed like an impossible dream before,” Bloom said.
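Concretely, with c standing for unspecified positive constants that differ between the two statements, the bounds take the following shapes (the exponent 1/12 is the one reported in Kelley and Meka’s paper):

```latex
% Behrend (1946): progression-free sets at least this large exist.
|A| \;\ge\; N\,e^{-c\sqrt{\log N}}

% Kelley--Meka (2023): any set larger than this contains a progression.
|A| \;\le\; N\,e^{-c(\log N)^{1/12}}
```

Swapping the square root, an exponent of 1/2, for an exponent of 1/12 is the “tweaked parameter”: both curves have the same quasi-exponential shape.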
“I was really just quite staggered that they had made such an improvement,” Green said.
A Different Tack
Though Kelley and Meka had never fully ventured into pure mathematics research before, arithmetic progressions were familiar to them when they started. In general, computer scientists “are hungrily looking outward for techniques that would work to solve our problems,” Kelley said. The tools historically used to study the size of progression-free sets have become standard in complexity theory, a subfield of computer science, where pinning down the size of such a set is a quintessential showcase for techniques that probe the inner structure of sets.
In late 2021, Kelley and Meka were analyzing the chances that a team of players in a certain cooperative game would be able to win, a standard type of computer science problem. It occurred to them that techniques from research on the size of progression-free sets might be helpful. But they found it easier to directly study those techniques than to apply them to the cooperative game. “My best idea for how to make progress on this problem [was] to actually improve the tool itself, not to use it in a more clever way,” Kelley said.
“At some point, we just decided to work on this question directly,” Meka recalled. Six months later, the two researchers had figured out their strategy and just needed to iron out how to apply their method to the problem at hand.
To see how they arrived at their new upper limit, take any set of numbers between 1 and N and call it A. The density of A is the percentage of the numbers between 1 and N that it includes. Since there are so many possible arithmetic progressions between 1 and N, any high-density A whose elements aren’t chosen carefully is almost certain to contain lots of them.
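Here is a minimal sketch of those two notions (a brute-force check for exposition, not a computation from the proof). A random set of density about 0.5 inside {1, …, 1000} already contains tens of thousands of three-term progressions:

```python
import random

def density(A, N):
    """Fraction of the numbers 1..N that A includes."""
    return len(A) / N

def count_3aps(A):
    """Count progressions a < b < c in A with b - a == c - b."""
    elems = sorted(A)
    members = set(A)
    count = 0
    for i, a in enumerate(elems):
        for c in elems[i + 1:]:
            # The midpoint (a + c) / 2 must be a whole number in A.
            if (a + c) % 2 == 0 and (a + c) // 2 in members:
                count += 1
    return count

random.seed(0)
N = 1000
A = {n for n in range(1, N + 1) if random.random() < 0.5}
print(density(A, N))   # roughly 0.5
print(count_3aps(A))   # tens of thousands of progressions
```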
In their proof, Kelley and Meka imagined that A had few or no arithmetic progressions, and they attempted to trace out the consequences. If A was dense enough, they showed that an absence of progressions necessitated a level of structure within A that would inevitably result in a contradiction, meaning that A must, after all, contain at least one progression.
To understand that structure, they considered the set A + A, which consists of all the numbers made by adding two elements of A. They noticed that if A contains comparatively few arithmetic progressions, there must be a redundancy among the elements of A + A: Different pairs of numbers from A often add up to the same number.
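That redundancy is easy to observe numerically. The sketch below uses the classic progression-free “Stanley” set, the numbers whose base-3 digits are all 0 or 1 (an illustrative choice, not a set from the paper), and tallies how often different pairs produce the same sum, compared with a random set of the same size:

```python
import random
from collections import Counter

def additive_energy(A):
    """Count ordered quadruples (a, b, c, d) from A with a + b == c + d."""
    sums = Counter(a + b for a in A for b in A)
    return sum(m * m for m in sums.values())

def no_digit_two_base3(n):
    """True if n's base-3 representation uses only the digits 0 and 1."""
    while n:
        if n % 3 == 2:
            return False
        n //= 3
    return True

LIMIT = 3 ** 7  # 2187
# 128 elements, no three-term arithmetic progressions.
stanley = [n for n in range(LIMIT) if no_digit_two_base3(n)]

random.seed(1)
rand = random.sample(range(LIMIT), len(stanley))

print(additive_energy(stanley))  # 279936: heavy redundancy among pair sums
print(additive_energy(rand))     # markedly smaller: much less redundancy
```

In the progression-free set, a typical pair sum arises in many different ways, which is exactly the redundancy described above.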