Computer scientists often classify tasks based on whether they can be solved with an efficient algorithm, or whether they are instead “NP-hard” (meaning they can’t be efficiently solved as the size of the problem grows, so long as a widely believed but unproved conjecture about computational complexity is true). For instance, the traveling salesperson problem, which asks for the shortest route that visits every city in a network exactly once, is NP-hard. So is determining whether a graph — a collection of vertices connected by edges — can be labeled with at most three colors so that any two connected vertices have different colors.
It turns out that it’s NP-hard to find even an approximate solution to many of these tasks. Say you want to label the vertices of a graph with different colors in a way that satisfies some list of constraints. If it’s NP-hard to satisfy all of those constraints, what about trying to fulfill just 90% of them, or 75%, or 50%? Below some threshold, it might be possible to come up with an efficient algorithm, but above that threshold, the problem becomes NP-hard.
For decades, computer scientists have worked to nail down thresholds for different optimization problems of interest. But some questions evade this kind of description. While an algorithm might guarantee 80% of the best solution, it might be NP-hard to achieve 95% of the best solution, leaving unsettled the question of where exactly between 80% and 95% the problem tips into NP-hard territory.
The unique games conjecture, or UGC, offers a way to immediately pinpoint the answer. It makes a statement that at first seems more limited: that it is NP-hard to tell the difference between a graph for which you can satisfy almost all of a certain set of coloring constraints (say, more than 99%) and a graph for which you can satisfy barely any (say, less than 1%).
But shortly after the UGC was proposed in 2002, researchers showed that if the conjecture is true, then you can easily compute the hardness threshold for any constraint satisfaction problem. (This is because the UGC also implies that a known algorithm achieves the best possible approximation for all of these problems.) “It was precisely the linchpin for all these optimization problems,” O’Donnell said.
And so theoretical computer scientists set out to prove the UGC — a task that ultimately led some of them to discover spherical cubes.
Making Hard Problems Harder
In 2005, O’Donnell was working at Microsoft Research. He and two colleagues — Uriel Feige, now at the Weizmann Institute of Science, and Guy Kindler, now at the Hebrew University of Jerusalem — teamed up to tackle the UGC.
They had a vague idea of how they wanted to proceed. They would start with a question about graphs — one that’s very similar to the UGC. The so-called maximum cut (“max-cut”) problem asks, when given a graph, how to partition its vertices into two sets (or colors) so that the number of edges connecting those sets is as large as possible. (You can think of max cut as a question about the best way to color a graph with two colors, so that as few edges as possible connect vertices of the same color.)
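Max cut can be made concrete with a minimal brute-force sketch, not from the researchers' work (the function name is illustrative, and the exhaustive search is exponential, so it only suits tiny graphs):

```python
from itertools import product

def max_cut(num_vertices, edges):
    """Brute-force max cut: try every way of splitting the vertices
    into two sets (colors 0 and 1) and count the edges that cross
    between the sets, keeping the largest count found."""
    best = 0
    for colors in product([0, 1], repeat=num_vertices):
        cut = sum(1 for u, v in edges if colors[u] != colors[v])
        best = max(best, cut)
    return best

# A 5-cycle: no 2-coloring is proper, so the best cut crosses 4 of 5 edges.
edges = [(0, 1), (1, 2), (2, 3), (3, 4), (4, 0)]
print(max_cut(5, edges))  # → 4
```

The same count, read the other way, is the article's parenthetical reformulation: one edge (here, 5 minus 4) must connect two vertices of the same color.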
If the UGC is true, it would imply that, given an arbitrary graph, an efficient approximation algorithm can only get within about 87% of the true max cut of that graph. Doing any better would be NP-hard.
Feige, Kindler and O’Donnell instead wanted to go in the opposite direction: They hoped to show that max cut is hard to approximate, and then use that to prove the UGC. Their plan relied on the strength of a technique called parallel repetition — a clever strategy that makes hard problems harder.
Say you know that it’s NP-hard to distinguish between a problem you can solve and one you can mostly solve. Parallel repetition enables you to build on that to show a much stronger hardness result: that it’s also NP-hard to distinguish between a problem you can solve and one you can barely solve at all. “This nonintuitive, deep phenomenon … is in the guts of a lot of computer science today,” Naor said.
But there’s a catch. Parallel repetition doesn’t always amplify a problem’s hardness as much as computer scientists want it to. In particular, there are aspects of the max-cut problem that “create a big headache for parallel repetition,” Regev said.
Feige, Kindler and O’Donnell focused on showing that parallel repetition could work despite the headaches. “This is a really complicated thing to analyze,” said Dana Moshkovitz, a theoretical computer scientist at the University of Texas, Austin. “But this seemed tantalizingly close. Parallel repetition seemed like it would [help make] this connection from max cut to unique games.”
As a warmup, the researchers tried to understand parallel repetition for a simple case of max cut, what Moshkovitz called “the stupidest max cut.” Consider an odd number of vertices connected by edges to form a circle, or “odd cycle.” You want to label each vertex with one of two colors so that no pair of adjacent vertices have the same color. In this case, that’s impossible: One edge will always connect vertices of the same color. You have to delete that edge, “breaking” the odd cycle, to get your graph to satisfy the problem’s constraint. Ultimately, you want to minimize the total fraction of edges you need to delete to properly color your graph.
Parallel repetition allows you to consider a high-dimensional version of this problem — one in which you have to break all the odd cycles that appear. Feige, Kindler and O’Donnell needed to show that as the number of dimensions gets very large, you have to delete a very large fraction of edges to break all the odd cycles. If that was true, it would mean parallel repetition could effectively amplify the hardness of this “stupid max-cut” problem.
That’s when the team discovered a curious coincidence: There was a geometric way to interpret whether parallel repetition would work the way they hoped it would. The secret lay in the surface area of cubical foams.
From Lemons to Lemonade
Their problem, rewritten in the language of foams, boiled down to showing that spherical cubes cannot exist — that it’s impossible to partition high-dimensional space along the integer lattice into cells with surface area much smaller than that of the cube. (A larger surface area corresponds to needing to delete more “bad” edges in the odd-cycles graph, as the computer scientists hoped to show.)
“We were like, oh, what a weird geometry problem, but that’s probably true, right?” O’Donnell said. “We really needed that to be the true answer.” But he, Feige and Kindler couldn’t prove it. So in 2007, they published a paper outlining how they planned to use this problem to help attack the UGC.
Soon enough, their hopes were dashed.
Ran Raz, a theoretical computer scientist at Princeton who had already proved several major results about parallel repetition, was intrigued by their paper. He decided to analyze parallel repetition for the odd-cycle problem, in part because of the connection to geometry that Feige, Kindler and O’Donnell had uncovered.
Raz didn’t start with the foam problem but attacked the computer science version of the question head-on. He showed that you can get away with deleting far fewer edges to break all the odd cycles in a graph. In other words, parallel repetition can’t sufficiently amplify the hardness of this max-cut problem. “The parameters that one gets from parallel repetition exactly, exactly fall short of giving this,” Moshkovitz said.
“Our plan to use parallel repetition to show the hardness of unique games didn’t even work in the simplest case,” O’Donnell said. “This kind of broke the whole plan.”
Although disappointing, Raz’s result also hinted at the existence of spherical cubes: shapes capable of tiling n-dimensional space with a surface area that scaled in proportion to the square root of n. “We were like, well, let’s make lemonade out of lemons and take this disappointing technical result about parallel repetition and discrete graphs, and turn it into a neat, interesting result in geometry,” O’Donnell said.
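That square-root scaling has a classical benchmark, not spelled out in the article: a ball of unit volume in n dimensions has surface area on the order of the square root of n, while the cube tiling the same lattice has surface area 2n. A standard computation, using the volume formula for the n-ball and Stirling's approximation:

```latex
% Volume of the n-ball of radius r:
V_n(r) = \frac{\pi^{n/2}}{\Gamma(n/2 + 1)}\, r^n .
% Setting V_n(r_n) = 1 and applying Stirling's approximation gives
r_n \approx \sqrt{\frac{n}{2\pi e}},
\qquad
A_{\mathrm{ball}} = \frac{n}{r_n} \approx \sqrt{2\pi e\, n} \approx 4.13\sqrt{n},
% compared with the unit cube's surface area of 2n.
```

A "spherical cube," then, is a unit-volume tile whose surface area stays within a constant factor of this spherical lower bound.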
He and Kindler, in collaboration with the computer scientists Anup Rao and Avi Wigderson, pored over Raz’s proof until they’d learned its techniques well enough to translate them to the foam problem. In 2008, they showed that spherical cubes are indeed possible.
“It’s a nice way to reason about the problem,” said Mark Braverman of Princeton. “It’s beautiful.”
And it raised questions on the geometry side of the story. Because they’d used Raz’s work on parallel repetition to construct their tiling shape, Kindler, O’Donnell, Rao and Wigderson ended up with something ugly and Frankenstein-like. The tile was messy and full of indentations. In mathematical terms, it was not convex. While this worked for their purposes, the spherical cube lacked properties that mathematicians prefer — properties that make a shape more concrete, easier to define and study, and more suitable for potential applications.
“There’s something very unsatisfying about their construction,” Regev said. “It looks like an amoeba. It doesn’t look like what you would expect, a nice tiling body.”
That’s what he and Naor set out to find.
Breaking Free of the Cage
From the outset, Naor and Regev realized they’d have to start from scratch. “Partly because convex bodies are so restrictive, none of the previous techniques had any chance of working,” said Dor Minzer, a theoretical computer scientist at the Massachusetts Institute of Technology.
In fact, it was entirely plausible that convexity would be too restrictive — that a convex spherical cube simply does not exist.
But last month, Naor and Regev proved that you can partition n-dimensional space along integer coordinates with a convex shape whose surface area is pretty close to that of the sphere. And they did it entirely geometrically — returning the problem to its mathematical roots.
They first had to get around a major obstacle. The cubical foam problem requires that each tile be centered at integer coordinates. But in the integer lattice, there are very short distances between some points — and those short distances cause trouble.
Consider a point in the two-dimensional grid. It’s located 1 unit away from its nearest points in the horizontal and vertical directions. But in the diagonal direction, the nearest point is √2 units away, a discrepancy that gets worse in higher dimensions. In the n-dimensional integer lattice, the nearest points are still 1 unit away, but those “diagonal” points are now √n units away. In, say, 10,000 dimensions, this means that the closest “diagonal” neighbor to any point is 100 units away even though there are 20,000 other points (two in each coordinate direction) that are only 1 unit away.
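The arithmetic in this paragraph is easy to verify. A small sketch (the function name is illustrative) computing the two distances and the number of unit-distance neighbors in the integer lattice:

```python
import math

def lattice_distances(n):
    """In the n-dimensional integer lattice, compare a point's nearest
    axis-aligned neighbors (offset like (1, 0, ..., 0)) with its
    'diagonal' neighbor (offset (1, 1, ..., 1))."""
    axis = 1.0                   # distance to an axis-aligned neighbor
    diagonal = math.sqrt(n)      # distance to the all-ones diagonal neighbor
    num_axis_neighbors = 2 * n   # two unit-distance neighbors per coordinate
    return axis, diagonal, num_axis_neighbors

print(lattice_distances(2))      # → (1.0, 1.4142135623730951, 4)
print(lattice_distances(10000))  # → (1.0, 100.0, 20000)
```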