That’s not to say it’s easy work. It wasn’t until 1983 that the mathematician Hendrik Lenstra proved that the general problem was even solvable, providing the first algorithm that could do it. Lenstra thought about ILP geometrically. First, he turned the inequalities at the heart of ILP into a convex shape, such as a polygon. This shape represents the constraints of the individual problem you’re solving, whether it’s couch production or airline scheduling, so the shape’s interior corresponds to all possible values that could solve the inequalities, and thus the problem. Lenstra called this shape the convex body. The problem’s dimension determines the dimension of this shape: With two variables it takes the form of a flat polygon; with three it becomes a solid polyhedron, and so on.
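To make that picture concrete, here is a minimal sketch in Python. The constraint numbers are hypothetical, not drawn from any real scheduling or manufacturing problem: a few inequalities carve out a two-dimensional convex body, and a candidate plan solves the inequalities exactly when it sits inside that shape.

```python
# A minimal sketch of Lenstra's picture: linear inequalities A @ x <= b carve out
# a convex body, and candidate solutions are exactly the points inside it.
# The numbers are hypothetical, e.g. fabric and labor limits for two couch models.
import numpy as np

A = np.array([
    [ 2.0,  3.0],   # 2*x0 + 3*x1 <= 12   (say, fabric)
    [ 4.0,  1.0],   # 4*x0 + 1*x1 <= 10   (say, labor)
    [-1.0,  0.0],   # x0 >= 0
    [ 0.0, -1.0],   # x1 >= 0
])
b = np.array([12.0, 10.0, 0.0, 0.0])

def inside_convex_body(x):
    """A point satisfies every inequality exactly when it lies in the convex body."""
    return bool(np.all(A @ x <= b + 1e-9))

print(inside_convex_body(np.array([1.0, 2.0])))  # True: a feasible plan
print(inside_convex_body(np.array([5.0, 5.0])))  # False: violates the constraints
```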
Lenstra next imagined all the integers as an infinite set of grid points, known in mathematics as a lattice. A two-dimensional version looks like a sea of dots, and in three dimensions it looks like the points where steel beams in a building connect. The dimension of the lattice also depends on the dimension of a given problem.
To solve a given ILP problem, Lenstra showed that you just look for where the possible solutions meet the set of integers: at the intersection of the convex body and the lattice. And he came up with an algorithm that could search this space exhaustively — but to be effective, it sometimes had to break the problem up into pieces of smaller dimensions, adding many steps to the runtime.
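A deliberately naive sketch shows what “looking for where the body meets the lattice” means in the two-variable toy problem above. This is emphatically not Lenstra’s algorithm, which replaces exhaustive enumeration, whose cost explodes with the dimension, with recursion into lower-dimensional slices of the body.

```python
# Naive illustration of intersecting the convex body with the integer lattice:
# enumerate the 2D lattice points in a small box and keep the feasible ones.
# Lenstra's algorithm avoids this brute-force search, whose cost explodes with
# the dimension, by cutting the problem into lower-dimensional pieces instead.
import itertools
import numpy as np

A = np.array([[2.0, 3.0], [4.0, 1.0], [-1.0, 0.0], [0.0, -1.0]])
b = np.array([12.0, 10.0, 0.0, 0.0])

def integer_points_in_body(bound=7):
    hits = []
    for point in itertools.product(range(bound), repeat=2):  # 2D lattice points
        if np.all(A @ np.array(point, dtype=float) <= b + 1e-9):
            hits.append(point)
    return hits

print(integer_points_in_body())  # lattice points inside the body: (0, 0), (1, 2), (2, 2), ...
```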
In the following years, several researchers explored how to make this algorithm run faster. In 1988, Ravi Kannan and László Lovász introduced a concept called the covering radius, borrowed from the study of error-correcting codes, to help the convex body and lattice intersect more efficiently. Roughly, the covering radius measures how much the convex body must be scaled up before it is guaranteed to contain at least one integer point, no matter where you place it. As a result, the size of the covering radius also determines how efficiently you can solve the ILP problem.
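A one-dimensional toy example makes the idea tangible. Everything here, including the interval length and the helper function, is an illustrative assumption rather than anything from Reis and Rothvoss’s work: the body is an interval, the lattice is the integers, and the covering radius is the smallest blow-up factor at which the interval can no longer dodge every integer.

```python
# Toy sketch of the covering radius in one dimension: the "convex body" is an
# interval of length 0.4 and the lattice is the integers. We look for the
# smallest blow-up factor t such that the scaled interval contains an integer
# no matter where it is placed on the line.
import numpy as np

LENGTH = 0.4  # length of the original interval

def always_catches_an_integer(t, samples=10_001):
    # By periodicity it is enough to slide the left endpoint across one unit cell.
    starts = np.linspace(0.0, 1.0, samples)
    # [s, s + t*LENGTH] contains an integer iff floor(right end) >= ceil(left end).
    return bool(np.all(np.floor(starts + t * LENGTH) >= np.ceil(starts)))

print(always_catches_an_integer(2.5))  # True: at length 1 the interval always catches an integer
print(always_catches_an_integer(2.4))  # False: a slightly smaller copy can miss every integer
# So the covering radius of this toy body is 2.5, i.e. 1 / 0.4.
```

In higher dimensions the same question, how much the body must grow before it is guaranteed to capture a lattice point, becomes far subtler, which is why bounding the covering radius turned into a research problem in its own right.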
So it all came down to determining the size of the ideal covering radius. Unfortunately, this proved to be a hard problem on its own, and the best Kannan and Lovász could do was narrow down a possible value by searching for upper and lower bounds. They showed that the upper bound — the maximum size of the covering radius — scaled up linearly with the dimension. That translated into a reasonably fast algorithm, but not fast enough to significantly improve the ILP runtime. Over the next 30 years, other researchers could only do slightly better.
What ultimately helped Reis and Rothvoss break through was an unrelated mathematical result that focused purely on lattices. In 2016, Oded Regev and Stephens-Davidowitz showed, in effect, how many lattice points could fit within a specific shape. Reis and Rothvoss applied this to other shapes, which allowed them to better estimate the number of lattice points contained in an ILP covering radius, lowering the upper bound. “The latest breakthrough came with the realization that you can actually do other kinds of shapes,” Regev said.
This new tightened upper bound was a vast improvement, allowing Reis and Rothvoss to achieve a dramatic speedup of the overall ILP algorithm. Their work brings the runtime to (log n)^O(n), where n is the number of variables and the exponent O(n) means it scales linearly with n. (This expression is considered “almost” the same as the runtime of the binary problem.)
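For a feel of what that bound means, here is a rough back-of-the-envelope comparison. The constants hidden inside O(n) are ignored, and n^n is used as a stand-in for the older Lenstra- and Kannan-style bounds (an assumption about the prior state of the art, not something this passage states), so the numbers are only an order-of-magnitude illustration.

```python
# Order-of-magnitude growth comparison, ignoring the constants hidden in O(n):
#   (log n)^n  -- stand-in for the new (log n)^O(n) bound
#   2^n        -- stand-in for the binary (0-1) benchmark
#   n^n        -- stand-in for older n^O(n)-style bounds
import math

for n in (10, 100, 1000):
    digits_new    = n * math.log10(math.log(n))  # log10 of (log n)^n
    digits_binary = n * math.log10(2)            # log10 of 2^n
    digits_old    = n * math.log10(n)            # log10 of n^n
    print(f"n={n:4d}:  (log n)^n ~ 10^{digits_new:.0f}   "
          f"2^n ~ 10^{digits_binary:.0f}   n^n ~ 10^{digits_old:.0f}")
```

On this logarithmic scale the new bound lands much nearer the binary benchmark than the old n^n-style growth, which is the sense in which the two are “almost” the same.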
“It’s a triumph at the intersection of math, computer science and geometry,” said Daniel Dadush of the national research institute CWI in the Netherlands, who helped pioneer the algorithm Reis and Rothvoss used to measure the ILP runtime.
For now, the new algorithm hasn’t actually been used to solve any logistical problems, since it would take too much work to update today’s programs to make use of it. But for Rothvoss, that’s beside the point. “It’s about the theoretical understanding of a problem that has fundamental applications,” he said.
As to whether the computational efficiency of ILP could be further improved, researchers are still hopeful that they’ll keep approaching the ideal runtime — but not anytime soon. “That would require a fundamentally new idea,” Vempala said.