When they study these problems, scientists like to envision them as games against an adversary. The adversary chooses a devilish sequence of requests to make the online algorithm perform as badly as possible compared to its offline counterpart. To rob the adversary of some of its power, researchers use algorithms that include random decisions.
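To make the setup concrete, here is a minimal sketch, in Python, of a randomized online rule for the plumber-dispatch picture: servers sit at points, each request must be served the moment it arrives, and the algorithm flips weighted coins when deciding which server to move. The specific rule used here (send a server with probability inversely proportional to its distance) is purely an illustrative assumption, not one of the algorithms analyzed in the paper.

```python
import random

def serve_online(servers, requests, rng=None):
    """Toy randomized online k-server: serve each request as it arrives,
    with no knowledge of future requests. Cost is total distance moved."""
    rng = rng or random.Random(0)
    positions = list(servers)          # current server locations on a line
    total_cost = 0.0
    for r in requests:
        dists = [abs(p - r) for p in positions]
        if min(dists) == 0:            # a server is already at the request
            continue
        # Randomized choice: nearer servers are more likely to be sent,
        # but the adversary cannot predict any single move with certainty.
        weights = [1.0 / d for d in dists]
        i = rng.choices(range(len(positions)), weights=weights, k=1)[0]
        total_cost += dists[i]
        positions[i] = r               # move the chosen server
    return total_cost

# An adversary that knows this code can still tailor a nasty request
# sequence, but randomness keeps it from predicting every individual move.
print(serve_online(servers=[0, 10], requests=[4, 6, 4, 6, 4]))
```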
This strategy is quite effective, and researchers have suspected since the early 1990s that you can always find a randomized algorithm that reaches a specific performance goal: a competitive ratio proportional to log k, where k is the number of agents. This is called the randomized k-server conjecture, and researchers have shown that it holds for some spaces, or specific collections of points (the equivalent of houses that could call for plumbers). But it had never been proved in all cases.
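Stated a bit more precisely, using the standard definition of competitiveness (a paraphrase of the article's "proportional to log k"): an algorithm is c-competitive if its expected cost on every request sequence stays within a factor of c of the optimal offline cost, up to an additive constant, and the conjecture asked for a c on the order of log k on every space.

```latex
\mathbb{E}\big[\mathrm{ALG}(\sigma)\big] \;\le\; c \cdot \mathrm{OPT}(\sigma) + b
\quad \text{for every request sequence } \sigma,
\qquad \text{conjectured: } c = O(\log k) \text{ on every metric space.}
```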
Like most researchers, Rabani and his co-authors — Sébastien Bubeck of Microsoft Research and Christian Coester of the University of Oxford — figured the conjecture was true. “I had no reason to doubt it,” Coester said.
But that started to change as they worked on another online problem. It had connections to the k-server problem, and the known lower limit on the competitive ratio was unexpectedly high. It made them think perhaps a goal as low as log k for the k-server problem was overly optimistic.
Rabani said it was Coester who first suggested that the randomized k-server conjecture might be false. “As soon as he said it, it all made sense.”
To disprove the conjecture, the authors played the adversary, creating a perfect storm that would prevent any online algorithm from reaching a competitive ratio of log k. Their strategy had two parts: They constructed a family of complex, fractal-like spaces and designed a distribution of request sequences that would be difficult for any possible algorithm. On the algorithm’s very first move, the structure of the space meant it had to choose between two identical paths, one of which would eventually require extra travel based on the requests. Then the authors used a method called recursion to build spaces that multiplied these decision points, forcing the algorithm into a morass of bad options and driving up the cost.
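The following toy model, a deliberate caricature rather than the authors' actual construction, suggests how such recursive decision points compound: at each level the algorithm must commit to one of two branches that look identical at the moment of choice, and in this toy the committed branch turns out to be the costly one half the time.

```python
import random

def compounding_cost(depth, rng, penalty=1.0):
    """Toy caricature of nested decision points (NOT the paper's spaces):
    at each level the algorithm commits to one of two identical-looking
    branches; in this toy the adversary makes the chosen branch the
    costly one with probability 1/2, and the same dilemma repeats
    inside the branch that was chosen."""
    if depth == 0:
        return 0.0
    wrong_guess = rng.random() < 0.5        # unavoidable 50/50 commitment
    extra = penalty if wrong_guess else 0.0
    return extra + compounding_cost(depth - 1, rng, penalty)

rng = random.Random(1)
avg = sum(compounding_cost(12, rng) for _ in range(10_000)) / 10_000
print(avg)   # expected extra travel grows with the recursion depth
```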
The choices reminded Rabani of the Robert Frost poem “The Road Not Taken,” in which a traveler contemplates two potential paths through a yellow wood. “We just apply the poem recursively,” he joked. “And then things go really bad.”
The authors showed that, in the spaces they had constructed, a randomized algorithm can never achieve a competitive ratio better than one proportional to (log k)², pushing a universal goal of log k forever out of reach. They had refuted the conjecture.
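A one-line limit shows why a lower bound of this shape rules out the conjectured goal: for any fixed constant of proportionality c, a cost ratio that grows like (log k)² eventually overtakes c · log k.

```latex
\lim_{k \to \infty} \frac{(\log k)^2}{c \, \log k}
\;=\; \lim_{k \to \infty} \frac{\log k}{c}
\;=\; \infty .
```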
This work, which won a Best Paper Award at the 2023 Symposium on Theory of Computing, marks a “solidly theoretical” milestone, Gupta said. Results like this help indicate what performance we can hope for from our algorithms. In practice, however, algorithm designers often aren’t planning around worst-case scenarios, with an omnipotent adversary and complete ignorance of the future. When algorithms are unleashed on real-world problems, they often exceed theoretical expectations.
The paper, which also established limits for randomized algorithms used on other problems, could have implications for future work in the field. The result clearly “highlights the power” of the technique the authors used, Gupta said.
Perhaps those future findings will defy researchers’ expectations as this one did, Rabani said. “This is one of the cases where it feels really good to be wrong.”