He found a field rich with deep, unanswered questions that were mathematical at heart. One of his earliest groundbreaking efforts focused on a seeming contradiction: whether it was possible to convince someone else that a mathematical statement had been proved without showing how.
“The person who sees the proof doesn’t learn anything about the proof itself,” said Ran Raz, a computer scientist at Princeton University. In 1985, Shafi Goldwasser, Silvio Micali, and Charles Rackoff introduced this concept of zero-knowledge interactive proofs, demonstrating its use for a few statements. Wigderson, along with Micali and Oded Goldreich, later expanded on that idea, showing that, under standard cryptographic assumptions, any statement that can be proved also has a zero-knowledge proof.
“This is a key result in cryptography; it’s extremely central,” Raz said. Using a zero-knowledge proof, someone can prove they correctly encrypted or signed a message using their secret key, without revealing any information about it. “Avi has some extremely important results in cryptography, and this may be the most important of them.”
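To make the idea concrete, here is a minimal sketch, in Python, of the kind of commit-and-challenge protocol behind that result, applied to the classic example of graph 3-coloring. The hash-based commitment, the tiny example graph, and the round structure are simplifications for illustration; a real protocol needs cryptographically binding commitments and careful analysis.

```python
import hashlib
import random
import secrets

def commit(value: int) -> tuple[bytes, bytes]:
    """Commit to a small value; returns (commitment, opening nonce)."""
    nonce = secrets.token_bytes(16)
    return hashlib.sha256(nonce + bytes([value])).digest(), nonce

def opens_to(commitment: bytes, value: int, nonce: bytes) -> bool:
    return hashlib.sha256(nonce + bytes([value])).digest() == commitment

# Public statement: "this graph is 3-colorable." Secret witness: the coloring.
edges = [(0, 1), (1, 2), (2, 0), (2, 3)]
coloring = {0: 0, 1: 1, 2: 2, 3: 0}  # the prover's secret

def one_round() -> bool:
    # Prover randomly relabels the three colors, then commits to every vertex.
    perm = random.sample(range(3), 3)
    shuffled = {v: perm[c] for v, c in coloring.items()}
    openings = {v: commit(c) for v, c in shuffled.items()}

    # Verifier picks one edge at random and challenges the prover on it.
    u, v = random.choice(edges)

    # Prover opens only those two commitments. The verifier sees just a
    # random pair of distinct colors -- nothing about the coloring itself.
    for w in (u, v):
        commitment, nonce = openings[w]
        if not opens_to(commitment, shuffled[w], nonce):
            return False
    return shuffled[u] != shuffled[v]

# Many independent rounds drive a cheating prover's success odds toward zero.
print(all(one_round() for _ in range(100)))  # True for an honest prover
```

Because the colors are freshly shuffled before every round, the verifier learns nothing about the actual coloring, yet a prover without a valid coloring is caught on some edge with fixed probability each round.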
But perhaps Wigderson’s most foundational result lies in another area: linking computational hardness to randomness. By the late 1970s, computer scientists had realized that for many hard problems, algorithms that employed randomness, also called probabilistic algorithms, could vastly outcompete their deterministic alternatives. In 1977, for example, Robert Solovay and Volker Strassen introduced a randomized algorithm that could determine whether a number is prime faster than the best deterministic algorithms of the time.
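The Solovay-Strassen test itself is short enough to sketch. The Python version below is a standard textbook rendering, built on Euler's criterion: for a prime n, a^((n-1)/2) ≡ (a/n) (mod n), where (a/n) is the Jacobi symbol. A composite n fails this check for at least half of all bases a, so each random trial at least halves the chance of a wrong answer.

```python
import random

def jacobi(a: int, n: int) -> int:
    """Jacobi symbol (a/n), defined for odd n > 0."""
    a %= n
    result = 1
    while a != 0:
        while a % 2 == 0:           # pull out factors of 2
            a //= 2
            if n % 8 in (3, 5):
                result = -result
        a, n = n, a                 # quadratic reciprocity
        if a % 4 == 3 and n % 4 == 3:
            result = -result
        a %= n
    return result if n == 1 else 0

def is_probably_prime(n: int, rounds: int = 20) -> bool:
    """Solovay-Strassen: wrong on a composite with probability < 2**-rounds."""
    if n < 3 or n % 2 == 0:
        return n == 2
    for _ in range(rounds):
        a = random.randrange(2, n)
        x = jacobi(a, n)
        # Euler's criterion: for prime n, a**((n-1)//2) == (a/n) mod n.
        if x == 0 or pow(a, (n - 1) // 2, n) != x % n:
            return False
    return True

print(is_probably_prime(2**61 - 1))  # True: a Mersenne prime
print(is_probably_prime(561))        # False: a Carmichael number
```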
For some problems, probabilistic algorithms can point the way to deterministic ones. In the early 1980s, Wigderson worked with Richard Karp of the University of California, Berkeley, to connect the idea of randomness to problems considered computationally hard, meaning no known deterministic algorithms can solve them in a reasonable amount of time. “We don’t know how to prove they’re hard,” Wigderson said. However, he and Karp found a randomized algorithm for a certain hard problem that they were later able to derandomize, effectively uncovering a deterministic algorithm for it. Around the same time, other researchers showed how computational hardness assumptions from cryptography could enable derandomization in general.
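Their construction is technical, but the flavor of “derandomizing” shows up in a standard textbook example (not the Karp-Wigderson algorithm itself): the method of conditional expectations, which converts the random strategy “split a graph’s vertices into two sides by coin flips, cutting half the edges on average” into a deterministic greedy rule with the same guarantee.

```python
import random

edges = [(0, 1), (0, 2), (1, 2), (1, 3), (2, 3), (3, 4)]
n = 5

def cut_size(side):
    return sum(side[u] != side[v] for u, v in edges)

# Randomized version: flip a coin per vertex. Each edge is cut with
# probability 1/2, so the expected cut has at least len(edges)/2 edges.
random_cut = cut_size([random.randrange(2) for _ in range(n)])

# Derandomized version: fix vertices one by one, always choosing the side
# that does not decrease the conditional expectation of the final cut --
# equivalently, the side opposite the majority of already-fixed neighbors.
side = [None] * n
for v in range(n):
    fixed = [side[u] for u, w in edges if w == v and side[u] is not None]
    fixed += [side[w] for u, w in edges if u == v and side[w] is not None]
    side[v] = 0 if fixed.count(1) >= fixed.count(0) else 1

# The greedy choice guarantees at least len(edges)/2 cut edges, with no coins.
print(random_cut, cut_size(side))  # deterministic cut here: 4 of 6 edges
```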
The unreasonable effectiveness of randomness led him to think about the nature of randomness itself. He, like other researchers at the time, questioned how necessary it was for efficient problem-solving and under what conditions it could be eliminated altogether. “Initially, it was not clear if this was only our own stupidity, that we cannot remove randomness,” he said. “But the larger question was whether randomness can always be eliminated efficiently or not.” He realized that the need for randomness was intimately tied to the computational difficulty of the problem.
In a 1994 paper, he and the computer scientist Noam Nisan illuminated that connection. They proved that if any natural hard problems exist, as most computer scientists suspect, then every efficient randomized algorithm can be replaced by an efficient deterministic one. “You can always eliminate randomness,” Wigderson said.
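The proof is far beyond a code snippet, but the recipe it justifies can be sketched. Suppose a generator stretches a short seed into bits the algorithm cannot tell apart from true coin flips; Nisan and Wigderson build such a generator from any suitably hard function, whereas the SHA-256-based one below is only a stand-in. Then running the randomized algorithm on the generator's output for every possible seed and taking a majority vote is fully deterministic, and with logarithmically short seeds it is only polynomially slower.

```python
import hashlib

def toy_generator(seed: int, num_bits: int) -> list[int]:
    """Stand-in generator: stretch a short seed into num_bits bits.
    (Not a real Nisan-Wigderson generator, which uses a hard function.)"""
    bits, counter, state = [], 0, seed.to_bytes(4, "big")
    while len(bits) < num_bits:
        state = hashlib.sha256(state + counter.to_bytes(4, "big")).digest()
        bits.extend((byte >> i) & 1 for byte in state for i in range(8))
        counter += 1
    return bits[:num_bits]

def derandomize(randomized_alg, num_random_bits: int, seed_bits: int) -> bool:
    """Replace coin flips by enumerating every seed and taking a majority
    vote; with short seeds the loop is polynomial, and deterministic."""
    votes = sum(randomized_alg(toy_generator(s, num_random_bits))
                for s in range(2 ** seed_bits))
    return 2 * votes > 2 ** seed_bits

# Example "algorithm" that consumes random bits: do more than 40% come up 1?
alg = lambda bits: sum(bits) > 0.4 * len(bits)
print(derandomize(alg, num_random_bits=256, seed_bits=10))  # True
```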