Repetition doesn’t always have to be humdrum. In mathematics, it is a powerful force, capable of generating bewildering complexity.
Even after decades of study, mathematicians find themselves unable to answer questions about the repeated execution of very simple rules — the most basic “dynamical systems.” But in trying to do so, they have uncovered deep connections between those rules and other seemingly distant areas of math.
For example, the Mandelbrot set, which I wrote about last month, is a map of how a family of functions — described by the equation f(x) = x² + c — behaves as the value of c ranges over the so-called complex plane. (Unlike real numbers, which can be placed on a line, complex numbers have two components, which can be plotted on the x- and y-axes of a two-dimensional plane.)
No matter how much you zoom in on the Mandelbrot set, novel patterns always arise, without limit. “It’s completely mind-blowing to me, even now, that this very complex structure emerges from such simple rules,” said Matthew Baker of the Georgia Institute of Technology. “It’s one of the really surprising discoveries of the 20th century.”
The complexity of the Mandelbrot set emerges in part because it is defined in terms of numbers that are themselves, well, complex. But, perhaps surprisingly, that isn’t the whole story. Even when c is a straightforward real number like, say, –3/2, all sorts of strange phenomena can occur. Nobody knows what happens when you repeatedly apply the equation f(x) = x² – 3/2, using each output as the next input in a process known as iteration. If you start iterating from x = 0 (the “critical point” of a quadratic equation), it’s unclear whether you will produce a sequence that eventually converges toward a repeating cycle of values, or one that continues to endlessly bounce around in a chaotic pattern.
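To get a feel for why the question is so stubborn, here is a minimal Python sketch (my own illustration, not drawn from any of the research described here) that carries out the iteration for c = –3/2, starting from the critical point 0. The helper name orbit is just a label for this example.

```python
# Iterate f(x) = x^2 + c from the critical point x = 0 and record the orbit.
# With c = -3/2 the values stay in a bounded interval but show no obvious
# pattern; a finite computation like this cannot settle whether the true
# orbit eventually cycles or remains chaotic forever.

def orbit(c, steps=20):
    x, values = 0.0, []
    for _ in range(steps):
        values.append(x)
        x = x * x + c
    return values

for value in orbit(-1.5):
    print(f"{value: .6f}")
```

Floating-point rounding muddies the water further: in a chaotic regime, tiny errors compound quickly, so even a very long printout is only suggestive.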
For values of c smaller than –2 or bigger than 1/4, iteration quickly blows up to infinity. But within that interval, there are infinitely many values of c known to produce chaotic behavior, and infinitely many cases like –3/2, where “we don’t know what happens, even though it’s super concrete,” said Giulio Tiozzo of the University of Toronto.
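The same kind of sketch can sort a handful of c values into those categories. The classify helper below is a rough, hand-rolled test of my own, an illustration rather than how dynamicists actually settle such questions: it declares an orbit divergent once it leaves a generous bound, and looks for a short near-repeating cycle among late orbit points.

```python
# Roughly classify the orbit of 0 under f(x) = x^2 + c.
# Known facts: for c outside [-2, 1/4] the orbit escapes to infinity;
# inside the interval it may converge, cycle, or behave chaotically.

def classify(c, steps=10_000, bound=4.0, tol=1e-9):
    x = 0.0
    for _ in range(steps):
        x = x * x + c
        if abs(x) > bound:
            return "escapes to infinity"
    # Crude cycle check: look for a short period among late orbit points.
    tail = [x]
    for _ in range(63):
        x = x * x + c
        tail.append(x)
    for period in range(1, 32):
        if all(abs(tail[i] - tail[i + period]) < tol for i in range(32)):
            return f"appears to settle into a cycle of period {period}"
    return "stays bounded, but no short cycle found"

for c in (0.5, -1.0, -1.5, -2.5):
    print(f"c = {c:5}: {classify(c)}")
```

On the values above, 0.5 and –2.5 escape, –1 settles into its period-2 cycle, and –3/2 gets only the inconclusive verdict, which is the best any finite experiment can do.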
But in the 1990s, the Stony Brook University mathematician Misha Lyubich, who figured prominently in my report on the Mandelbrot set, proved that in the interval between –2 and 1/4, the vast majority of values of c produce nice “hyperbolic” behavior. (The mathematicians Jacek Graczyk and Grzegorz Swiatek independently proved the result around the same time.) This means that the corresponding equations, when iterated, converge to a single value or to a repeating cycle of numbers.
A decade later, a trio of mathematicians showed that most values of c are hyperbolic not only for quadratic equations, but for any family of real polynomials (more general functions that combine variables raised to powers, like x⁷ + 3x⁴ + 5x² + 1). And now one of them, Sebastian van Strien of Imperial College London, believes he has a proof of this property for an even broader class of equations called real analytic functions, which include sine, cosine and exponential functions. Van Strien hopes to announce the result in May. If it holds up after peer review, it will mark a major advance in the characterization of how real one-dimensional systems behave.
Unlikely Intersections and Entropy Bagels
There are infinitely many real quadratic equations that, when iterated from zero, are known to end up producing a repeating cycle of numbers. But if you restrict c to rational values — those that can be written as fractions — only three values eventually generate periodic sequences: 0, –1 and –2. “These dynamical systems are very, very special,” said Clayton Petsche of Oregon State University.
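Checking those three values takes only a few lines of arithmetic, sketched here in Python purely to illustrate the claim in the previous paragraph; the helper first_values is my own shorthand.

```python
# The orbit of 0 under f(x) = x^2 + c for the three rational values of c
# that eventually produce a repeating cycle:
#   c =  0:  0 -> 0 -> 0 -> ...              (fixed from the start)
#   c = -1:  0 -> -1 -> 0 -> -1 -> ...       (cycle of period 2)
#   c = -2:  0 -> -2 -> 2 -> 2 -> 2 -> ...   (lands on a fixed point)

def first_values(c, steps=6):
    x, values = 0, []
    for _ in range(steps):
        values.append(x)
        x = x * x + c
    return values

for c in (0, -1, -2):
    print(f"c = {c:2}: {first_values(c)}")
```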
In a paper published last year, Petsche and Chatchai Noytaptim of the University of Waterloo proved that they are even more special than they appear at first glance. The mathematicians looked at “totally real” numbers, which are more restrictive than real numbers but less restrictive than rational ones.
If you plug a number into a polynomial and get an output of zero, that number is a solution to, or root of, the polynomial. For example, 2 is a root of f(x) = x² – 4, f(x) = x³ – 10x² + 31x – 30, and infinitely many other equations. Such polynomials can have roots that are real, or roots that are complex. (For instance, the roots of x² + 1 are the square root of –1, written as i, and –i — both complex numbers.)
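As a quick sanity check on those examples (again just a throwaway sketch, not part of Petsche and Noytaptim's argument), plugging the numbers in confirms the roots:

```python
# Verify the root examples from the text: 2 is a root of both real
# polynomials, and i and -i (written 1j and -1j in Python) are the
# roots of x^2 + 1.

x = 2
print(x**2 - 4)                      # 0, so 2 is a root of x^2 - 4
print(x**3 - 10*x**2 + 31*x - 30)    # 0, so 2 is a root of the cubic too
print((1j)**2 + 1, (-1j)**2 + 1)     # 0j 0j, so i and -i are roots of x^2 + 1
```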