Uncategorized Archives - Science and Nerds

Source:https://www.quantamagazine.org/how-colorful-ribbon-diagrams-became-the-face-of-proteins-20240823/#comments

How Colorful Ribbon Diagrams Became the Face of Proteins

2024-08-26 21:59:55

“The ribbon-diagram representation was invaluable,” said Anastassis Perrakis, a structural biologist at the Netherlands Cancer Institute and Utrecht University. It helped scientists communicate, teach and classify protein structure, and it captured the imaginations of scientists and nonscientists alike. It was able to “convince people how elegant [proteins] are and to see the complexity, without it being overwhelming,” Richardson said.

Today, ribbon diagrams are the ubiquitous face of proteins in scientific articles, textbooks and magazines, known for their particular combination of clarity and beauty. “It’s hard to imagine a scientific representation of data that is more meaningful,” said Philip Bourne, dean of the University of Virginia School of Data Science.

The diagrams have been so successful that it can be hard to remember that our cells are not, in fact, filled with colorful ribbons and broad arrows.

The Face of Proteins

Day in and day out, our cells are hard at work constructing different kinds of proteins. Proteins are made of strings of molecules called amino acids, and each amino acid has a side chain of atoms “coming off it like a lollipop,” said Janet Thornton, a computational biologist who retired from the European Molecular Biology Laboratory last year. The amino acid backbone innately folds into a three-dimensional shape, known as a protein structure, which determines which other molecules the protein can bind to and, therefore, its function in a cell.

Once a structural biologist completed what used to be a years-long process of reconstructing a protein’s 3D structure, they faced a new problem: how to communicate that structure to other scientists. In truth, it’s impossibly difficult to represent a protein’s realistic structure. Proteins are minuscule, on the order of nanometers, and can contain hundreds of thousands of atoms. “If those atoms are all drawn and then joined together, it becomes very difficult to see,” Thornton said.

Richardson’s innovation was a reproducible method of representing the folds of a protein’s amino acid backbone without getting bogged down in the details of specific atomic arrangements. She relied on proteins’ tendency to fold into two energetically favorable shapes: coils called alpha helices and flat shapes called beta strands, which can line up into so-called beta sheets. Then there are loops, which connect alpha helices to beta strands like corner pieces in a puzzle.

There are other folding structures, and “people have come up with lots of names” for them, Perrakis said. “But at the end of the day, the ones that matter are the helices and the sheets.”




Source:https://www.quantamagazine.org/mathematicians-prove-hawking-wrong-about-extremal-black-holes-20240821/#comments

Mathematicians Prove Hawking Wrong About the Most Extreme Black Holes

2024-08-23 21:58:41

To understand the universe, scientists look to its outliers. “You always want to know about the extreme cases — the special cases that lie at the edge,” said Carsten Gundlach, a mathematical physicist at the University of Southampton.

Black holes are the enigmatic extremes of the cosmos. Within them, matter is packed so tightly that, according to Einstein’s general theory of relativity, nothing can escape. For decades, physicists and mathematicians have used them to probe the limits of their ideas about gravity, space and time.

But even black holes have edge cases — and those cases have their own insights to give. Black holes rotate in space. As matter falls into them, they start to spin faster; if that matter has charge, they also become electrically charged. In principle, a black hole can reach a point where it has as much charge or spin as it possibly can, given its mass. Such a black hole is called “extremal” — the extreme of the extremes.

These black holes have some bizarre properties. In particular, the so-called surface gravity at the boundary, or event horizon, of such a black hole is zero. “It is a black hole whose surface doesn’t attract things anymore,” Gundlach said. But if you were to nudge a particle slightly toward the black hole’s center, it would be unable to escape.

In 1973, the prominent physicists Stephen Hawking, John Bardeen and Brandon Carter asserted that extremal black holes can’t exist in the real world — that there is simply no plausible way that they can form. Nevertheless, for the past 50 years, extremal black holes have served as useful models in theoretical physics. “They have nice symmetries that make it easier to calculate things,” said Gaurav Khanna of the University of Rhode Island, and this allows physicists to test theories about the mysterious relationship between quantum mechanics and gravity.

Now two mathematicians have proved Hawking and his colleagues wrong. The new work — contained in a pair of recent papers by Christoph Kehle of the Massachusetts Institute of Technology and Ryan Unger of Stanford University and the University of California, Berkeley — demonstrates that there is nothing in our known laws of physics to prevent the formation of an extremal black hole.

Their mathematical proof is “beautiful, technically innovative and physically surprising,” said Mihalis Dafermos, a mathematician at Princeton University (and Kehle’s and Unger’s doctoral adviser). It hints at a potentially richer and more varied universe in which “extremal black holes could be out there astrophysically,” he added.

That doesn’t mean they are. “Just because a mathematical solution exists that has nice properties doesn’t necessarily mean that nature will make use of it,” Khanna said. “But if we somehow find one, that would really [make] us think about what we are missing.” Such a discovery, he noted, has the potential to raise “some pretty radical kinds of questions.”

The Law of Impossibility

Before Kehle and Unger’s proof, there was good reason to believe that extremal black holes couldn’t exist.

In 1973, Bardeen, Carter and Hawking introduced four laws about the behavior of black holes. They resembled the four long-established laws of thermodynamics — a set of sacrosanct principles that state, for instance, that the universe becomes more disordered over time, and that energy cannot be created or destroyed.

In their paper, the physicists proved their first three laws of black hole thermodynamics: the zeroth, first and second. By extension, they assumed that the third law (like its standard thermodynamics counterpart) would also be true, even though they were not yet able to prove it.

That law stated that the surface gravity of a black hole cannot decrease to zero in a finite amount of time — in other words, that there is no way to create an extremal black hole. To support their claim, the trio argued that any process that would allow a black hole’s charge or spin to reach the extremal limit could also potentially result in its event horizon disappearing altogether. It is widely believed that black holes without an event horizon, called naked singularities, cannot exist. Moreover, because a black hole’s temperature is known to be proportional to its surface gravity, a black hole with no surface gravity would also have no temperature. Such a black hole would not emit thermal radiation — something that Hawking later proposed black holes had to do.

In 1986, a physicist named Werner Israel seemed to put the issue to rest when he published a proof of the third law. Say you want to create an extremal black hole from a regular one. You might try to do so by making it spin faster or by adding more charged particles. Israel’s proof seemed to demonstrate that doing so could not force a black hole’s surface gravity to drop to zero in a finite amount of time.

As Kehle and Unger would ultimately discover, Israel’s argument concealed a flaw.

Death of the Third Law

Kehle and Unger did not set out to find extremal black holes. They stumbled on them entirely by accident.

They were studying the formation of electrically charged black holes. “We realized that we could do it” — make a black hole — “for all charge-to-mass ratios,” Kehle said. That included the case where the charge is as high as possible, a hallmark of an extremal black hole.

Dafermos recognized that his former students had uncovered a counterexample to Bardeen, Carter and Hawking’s third law: They’d shown that they could indeed change a typical black hole into an extremal one within a finite stretch of time.

Kehle and Unger started with a black hole that doesn’t rotate and has no charge, and modeled what might happen if it was placed in a simplified environment called a scalar field, which assumes a background of uniformly charged particles. They then buffeted the black hole with pulses from the field to add charge to it.

These pulses also contributed electromagnetic energy to the black hole, which added to its mass. By sending diffuse, low-frequency pulses, the mathematicians realized that they could increase the black hole’s charge faster than its mass — precisely what they needed to complete their proof.

After discussing their result with Dafermos, they pored over Israel’s 1986 proof and identified his error. They also constructed two other solutions to Einstein’s equations of general relativity that involved different ways of adding charge to a black hole. Having disproved Bardeen, Carter and Hawking’s hypothesis in three different contexts, the work should leave no doubt, Unger said: “The third law is dead.”

The pair also showed that the formation of an extremal black hole would not open the door to a naked singularity, as physicists had feared. Instead, extremal black holes seem to lie on a critical threshold: Add the right amount of charge to a dense cloud of charged matter, and it will collapse to form an extremal black hole. Add more than that, and rather than collapse into a naked singularity, the cloud will disperse. No black hole will form at all. Kehle and Unger are just as excited by this result as they are by their proof that extremal black holes can exist.

“This is a beautiful example of math giving back to physics,” said Elena Giorgi, a mathematician at Columbia University.

The Impossible Made Visible

While Kehle and Unger proved that it’s theoretically possible for extremal black holes to exist in nature, there’s no guarantee that they do.

For one thing, the theoretical examples possess maximal charge. But black holes with a discernible charge have never been observed. It’s far more likely to see a black hole that’s quickly rotating. Kehle and Unger want to construct an example that reaches the extremal threshold for spin, rather than charge.

But working with spin is much more mathematically challenging. “You need a lot of new math, and new ideas, to do that,” Unger said. He and Kehle are just starting to investigate the problem.

In the meantime, a better understanding of extremal black holes can provide further insights into near-extremal black holes, which are thought to be plentiful in the universe. “Einstein didn’t think that black holes could be real [because] they’re just too weird,” Khanna said. “But now we know the universe is teeming with black holes.”

For similar reasons, he added, “we shouldn’t give up on extremal black holes. I just don’t want to put limits on nature’s creativity.”




Source:https://www.quantamagazine.org/waning-dark-energy-may-evade-swampland-of-impossible-universes-20240819/#comments

Diminishing Dark Energy May Evade the ‘Swampland’ of Impossible Universes

2024-08-21 21:58:23

At the time, the work seemed to conflict with what physicists thought they knew about the universe, with its presumed cosmological constant. And some string theorists still argue that stable universes with positive cosmological constants can exist in string theory; one attempt at constructing such a solution appeared earlier this month. Nevertheless, in light of DESI’s “hint” that the dark energy density is indeed dropping, the conjecture from Vafa and his co-authors now looks prescient.

Trans-Planckian Censorship

There’s another thought experiment that calls into question the very idea of an eternally unchanging cosmological constant.

Let’s say that the expansion of the universe accelerates at a constant rate. Eventually, the expansion gets so fast that even really small things instantly blow up.

In particular, we can consider the smallest distance imaginable — the “Planck length,” which is thought to be the smallest spatial increment that’s observable even in principle. The opposite end of the cosmic ruler is the Hubble horizon, the distance to the edge of the observable universe, beyond which we cannot see. In a universe with a positive cosmological constant, it’s only a matter of time — a lot of time — before the Planck length should grow to the size of the Hubble horizon. According to the trans-Planckian censorship conjecture (TCC), formulated by Bedroya and Vafa in 2019 and updated last year, this should never happen.

It’s a commonsense kind of statement, Bedroya said. “Something that is sub-Planckian” — and hence fundamentally unobservable — “shouldn’t get to the scale where we can observe it.” Nor should it get to a scale that is too big for us to observe it. “The smallest length scale that makes sense in your theory,” he said, “shouldn’t stretch out to be bigger than the biggest length scales that make sense in your theory.”

Assuming that dark energy is the cosmological constant, a straightforward calculation can tell you how long cosmic expansion would have to keep accelerating before the Planck length would start stretching beyond the Hubble horizon. That amount of time, Bedroya and Vafa computed, is at most 2 trillion years. Dark energy could not behave like a cosmological constant beyond that point.
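
As a rough illustration of where a number of that size comes from, here is a back-of-the-envelope sketch, assuming pure exponential expansion at roughly today’s Hubble rate; the round input values are standard approximations, not Bedroya and Vafa’s own figures.

```python
import math

# Back-of-the-envelope sketch (assumed round numbers, not the authors' calculation):
# how long exponential expansion takes to stretch the Planck length to the Hubble horizon.
planck_length_m = 1.6e-35   # Planck length, in meters
hubble_radius_m = 1.4e26    # present Hubble horizon, in meters (~14 billion light-years)
efold_time_yr = 1.4e10      # time for one e-folding of expansion, roughly 1/H today

efolds = math.log(hubble_radius_m / planck_length_m)   # about 140 e-foldings needed
years = efolds * efold_time_yr

print(f"{years:.1e} years")   # ~2e12, i.e. on the order of 2 trillion years
```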

The TCC allows for two different scenarios. Dark energy could experience a slow, steady decline. The other possibility is that dark energy holds steady for a while, like a ball stuck in a dip partway down a hill. But at some instant — somewhere between 10 billion and a trillion or so years from now — that will change. In a process called quantum tunneling, the ball will blast through the mound that has been confining it and immediately start rolling down the hill.

If DESI’s preliminary result is confirmed, that would align with the first scenario.

For Steinhardt, the allure of the swampland endeavor is how it draws on different ideas to make testable predictions about how the universe must be. We’re now entering the regime of measurement, he noted, “where we can actually test these predictions. If they are correct, we should see clear signs of time-varying dark energy very soon.”

‘Biggest Problem in the Universe’

Will DESI’s hint hold up? The team has already collected two more years’ worth of galaxy location data, and an analysis of this larger data set, according to Palanque-Delabrouille, could be out by spring.

Until then, many cosmologists are reserving judgment. “Based on the evidence that we have now, I am not willing to discard the [constant dark energy] model just yet,” said Licia Verde, a cosmologist at the University of Barcelona and a DESI member.

But if the initial DESI finding is confirmed, it will tell us something crucial about dark energy and its future. “Even more importantly,” Vafa said, “we can deduce that this is marking the beginning of the end of the universe. By ‘end,’ I don’t mean nothing happens after that. I’m saying something else happens that is very different from what we have now.” Perhaps dark energy will fall until it settles into a stabler, possibly negative value. With that, a new universe, with new laws, particles and forces, would replace the current one.

“We’re desperate for a clue,” Caldwell said. Dark energy is “literally the biggest problem in the universe.”




Source:https://www.quantamagazine.org/the-viral-paleontologist-who-unearths-pathogens-deep-histories-20240816/#comments

The Viral Paleontologist Who Unearths Pathogens’ Deep Histories

2024-08-19 21:58:25

You aren’t worried about catching a virus from your samples?

Not at all. Everything inside is dead. Every biological process has been disrupted by formalin. This is why the preparation is so amenable to preserving viral RNA: You put a complete stop to every enzymatic process, including the degradation of RNA. But humans carry respiratory pathogens all the time. When we go to study the 1918 flu, we don’t want to contaminate our samples with contemporary flu.

So, you are wearing protective equipment. The jar is open. What’s next?

In most collections, the curators insist that we don’t remove the specimens from the jars. So we typically work with tools, like scissors, which have specific shapes so that we can cut a small piece of the tissue while leaving the specimen in the jar. We put these samples into small tubes and bring them back to the lab. Then we boil them. It’s a bit counterintuitive, but it helps release more nucleic acid fragments. Exposure to formalin induces cross-links between DNA and RNA and other big molecules, so they become sort of locked to other molecules. But these links are reversible through heat. After boiling the sample for 15 minutes, we mash it and separate the nucleic acid from all other molecules. Then we take the nucleic acids and prepare them for sequencing.

Before you found the 1918 Spanish flu genome, the prevailing hypothesis was that our seasonal H1N1 flu virus formed when genomic segments from different influenza viruses were exchanged between strains. You found something different.

These viruses have segmented genomes, and sometimes they swap parts of their genome. When there is a co-infection [of multiple viruses] in a host, sometimes different segments will be packaged together. It was previously thought that probably one of the segments had been swapped between the 1918 pandemic virus and the seasonal flu. And what our genomic sequences suggest is that, actually, no, it did not happen. We showed that there was this accelerated rate of evolution for the pandemic lineage that led to seasonal flu.

We have no explanation for why it happened. But when you take that acceleration into account, the evolutionary history we infer becomes compatible with the idea that the eight segments of the pandemic virus would be the progenitors of the eight segments of the seasonal flu. It’s the same lineage.

Did the virus’s evolution accelerate because of the high number of people infected during the pandemic?

[A high number of infections alone] doesn’t mean you will see such an acceleration — something else must be going on. For example, we think that some SARS-CoV-2 variants that have also clearly experienced accelerated rates of evolution may have evolved in specific reservoirs within the human population, like immunosuppressed people. And the more such cases there are, the more opportunities there are for unlikely mutations. It seems that very rare events can have a disproportionate weight in pandemics.

How can your findings about the accelerated rate of viral evolution during the Spanish flu pandemic help us better prepare for future outbreaks?

Evolution never repeats itself, but sometimes it takes similar paths to similar effects. These findings give us a better sense of what the possibilities are. That accelerated evolution of 1918 flu is reminiscent of what we saw with SARS-CoV-2. And maybe that’s something that we should always keep in mind when we study pandemic events.

On the other hand, the measles virus appears to have changed more slowly than previously believed. When you added the 1912 measles genome into an evolutionary model, you discovered that this virus may have emerged more than 1,000 years earlier than anyone thought. Why was this old RNA such a game changer?

We wanted to estimate the age of divergence from its closest relative, rinderpest, a virus that infected cattle and was eradicated through vaccination in 2011. But the 40 to 50 years of genome sampling that we’ve previously had for measles was a very, very shallow period. It’s not reasonable to extrapolate from such a short period to a period that is much, much longer. If you do that, you will systematically underestimate the age of ancient events.

Why is that?

Because viral genomes are really small, changes [genetic mutations] can happen multiple times at the same position. It’s like a self-erasing process, which makes statistical modeling more difficult. And then there’s the fact that substitution rates [frequency of genetic mutations] themselves might not be homogeneous through time: They can accelerate and slow down. That’s why one solution is to sample deeper through time. Now, instead of the oldest measles genome being from the ’60s or ’70s, we have one from the 1910s. When we inputted this ancient genome into a statistical model, we ended up showing that the divergence from rinderpest happened about 2,500 years ago.

Was there anything significant about this specific date?

It coincided almost exactly with the time when very large cities started to pop up in different places around the world. Measles cannot persist unless a critical population size of about a quarter-million to a half-million people has been reached. It constantly needs new susceptible individuals. It’s a virus that’s very immunogenic — if you get it once, your immune system can block any further infection. That’s why vaccination works so well.

You recently pitted the 1912 virus against antibodies induced by the current measles vaccine. What happened?

We synthesized the measles surface protein based on this ancient genome and checked whether it would be recognized by the antibodies of recently vaccinated people. And it was. It extended the shelf life of the measles vaccine to 100 years, which means it’s probably not a priority to develop new measles vaccines.

So the measles virus evolves very slowly?

Actually, the virus changes quite fast, but it’s evolving in a sort of evolutionary cage: It cannot go beyond it without losing fitness. I’m not saying there will never be a vaccine escape. I’m saying it seems like a very unlikely event. But the more we let measles circulate in human populations, the more chances it has. Eradication is the only safe way to go. Now our study shows we have a tool — the vaccine — that is almost perfect. And if you have the perfect tool to eradicate such a dangerous disease, why would you not use it?




Source:https://www.quantamagazine.org/the-webb-telescope-further-deepens-the-biggest-controversy-in-cosmology-20240813/#comments

The Webb Telescope Further Deepens the Biggest Controversy in Cosmology

2024-08-16 21:58:04

When they compared their new numbers to the distances calculated from Hubble telescope data, “we saw phenomenal agreement,” said Gagandeep Anand, a member of the team based at the Space Telescope Science Institute. “That tells us, basically, that the work that has been done with Hubble is still good.”

Their latest results with Webb reaffirm the H0 value that they measured with Hubble a few years ago: 73.0, give or take 1.0 km/s/Mpc.

Given the crowding concern, though, Freedman had already turned to alternative stars that could serve as distance indicators. These are found in the outskirts of galaxies, far from the madding crowd.

One type is “tip-of-the-red-giant-branch,” or TRGB, stars. A red giant is an elderly star with a puffed-up atmosphere that glows brightly in red light. As it ages, a red giant will eventually ignite the helium in its core. At that moment, both the star’s temperature and its brightness suddenly drop off, said Kristen McQuinn, an astronomer at the Space Telescope Science Institute who led a Webb telescope project to calibrate distance measurements with TRGBs.

A typical galaxy has many red giants. If you plot the brightness of these stars against their temperatures, you’ll see the point at which their brightness drops off. The population of stars right before the drop is a good distance indicator, because in every galaxy, that population will have a similar spread of luminosities. By comparing the observed brightness of these stellar populations, astronomers can estimate relative distances.
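
A minimal sketch of that comparison step, assuming the TRGB populations in two galaxies have the same intrinsic luminosity; the flux values below are invented for illustration, not real measurements.

```python
# Hypothetical observed brightnesses (fluxes) of the TRGB population in two galaxies.
flux_galaxy_a = 4.0e-14
flux_galaxy_b = 1.0e-14

# Inverse-square law: flux scales as luminosity / distance**2, so for equally luminous
# populations the distance ratio is the square root of the flux ratio.
distance_ratio = (flux_galaxy_a / flux_galaxy_b) ** 0.5
print(distance_ratio)   # 2.0 -- galaxy B is about twice as far away as galaxy A
```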

(With any method, the physicists must deduce the absolute distance of at least one “anchor” galaxy in order to calibrate the whole scale. For their anchor, Riess, Freedman and other groups use an unusual nearby galaxy whose absolute distance has been determined geometrically through a parallax-like effect.)

Using TRGBs as distance indicators is more complex than using Cepheids, however. McQuinn and her colleagues used nine of the Webb telescope’s wavelength filters to understand precisely how the stars’ brightness depends on their color.

Astronomers are also beginning to turn to a new distance indicator: carbon-rich giant stars that belong to what’s called the J-region asymptotic giant branch (JAGB). These stars also sit away from a galaxy’s bright disk and emit a lot of infrared light. The technology to observe them at great distances wasn’t adequate until the Webb era, said Freedman’s graduate student Abigail Lee.

Freedman and her team applied for Webb telescope time to observe TRGBs and JAGBs along with the more established distance indicators, the Cepheids, in 11 galaxies. “I am a strong proponent of different methods,” she said.

An Evaporating Solution

On March 13, 2024, Freedman, Lee and the rest of their team sat around a table in Chicago to reveal what they had been hiding from themselves. Over the previous months, they had split into three groups. Each was tasked with measuring the distance to the 11 galaxies in their study using one of three methods: Cepheids, TRGBs or JAGBs. The galaxies also hosted the relevant kinds of supernovas, so their distances could calibrate the distances of supernovas in many more galaxies farther away. How fast these farther galaxies are receding from us (which is easily read off from their color) divided by their distances gives H0.
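
Here is a minimal sketch of that final division, with made-up numbers rather than the team’s actual measurements:

```python
# Hypothetical example of the step described above: recession velocity divided by distance.
recession_velocity_km_s = 7300.0   # read off from a distant galaxy's redshifted color (invented value)
distance_mpc = 100.0               # calibrated distance in megaparsecs (invented value)

H0 = recession_velocity_km_s / distance_mpc
print(H0, "km/s/Mpc")              # 73.0 km/s/Mpc
```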

The three groups had calculated their distance measurements with a unique random offset added to the data. When they met in person, they removed each of the offsets and compared the results.

All three methods gave similar distances, within 3% uncertainty. It was “sort of jaw-dropping,” Freedman said. The team calculated three H0 values, one for each distance indicator. All came within range of the theoretical prediction of 67.4.

At that moment, they appeared to have erased the Hubble tension. But when they dug into the analysis to write up the results, they found problems.

The JAGB analysis was fine, but the other two were off. The team noticed that there were large error bars on the TRGB measurement. They tried to shrink them by including more TRGBs. But when they did so, they found that the distance to the galaxies was smaller than they first thought. The change yielded a larger H0 value.

In the Cepheid analysis, Freedman’s team uncovered an error: In about half the Cepheids, the correction for crowding had been applied twice. Fixing that significantly increased the resulting H0 value. It “brought us more into agreement with Adam [Riess], which ought to make him a little happier,” Freedman said. The Hubble tension was resurrected.




Source:https://www.quantamagazine.org/the-geometric-tool-that-solved-einsteins-relativity-problem-20240812/#comments

The Geometric Tool That Solved Einstein’s Relativity Problem

2024-08-13 21:58:25

After Albert Einstein published his special theory of relativity in 1905, he spent the next decade trying to come up with a theory of gravity. But for years, he kept running up against a problem.

He wanted to show that gravity is really a warping of the geometry of space-time caused by the presence of matter. But he also knew that time and distance are counterintuitively relative: They change depending on your frame of reference. Moving quickly makes distances shrink and time slow down. How, then, might you describe gravity objectively, regardless of whether you’re stationary or moving?

Einstein found the solution in a new geometric theory published a few years earlier by the Italian mathematicians Gregorio Ricci-Curbastro and Tullio Levi-Civita. In this theory lay the mathematical foundation for what would later be dubbed a “tensor.”

Since then, tensors have become instrumental not just in Einstein’s general theory of relativity, but also in machine learning, quantum mechanics and even biology. “Tensors are the most efficient packaging device we have to organize our equations,” said Dionysios Anninos, a theoretical physicist at King’s College London. “They’re the natural language for geometric objects.”

They’re also tough to define. Talk to a computer scientist, and they might tell you that a tensor is an array of numbers that stores important data. A single number is a “rank 0” tensor. A list of numbers, called a vector, is a rank 1 tensor. A grid of numbers, or matrix, is a rank 2 tensor. And so on.
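
A bare-bones version of that array view, in plain Python (libraries such as NumPy or PyTorch organize data the same way):

```python
# Tensors as nested arrays: the rank is the number of indices needed to pick out one entry.
rank0 = 7.0                           # a single number: no indices
rank1 = [1.0, 2.0, 3.0]               # a vector: one index
rank2 = [[1.0, 0.0],                  # a matrix: two indices
         [0.0, 1.0]]

print(rank1[2], rank2[0][1])          # 3.0 and 0.0
```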

But talk to a physicist or mathematician, and they’ll find this definition wanting. To them, though tensors can be represented by such arrays of numbers, they have a deeper geometric meaning.

To understand the geometric notion of a tensor, start with vectors. You can think of a vector as an arrow floating in space — it has a length and a direction. (This arrow does not need to be anchored to a particular point: If you move it around in space, it remains the same vector.) A vector might represent the velocity of a particle, for example, with the length denoting its speed and the direction its bearing.

This information gets packaged in a list of numbers. For instance, a vector in two-dimensional space is defined by a pair of numbers. The first tells you how many units the arrow stretches to the right or left, and the second tells you how far it stretches up or down.

But these numbers depend on how you’ve defined your coordinate system. Say you change your coordinate system — by rotating the axes, for example.

You now express the vector in terms of how far it stretches in each direction of the new coordinate system. This gives you a different pair of numbers. But the vector itself has not changed: Its length and orientation remain the same, no matter what coordinate system you’re in. Moreover, if you know how to move from one coordinate system to another, you’ll also automatically know how your list of numbers should change.
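
A small sketch of that idea: rotating the coordinate axes changes the vector’s pair of numbers, but the arrow’s length stays the same. (The 30-degree angle and the vector (3, 4) are arbitrary choices for illustration.)

```python
import math

def components_in_rotated_axes(v, angle):
    """Components of the same arrow expressed in axes rotated by `angle` radians."""
    x, y = v
    return (x * math.cos(angle) + y * math.sin(angle),
            -x * math.sin(angle) + y * math.cos(angle))

v = (3.0, 4.0)
v_rotated = components_in_rotated_axes(v, math.radians(30))

print(v_rotated)                               # a different pair of numbers
print(math.hypot(*v), math.hypot(*v_rotated))  # same length, 5.0, in both coordinate systems
```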

Tensors generalize these ideas. A vector is a rank 1 tensor; tensors of higher rank contain more complicated geometric information.

For example, imagine you have a block of steel, and you want to describe all the forces that can be exerted on it. A rank 2 tensor — written as a matrix — can do this. Each of the faces of the block feels forces in three different directions. (For instance, the right face of the block can experience forces in the up-down direction, the left-right direction, and the forward-back direction.)

The tensor that encapsulates all these forces can therefore be represented by a matrix of nine numbers — one number for each direction for each of three faces. (Opposite faces, in this example, are considered redundant.)

Mathematicians often conceive of tensors as functions that take one or more vectors as inputs and produce another vector, or a number, as an output. This output doesn’t depend on the choice of coordinate system. (That constraint is what makes tensors distinct from functions more generally.) A tensor might, for instance, take in two vectors that form the edges of a rectangle, and output the rectangle’s area. If you rotate the rectangle, its length along the x-axis and height along the y-axis will change. But its area won’t.
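
A quick sketch of that rectangle example in two dimensions: the area function plays the role of the tensor, and its output ignores how the rectangle is oriented relative to the axes.

```python
import math

def area(u, v):
    """Unsigned area of the parallelogram (here, a rectangle) spanned by two 2D vectors."""
    return abs(u[0] * v[1] - u[1] * v[0])

def rotate(v, angle):
    """The same edge vector after the rectangle is rotated by `angle` radians."""
    x, y = v
    return (x * math.cos(angle) - y * math.sin(angle),
            x * math.sin(angle) + y * math.cos(angle))

u, v = (2.0, 0.0), (0.0, 3.0)   # edges of a 2-by-3 rectangle
theta = math.radians(45)
print(area(u, v), area(rotate(u, theta), rotate(v, theta)))   # 6.0 both times (up to rounding)
```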




Source:https://www.quantamagazine.org/how-base-3-computing-beats-binary-20240809/#comments

How Base 3 Computing Beats Binary

2024-08-12 21:58:04

The hallmark feature of ternary notation is that it’s ruthlessly efficient. With two binary bits, you can represent four numbers. Two “trits” — each with three different states — allow you to represent nine different numbers. A number that requires 42 bits would need only 27 trits.

If a three-state system is so efficient, you might imagine that a four-state or five-state system would be even more so. But the more states each digit can take on, the more room each digit requires. It turns out that ternary is the most economical of all possible integer bases for representing big numbers.

To see why, consider an important metric that tallies up how much room a system will need to store data. You start with the base of the number system, which is called the radix, and multiply it by the number of digits needed to represent some large number in that radix. For example, the number 100,000 in base 10 requires six digits. Its “radix economy” is therefore 10 × 6 = 60. In base 2, the same number requires 17 digits, so its radix economy is 2 × 17 = 34. And in base 3, it requires 11 digits, so its radix economy is 3 × 11 = 33. For large numbers, base 3 has a lower radix economy than any other integer base. (Surprisingly, if you allow a base to be any real number, and not just an integer, then the most efficient computational base is the irrational number e.)
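
Those digit counts and radix economies are easy to reproduce. The short script below (written independently of the article) checks them, along with the 42-bit versus 27-trit comparison above.

```python
def digits_needed(n, base):
    """How many base-`base` digits it takes to write the positive integer n."""
    count = 0
    while n > 0:
        n //= base
        count += 1
    return count

def radix_economy(n, base):
    """The metric described above: the radix times the number of digits needed."""
    return base * digits_needed(n, base)

n = 100_000
for base in (2, 3, 4, 10):
    print(base, digits_needed(n, base), radix_economy(n, base))
# base 2: 17 digits, economy 34; base 3: 11 digits, economy 33; base 10: 6 digits, economy 60

print(digits_needed(2**42 - 1, 3))   # 27 -- the largest 42-bit number fits in 27 trits
```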

In addition to its numerical efficiency, base 3 offers computational advantages. It suggests a way to reduce the number of queries needed to answer questions with more than two possible answers. A binary logic system can only answer “yes” or “no.” So if you’re comparing two numbers, x and y, to find out which is larger, you might first ask the computer “Is x less than y?” If the answer is no, you need a second query: “Is x equal to y?” If the answer is yes, then they’re equal; if the answer is no, then y is less than x.

A system using ternary logic can give one of three answers. Because of this, it requires only one query: “Is x less than, equal to, or greater than y?”
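
A toy version of that difference, with each yes/no test standing in for a binary query and a single three-valued result standing in for the ternary one:

```python
def compare_with_binary_queries(x, y):
    """Binary logic: only yes/no answers, so the comparison may take two queries."""
    if x < y:          # query 1: "Is x less than y?"
        return "x < y"
    if x == y:         # query 2: "Is x equal to y?"
        return "x == y"
    return "y < x"

def compare_with_one_ternary_query(x, y):
    """Ternary logic: one query with three possible answers (-1, 0 or +1)."""
    return (x > y) - (x < y)

print(compare_with_binary_queries(3, 7), compare_with_one_ternary_query(3, 7))   # x < y  -1
```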

Despite its natural advantages, base 3 computing never took off, even though many mathematicians marveled at its efficiency. In 1840, an English printer, inventor, banker and self-taught mathematician named Thomas Fowler invented a ternary computing machine to calculate weighted values of taxes and interest. “After that, very little was done for years,” said Bertrand Cambou, an applied physicist at Northern Arizona University.

In 1950, at the dawn of the digital age, a book-length report on the computing technology of the time suggested a computing advantage for ternary. And in the fall of 1958, visitors to the Soviet Union reported that engineers there had been developing a ternary computer called Setun — the first modern computer based on ternary logic and hardware. Soviet scientists built dozens of Setun computers.

Why didn’t ternary computing catch on? The primary reason was convention. Even though Soviet scientists were building ternary devices, the rest of the world focused on developing hardware and software based on switching circuits — the foundation of binary computing. Binary was easier to implement.

But the past few years have brought flashes of progress. Engineers have proposed ways to build ternary logical systems even on binary-based hardware. And Cambou, with the support of the U.S. military, has been developing cybersecurity systems that use base 3 computing. In papers published in 2018 and 2019, he and his collaborators rigorously described a ternary-based system that could replace the existing public key infrastructure, which includes the digital keys needed to encrypt or decrypt cybercommunications. Switching from bits to trits, he said, significantly reduces the error rate, because ternary states better manage erratic information.

Schoolhouse Rock! turned out to be prophetic. “The past and the present and the future,” the cartoon characters sang. “You get three as a magic number.”




Source:https://www.quantamagazine.org/physicists-pinpoint-the-quantum-origin-of-the-greenhouse-effect-20240807/#comments

Physicists Pinpoint the Quantum Origin of the Greenhouse Effect

2024-08-09 21:58:10

In 1896, the Swedish physicist Svante Arrhenius realized that carbon dioxide (CO2) traps heat in Earth’s atmosphere — the phenomenon now called the greenhouse effect. Since then, increasingly sophisticated modern climate models have verified Arrhenius’ central conclusion: that every time the CO2 concentration in the atmosphere doubles, Earth’s temperature will rise between 2 and 5 degrees Celsius.

Still, the physical reason why CO2 behaves this way remained a mystery until recently.

First, in 2022, physicists settled a dispute over the origin of the “logarithmic scaling” of the greenhouse effect. That refers to the way Earth’s temperature increases the same amount in response to any doubling of CO2, no matter the raw numbers.

Then, this spring, a team led by Robin Wordsworth of Harvard University figured out why the CO2 molecule is so good at trapping heat in the first place. The researchers identified a strange quirk of the molecule’s quantum structure that explains why it’s such a powerful greenhouse gas — and why pumping more carbon into the sky drives climate change. The findings appeared in The Planetary Science Journal.

“It’s a really nice paper,” said Raymond Pierrehumbert, an atmospheric physicist at the University of Oxford who was not involved in the work. “It’s a good answer to all those people who say that global warming is just something that comes out of impenetrable computer models.”

To the contrary, global warming is tied to a numerical coincidence involving two different ways that CO2 can wiggle.

“If it weren’t for this accident,” Pierrehumbert said, “then a lot of things would be different.”

An Old Conclusion

How could Arrhenius understand the basics of the greenhouse effect before quantum mechanics was even discovered? It started with Joseph Fourier, a French mathematician and physicist who realized exactly 200 years ago that Earth’s atmosphere insulates the planet from the freezing cold of space, a discovery that launched the field of climate science. Then, in 1856, an American, Eunice Foote, observed that carbon dioxide is particularly good at absorbing radiation. Next, the Irish physicist John Tyndall measured the amount of infrared light that CO2 absorbs, showing the effect which Arrhenius then quantified using basic knowledge about Earth.

Earth radiates heat in the form of infrared light. The gist of the greenhouse effect is that some of that light, instead of escaping straight to space, hits CO2 molecules in the atmosphere. A molecule absorbs the light, then reemits it. Then another does. Sometimes the light heads back down toward the surface. Sometimes it heads up to space, leaving the Earth one iota cooler, but only after traversing a jagged path to the cold upper reaches of the atmosphere.

Using a cruder version of the same mathematical approach climate scientists take today, Arrhenius concluded that adding more CO2 would cause the planet’s surface to get warmer. It’s like adding insulation in your walls to keep your house warmer in the winter — heat from your furnace enters at the same rate, but it escapes more slowly.

A few years later, however, the Swedish physicist Knut Ångström published a rebuttal. He argued that CO2 molecules only absorb a specific wavelength of infrared radiation — 15 microns. And there was already enough of the gas in the atmosphere to trap 100% of the 15-micron light Earth emits, so adding more CO2 would do nothing.

What Ångström missed was that CO2 can absorb wavelengths slightly shorter or longer than 15 microns, though less readily. This light gets captured fewer times along its trip to space.

But that capture rate changes if the amount of carbon dioxide doubles. Now the light has twice the molecules to dodge before escaping, and it tends to get absorbed more times along the way. It escapes from a higher, colder layer of the atmosphere, so the outflow of heat slows to a trickle. It’s the heightened absorption of these near-15-micron wavelengths that’s responsible for our changing climate.

Despite the mistake, Ångström’s paper cast enough doubt on Arrhenius’s theory among his contemporaries that discussion of climate change more or less exited the mainstream for half a century. Even today, skeptics of the climate change consensus sometimes cite Ångström’s erroneous carbon “saturation” argument.

Back to Basics

In contrast to those early days, the modern era of climate science has moved forward largely by way of computational models that capture the many complex and chaotic facets of our messy, shifting atmosphere. For some, this makes the conclusions harder to understand.

“I’ve talked to a lot of skeptical physicists, and one of their objections is ‘You guys just run computer models, and then you take the answers from this black-box calculation, and you don’t understand it deeply,’” said Nadir Jeevanjee, an atmospheric physicist at the National Oceanic and Atmospheric Administration (NOAA). “It’s a little unsatisfying not to be able to explain to someone on a chalkboard why we get the numbers we get.”

Jeevanjee and others like him have set out to build a simpler understanding of the impact of CO2 concentration on the climate.

A key question was the origin of the logarithmic scaling of the greenhouse effect — the 2-to-5-degree temperature rise that models predict will happen for every doubling of CO2. One theory held that the scaling comes from how quickly the temperature drops with altitude. But in 2022, a team of researchers used a simple model to prove that the logarithmic scaling comes from the shape of carbon dioxide’s absorption “spectrum” — how its ability to absorb light varies with the light’s wavelength.

This goes back to those wavelengths that are slightly longer or shorter than 15 microns. A critical detail is that carbon dioxide is worse — but not too much worse — at absorbing light with those wavelengths. The absorption falls off on either side of the peak at just the right rate to give rise to the logarithmic scaling.
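
To make the logarithmic scaling concrete, here is the widely used empirical fit for CO2 radiative forcing (Myhre et al., 1998) — an independent textbook formula, not a result from the 2022 paper discussed here.

```python
import math

def co2_forcing_w_m2(c_ppm, c0_ppm=280.0):
    """Approximate radiative forcing, in W/m^2, for a CO2 concentration c_ppm relative to c0_ppm."""
    return 5.35 * math.log(c_ppm / c0_ppm)

for c in (280, 560, 1120):
    print(c, round(co2_forcing_w_m2(c), 2))
# Each doubling adds the same ~3.7 W/m^2 of forcing, no matter the starting concentration.
```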

“The shape of that spectrum is essential,” said David Romps, a climate physicist at the University of California, Berkeley, who co-authored the 2022 paper. “If you change it, you don’t get the logarithmic scaling.”

The carbon spectrum’s shape is unusual — most gases absorb a much narrower range of wavelengths. “The question I had at the back of my mind was: Why does it have this shape?” Romps said. “But I couldn’t put my finger on it.”

Consequential Wiggles

Wordsworth and his co-authors Jacob Seeley and Keith Shine turned to quantum mechanics to find the answer.

Light is made of packets of energy called photons. Molecules like CO2 can absorb them only when the packets have exactly the right amount of energy to bump the molecule up to a different quantum mechanical state.

Carbon dioxide usually sits in its “ground state,” where its three atoms form a line with the carbon atom in the center, equidistant from the others. The molecule has “excited” states as well, in which its atoms undulate or swing about.

A photon of 15-micron light contains the exact energy required to set the carbon atom swirling about the center point in a sort of hula-hoop motion. Climate scientists have long blamed this hula-hoop state for the greenhouse effect, but — as Ångström anticipated — the effect requires too precise an amount of energy, Wordsworth and his team found. The hula-hoop state can’t explain the relatively slow decline in the absorption rate for photons further from 15 microns, so it can’t explain climate change by itself.

The key, they found, is another type of motion, where the two oxygen atoms repeatedly bob toward and away from the carbon center, as if stretching and compressing a spring connecting them. This motion takes too much energy to be induced by Earth’s infrared photons on their own.

But the authors found that the energy of the stretching motion is so close to double that of the hula-hoop motion that the two states of motion mix with one another. Special combinations of the two motions exist, requiring slightly more or less than the exact energy of the hula-hoop motion.
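
Plugging in approximate textbook spectroscopy values for CO2’s vibrations (standard reference numbers, not figures from the paper) shows how close the near-coincidence is:

```python
# Approximate CO2 vibrational wavenumbers from standard spectroscopy tables.
bend = 667.0      # the "hula-hoop" bending mode, in cm^-1
stretch = 1337.0  # the symmetric stretching mode, in cm^-1

print(round(1e4 / bend, 1))      # ~15.0 -- the bending mode absorbs near 15 microns
print(round(stretch / bend, 2))  # ~2.0 -- close enough to double for the states to mix
```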

This unique phenomenon is called Fermi resonance after the famous physicist Enrico Fermi, who derived it in a 1931 paper. But its connection to Earth’s climate was first made only last year, in a paper by Shine and his student, and the paper this spring is the first to fully lay it bare.

“The moment when we wrote down the terms of this equation and saw that it all clicked together, it felt pretty incredible,” Wordsworth said. “It’s a result that finally shows us how directly the quantum mechanics links to the bigger picture.”

In some ways, he said, the calculation helps us understand climate change better than any computer model. “It just seems to be a fundamentally important thing to be able to say in a field that we can show from basic principles where everything comes from.”

Joanna Haigh, an atmospheric physicist and emeritus professor at Imperial College London, agreed, saying the paper adds rhetorical power to the case for climate change by showing that it is “based on fundamental quantum mechanical concepts and established physics.”

This January, NOAA’s Global Monitoring Laboratory reported that the concentration of CO2 in the atmosphere has risen from its preindustrial level of 280 parts per million to a record high 419.3 parts per million as of 2023, triggering an estimated 1 degree Celsius of warming so far.




Grad Students Find Inevitable Patterns in Big Sets of Numbers https://scienceandnerds.com/2024/08/07/grad-students-find-inevitable-patterns-in-big-sets-of-numbers/ https://scienceandnerds.com/2024/08/07/grad-students-find-inevitable-patterns-in-big-sets-of-numbers/#respond Wed, 07 Aug 2024 21:58:05 +0000 https://scienceandnerds.com/2024/08/07/grad-students-find-inevitable-patterns-in-big-sets-of-numbers/ Source:https://www.quantamagazine.org/grad-students-find-inevitable-patterns-in-big-sets-of-numbers-20240805/#comments Grad Students Find Inevitable Patterns in Big Sets of Numbers 2024-08-07 21:58:05 Forty years later, in 1975, a mathematician named Endre Szemerédi proved the conjecture. His work spawned multiple lines of research that mathematicians are still exploring today. “Many of the ideas from his proof grew into worlds of their own,” said Yufei Zhao, […]

The post Grad Students Find Inevitable Patterns in Big Sets of Numbers appeared first on Science and Nerds.

]]>
Source:https://www.quantamagazine.org/grad-students-find-inevitable-patterns-in-big-sets-of-numbers-20240805/#comments

Grad Students Find Inevitable Patterns in Big Sets of Numbers

2024-08-07 21:58:05

Forty years later, in 1975, a mathematician named Endre Szemerédi proved the conjecture. His work spawned multiple lines of research that mathematicians are still exploring today. “Many of the ideas from his proof grew into worlds of their own,” said Yufei Zhao, Sah and Sawhney’s doctoral adviser at MIT.

Mathematicians have built on Szemerédi’s result in the context of finite sets of numbers. In this case, you start with a limited pool — every integer between 1 and some number N. What’s the largest fraction of the starting pool you can use in your set before you inevitably include a forbidden progression? And how does that fraction change as N changes?

For example, let N be 20. How many of these 20 numbers can you write down while still avoiding progressions that are, say, five or more numbers long? The answer, it turns out, is 16 — 80% of the starting pool.

Now let N be 1,000,000. If you use 80% of this new pool, you’re looking at sets that contain 800,000 numbers. It’s impossible for such large sets to avoid five-term progressions. You’ll have to use a smaller fraction of the pool.
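For readers who want to experiment with the N = 20 example, here is a small Python sketch. The helper function and the particular 16-element set (the numbers 1 through 20 with the multiples of 5 removed) are illustrative choices of ours; the article itself only reports the count of 16.

```python
# A small experiment with progression-free sets, following the N = 20 example above.

def has_k_term_progression(numbers, k):
    """Return True if `numbers` contains an arithmetic progression of length k."""
    s = set(numbers)
    for start in s:
        for second in s:
            d = second - start
            if d <= 0:
                continue
            # Check start, start + d, start + 2d, ..., start + (k-1)d
            if all(start + i * d in s for i in range(k)):
                return True
    return False

# One way to pick 16 of the numbers 1..20 with no five-term progression: drop the
# multiples of 5. Any five-term progression inside 1..20 has common difference
# 1, 2, 3 or 4, so its terms hit every residue mod 5 -- including a multiple of 5.
candidate = [n for n in range(1, 21) if n % 5 != 0]
print(len(candidate))                            # 16
print(has_k_term_progression(candidate, 5))      # False
print(has_k_term_progression(range(1, 21), 5))   # True: the full pool has plenty
```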

Szemerédi was the first to prove that this fraction must shrink to zero as N grows. Since then, mathematicians have tried to quantify exactly how quickly that happens. Last year, breakthrough work by two computer scientists nearly solved this question for three-term progressions, like {6, 11, 16}.

But when you’re instead trying to avoid arithmetic progressions with four or more terms, the problem becomes tougher. “The thing I love about this problem is it just sounds so innocent, and it’s not. It really bites,” Sawhney said.

That’s because longer progressions reflect an underlying structure that is difficult for classical mathematical techniques to uncover. The numbers x, y and z in a three-term arithmetic progression always satisfy the simple equation x – 2y + z = 0. (Take the progression {10, 20, 30}, for instance: 10 – 2(20) + 30 = 0.) It’s relatively easy to prove whether or not a set contains numbers that satisfy this kind of condition. But the numbers x, y, z and w in a four-term progression have to additionally satisfy the more complicated equation x² – 3y² + 3z² – w² = 0. Progressions with five or more terms must satisfy equations that are even more elaborate. This means that sets containing such progressions exhibit subtler patterns. It’s harder for mathematicians to show whether such patterns exist.
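As a quick sanity check on those identities, using an arbitrary example progression rather than one from the article, the four numbers 10, 20, 30, 40 satisfy the four-term equation, and the first three satisfy the three-term one:

```python
# Checking both identities on the example progression 10, 20, 30, 40.
x, y, z, w = 10, 20, 30, 40
print(x - 2*y + z)                    # 0: the three-term identity for 10, 20, 30
print(x**2 - 3*y**2 + 3*z**2 - w**2)  # 0: 100 - 1200 + 2700 - 1600
```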

In the late 1990s, Timothy Gowers, a mathematician now at the Collège de France, developed a theory to overcome this obstacle. He was later awarded the Fields Medal, math’s highest honor, in part for that work. In 2001, he applied his techniques to Szemerédi’s theorem, proving a better bound on the size of the largest sets that avoid arithmetic progressions of any given length. While mathematicians used Gowers’ framework to tackle other problems over the next two decades, his 2001 record stood.

In 2022, Leng — then in his second year of graduate school at UCLA — set out to understand Gowers’ theory. He didn’t have Szemerédi’s theorem in mind; rather, he hoped to answer a technical question related to the techniques Gowers had developed. Other mathematicians, fearing that the effort needed to solve the problem would eclipse the result, tried to dissuade him. “For good reason,” Leng later said.

For more than a year, he didn’t get anywhere. But eventually, he started making progress. Sah and Sawhney, who had been thinking about related questions, learned about his work. They were intrigued. “I was amazed it’s even possible to think like this,” Sawhney said.

They realized that Leng’s research might help them make further progress on Szemerédi’s theorem. Within a few months, the three young mathematicians figured out how to get a better upper bound on the size of sets with no five-term progressions. They then extended their work to progressions of any length, marking the first advance on the problem in the 23 years since Gowers’ proof. Gowers had shown that, as your starting pool of numbers gets bigger, the progression-avoiding sets you can make get relatively smaller at a certain rate. Leng, Sah and Sawhney proved that this happens at a rate that’s exponentially faster.

“It’s a huge achievement,” Zhao said. “This is the kind of problem that I really would not suggest to any student because it is so incredibly hard.”

Mathematicians are even more excited by the method the trio used to get their new bound. For everything to work, they first had to strengthen an older, more technical result by Ben Green of the University of Oxford, Terence Tao of UCLA and Tamar Ziegler of the Hebrew University of Jerusalem. Mathematicians feel that this result — a sort of elaboration of Gowers’ theory — can be improved even further. “It feels like we have an imperfect understanding of the theory,” Green said. “We’re just seeing a few shadows of it.”

Since completing the proof in February, Sawhney has finished his Ph.D. He is now a Clay Fellow at Columbia University. Sah is still at MIT, completing his graduate studies. But the pair’s collaboration has not yet slowed down. “Their incredible strength is taking something that is extremely technically demanding and understanding it and improving upon it,” said Zhao. “It’s difficult to overstate the level of their overall accomplishments.”




Source:https://www.quantamagazine.org/what-is-analog-computing-20240802/#comments

What Is Analog Computing?

2024-08-05 21:58:09

Computing today is almost entirely digital. The vast informational catacombs of the internet, the algorithms that power AI, the screen you’re reading this on — all are powered by electronic circuits manipulating binary digits — 0 and 1, off and on. We live, it has been said, in the digital age.

But it’s not obvious why a system that operates using discrete chunks of information would be good at modeling our continuous, analog world. And indeed, for millennia humans have used analog computing devices to understand and predict the ebbs and flows of nature.

Among the earliest known analog computers is the Antikythera mechanism from ancient Greece, which used dozens of gears to predict eclipses and calculate the positions of the sun and moon. Slide rules, invented in the 17th century, executed the mathematical operations that would one day send men to the moon. (The abacus, however, doesn’t count as analog: Its discrete “counters” make it one of the earliest digital computers.) And in the late 19th century, William Thomson, who later became Lord Kelvin, designed a machine that used shafts, cranks and pulleys to model the influence of celestial bodies on the tides. Its successors were used decades later to plan for the D-Day beach landings in Normandy.

What do these devices have in common? They are all physical systems set up to obey the same mathematical equations behind the phenomena you want to understand. Thomson’s tide-calculating computer, for example, was inspired by 19th-century mathematical advances that turned the question of predicting the tide into a complex trigonometric expression. Calculating that expression by hand was both laborious and error-prone. The cranks and pulleys in Thomson’s machine were configured so that by spinning them, the user would get an output that was identical to the result of the expression that needed to be solved.
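To make that “complex trigonometric expression” concrete, here is a minimal sketch of the harmonic idea Thomson’s machine evaluated mechanically: the predicted tide is a sum of cosine terms, one per tidal constituent. The amplitudes, speeds and phases below are invented for illustration; a real prediction uses constants fitted to observations at a particular port, and Thomson’s device summed the terms with gears and pulleys rather than code.

```python
import math

# Each tidal constituent contributes amplitude * cos(speed * t - phase).
# The three constituents below use made-up values, purely for illustration.
constituents = [
    # (amplitude in meters, speed in degrees per hour, phase in degrees)
    (1.20, 28.98, 40.0),
    (0.35, 30.00, 75.0),
    (0.20, 15.04, 110.0),
]

def tide_height(t_hours, mean_level=2.0):
    """Predicted water level (meters) at time t, as a sum of harmonic terms."""
    return mean_level + sum(
        amp * math.cos(math.radians(speed * t_hours - phase))
        for amp, speed, phase in constituents
    )

# Tabulate the first day of the prediction -- the curve the machine would trace.
for t in range(0, 25, 3):
    print(f"hour {t:2d}: {tide_height(t):.2f} m")
```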



