Kicking at the Darkness

The spiraling dance of a pair of colliding black holes should last for billions of years. Yet we’ve caught about 10 black hole collisions since 2016 — far more than we would expect. Some process must be at work that speeds things up, bringing black holes together more quickly than anticipated.

The trouble starts before black holes form in the first place. Black holes are essentially dead relics of massive stars. As these progenitor stars age, they pass through a phase when they expand into supergiant stars several times their original size. At this point, if two stars are orbiting close together, one will be subsumed into the other, and the pair will collide before ever becoming black holes.

This suggests that any pair of large black holes must start their existence extremely far apart — so far apart that collisions will be extremely rare. And yet, such collisions are fairly common. “Us theorists, we really like it when there is a new puzzle around,” said Smadar Naoz, a theoretical astrophysicist at the University of California, Los Angeles. “Everyone is jumping with new ideas.”

So how can you get tight-knit black hole pairs that were never tight-knit supergiant star pairs? One potential explanation holds that two massive stars could start far apart and grow closer as they collapse into black holes. Or perhaps some stars collapse without ever ballooning into supergiant stars, or solitary black holes meet one another and bind to form pairs.

In the past few years, another idea has appeared. Under the right conditions, a third object can trigger a process that brings a pair of objects closer together. This three-body effect provides a way for far-apart massive stars to first collapse into black holes, and then draw close enough to collide. And because massive stars are often in triple systems, researchers say it’s important to take this three-body effect into account.

To understand how this process might work, imagine the Earth and the moon spinning around each other. These two will trace steady orbits around their common center of mass almost indefinitely — unless something interferes.

A third object wouldn’t necessarily affect the stability of the Earth-moon system, so long as the three objects rotated in the same flat plane (as almost every object in the solar system does).

Yet objects in space are not, in general, limited to a single, flat surface. Imagine the third object rotating around the Earth-moon system at an angle, so that the orbits are not aligned. If the angle between the orbits is large enough, gravitational effects from the third object could interfere with the orbits of the Earth and moon. Their paths will stretch out into long ellipses, which take the objects much farther apart before swinging them much closer together. When they’re closest, other effects can kick in to shrink their orbits even further. Eventually, the Earth and moon could crash into one another, with cataclysmic consequences for both.

In the world of black holes, this three-body process, or “channel,” comes in a few different flavors. The third object could be a stellar-mass black hole, or a massive star that has yet to collapse into one. It could even be one of the supermassive black holes found at the centers of most galaxies. In this case, two massive stars in the galactic center collapse to become black holes. This pair of smaller black holes and the supermassive black hole then make a three-body system. The supermassive black hole may even trigger special effects of general relativity that make the pair of smaller black holes more likely to merge, researchers reported in a paper posted to the scientific preprint site arxiv.org in June.

“The beauty of this channel is that there are very few uncertainties in the way the black holes merge,” said Fabio Antonini, an astrophysicist at the University of Surrey, who has published several papers on the idea. “It’s just gravity, it’s just dynamics.”

But like each of the other proposed formation channels for black hole mergers, the triple process has pieces that researchers still need to figure out. For example, it’s unclear how often the orbits in triple-star systems will be angled enough to trigger the effect.

One central advantage of this idea is that it can be tested. Black holes that merge through the triple process should have orbits that are less circular, or more eccentric, than those of black holes that merge from an undisturbed binary system. Scientists may be able to measure the eccentricities of black holes’ orbits in the near future, said Daniel Holz, an astrophysicist at the University of Chicago and a member of the LIGO collaboration, which searches for the gravitational waves coming from black hole collisions.

“Part of what makes those exciting is you might end up with systems, for example, with high eccentricity,” said Holz, who doesn’t study the triple-system process. “And if that’s something you could measure, then that would be kind of a smoking gun that something fancy is going on.”

The rotations of black holes may also tell scientists whether a black hole merger happened because of a triple-system process. If a black hole binary system formed through the evolution of two stars without the influence of other bodies, they should be rotating and orbiting more or less in the same direction — like two ice skaters spinning clockwise as they skate clockwise around each other. But according to the work of astrophysicists such as Dong Lai and Bin Liu of Cornell University, interference from other objects, like a third body in a triple system, can tilt the black hole orbits so that their orbital axes and rotation axes are at an angle to each other. The effect is hard to measure directly with current technology, but researchers hope to find clever new ways to infer these rotation alignments.

So while it’s still too early to tell exactly how black holes get close enough to merge, researchers are holding the problem up as an exemplar of why gravitational wave detections are so important. “You don’t want to just have gravitational wave observations for their own sake,” said Ilya Mandel, an astrophysicist at Monash University in Australia. “You want to use them as probes to study things which are otherwise hard to understand and hard to measure directly.”

Categories: Science News

The deep recesses of the number line are not as forbidding as they might seem. That’s one consequence of a major new proof about how complicated numbers yield to simple approximations.

The proof resolves a nearly 80-year-old problem known as the Duffin-Schaeffer conjecture. In doing so, it provides a final answer to a question that has preoccupied mathematicians since ancient times: Under what circumstances is it possible to represent irrational numbers that go on forever — like pi — with simple fractions, like $latex \frac{22}{7}$? The proof establishes that the answer to this very general question turns on the outcome of a single calculation.

“There’s a simple criterion for whether you can approximate virtually every number or virtually no numbers,” said James Maynard of the University of Oxford, co-author of the proof with Dimitris Koukoulopoulos of the University of Montreal.

Mathematicians had suspected for decades that this simple criterion was the key to understanding when good approximations are available, but they were never able to prove it. Koukoulopoulos and Maynard were able to do so only after they reimagined this problem about numbers in terms of connections between points and lines in a graph — a dramatic shift in perspective.

“They had what I’d say was a great deal of self-confidence, which was obviously justified, to go down the path they went down,” said Jeffrey Vaaler of the University of Texas, Austin, who contributed important earlier results on the Duffin-Schaeffer conjecture. “It’s a beautiful piece of work.”

The Ether of Arithmetic

Rational numbers are the easy numbers. They include the counting numbers and all other numbers that can be written as fractions.

This amenability to being written down makes rational numbers the ones we know best. But rational numbers are actually rare among all numbers. The vast majority are irrational numbers, never-ending decimals that cannot be written as fractions. A select few are important enough to have earned symbolic representations, such as pi, *e* and $latex \sqrt{2}$. The rest can’t even be named. They are everywhere but untouchable, the ether of arithmetic.

So maybe it’s natural to wonder — if we can’t express irrational numbers exactly, how close can we get? This is the business of rational approximation. Ancient mathematicians, for instance, recognized that the elusive ratio of a circle’s circumference to its diameter can be well approximated by the fraction $latex \frac{22}{7}$. Later mathematicians discovered an even better and nearly as concise approximation for pi: $latex \frac{355}{113}$.

“It’s hard to write down what pi is,” said Ben Green of Oxford. “What people have tried to do is to find explicit approximations to pi, and one common way of doing that is with rationals.”

In 1837 the mathematician Gustav Lejeune Dirichlet found a rule for how well irrational numbers can be approximated by rational ones. It’s easy to find approximations so long as you’re not too particular about the error. But Dirichlet proved a straightforward relationship between fractions, irrational numbers and the errors separating the two.

He proved that for every irrational number, there exist infinitely many fractions that approximate the number ever more closely. Specifically, the error of each fraction is no more than 1 divided by the square of the denominator. So the fraction $latex \frac{22}{7}$, for example, approximates pi to within $latex \frac{1}{7^2}$, or $latex \frac{1}{49}$. The fraction $latex \frac{355}{113}$ gets within $latex \frac{1}{113^2}$, or $latex \frac{1}{12,769}$. Dirichlet proved that there is an infinite number of fractions that draw closer and closer to pi as the denominator of the fraction increases.
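
Dirichlet’s bound is easy to explore numerically. Here is a minimal Python sketch (the function name and the search cutoff of 120 are our own choices) that, for each denominator q, checks whether the best fraction p/q lands within 1/q² of pi:

```python
import math

def dirichlet_fractions(x, qmax):
    """Fractions p/q with q <= qmax whose error is below 1/q^2,
    the bound in Dirichlet's theorem."""
    hits = []
    for q in range(1, qmax + 1):
        p = round(x * q)  # the best numerator for this denominator
        if abs(x - p / q) < 1 / q**2:
            hits.append((p, q))
    return hits

for p, q in dirichlet_fractions(math.pi, 120):
    print(f"{p}/{q} = {p/q:.7f}, error {abs(math.pi - p/q):.1e} < 1/{q}^2")
```

Both 22/7 and 355/113 show up in the output, exactly as described above, and Dirichlet’s theorem guarantees that such denominators never run out, no matter how large the cutoff grows.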

“It’s a rather beautiful and remarkable thing that you can always approximate a real number by a fraction and the error is no more than 1 over the square of the denominator,” said Andrew Granville of the University of Montreal.

Dirichlet’s discovery was, in a sense, a narrow statement about rational approximation. It said that you can find infinitely many approximating fractions for each irrational number if your denominators can be any whole number, and if you’re willing to accept an error that’s 1 over the denominator squared. But what if you want your denominators to be drawn from some (still infinite) subset of the whole numbers, like all prime numbers, or all perfect squares? And what if you want your approximation error to be 0.00001, or any other values you might choose? Will you succeed at producing infinitely many approximating fractions under such specific conditions?

The Duffin-Schaeffer conjecture is an attempt to provide the most general possible framework for thinking about rational approximation. In 1941 the mathematicians R.J. Duffin and A.C. Schaeffer imagined the following scenario. First, choose an infinitely long list of denominators. This could be anything you want: all odd numbers, all numbers that are multiples of 10, or the infinite list of prime numbers.

Second, for each of the numbers in your list, choose how closely you’d like to approximate an irrational number. Intuition tells you that if you give yourself very generous error allowances, you’re more likely to be able to pull off the approximation. If you give yourself less leeway, it will be harder. “Any sequence can work provided you leave enough room,” Koukoulopoulos said.

Now, given the parameters you’ve set up — the numbers in your sequence and the defined error terms — you want to know: Can I find infinitely many fractions that approximate all irrational numbers?

The conjecture provides a mathematical function to evaluate this question. Your parameters go in as inputs. Its outcome could go one of two ways. Duffin and Schaeffer conjectured that those two outcomes correspond exactly to whether your sequence can approximate virtually all irrational numbers with the demanded precision, or virtually none. (It’s “virtually” all or none because for any set of denominators, there will always be a negligible number of outlier irrational numbers that can or can’t be well approximated.)

“You get virtually everything or you get virtually nothing. There’s no middle ground at all,” Maynard said.

It was an extremely general statement that tried to characterize the warp and weft of rational approximation. The criterion that Duffin and Schaeffer proposed felt correct to mathematicians. Yet proving that the binary outcome of this function is all you need in order to know whether your approximations work — that was much harder.

Double Counting

Proving the Duffin-Schaeffer conjecture is really about understanding exactly how much mileage you’re getting out of each of your available denominators. To see this, it’s useful to think about a scaled-down version of the problem.

Imagine that you want to approximate all irrational numbers between 0 and 1. And imagine that your available denominators are the counting numbers 1 to 10. The list of possible fractions is pretty long: First $latex \frac{1}{1}$, then $latex \frac{1}{2}$ and $latex \frac{2}{2}$, then $latex \frac{1}{3}$, $latex \frac{2}{3}$, $latex \frac{3}{3}$ and so on up to $latex \frac{9}{10}$ and $latex \frac{10}{10}$. Yet not all of these fractions are useful.

The fraction $latex \frac{2}{10}$ is the same as $latex \frac{1}{5}$, for example, and $latex \frac{5}{10}$ covers the same ground as $latex \frac{1}{2}$, $latex \frac{2}{4}$, $latex \frac{3}{6}$ and $latex \frac{4}{8}$. Prior to the Duffin-Schaeffer conjecture, a mathematician named Aleksandr Khinchin had formulated a similarly sweeping statement about rational approximation. But his theorem didn’t account for the fact that equivalent fractions should only count once.

“Usually something that’s first-grade mathematics shouldn’t make a difference to the solution,” Granville said. “But in this case surprisingly it did make a difference.”

So the Duffin-Schaeffer conjecture includes a term that calculates the number of unique fractions (also called reduced fractions) you get from each denominator. This term is called the Euler phi function after its inventor, the 18th-century mathematician Leonhard Euler. The Euler phi function of 10 is 4, since there are only four reduced fractions between 0 and 1 with 10 as a denominator: $latex \frac{1}{10}$, $latex \frac{3}{10}$, $latex \frac{7}{10}$ and $latex \frac{9}{10}$.
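
A brute-force version of the phi function (our own implementation, fine for small n) makes the count concrete:

```python
from math import gcd

def euler_phi(n):
    """Euler's phi: how many k in 1..n share no common factor with n.
    For n > 1 this counts the reduced fractions k/n between 0 and 1."""
    return sum(1 for k in range(1, n + 1) if gcd(k, n) == 1)

# The four reduced fractions with denominator 10 named in the text.
reduced = [k for k in range(1, 10) if gcd(k, 10) == 1]
print(euler_phi(10), reduced)  # 4 [1, 3, 7, 9]
```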

The next step is to figure out how many irrational numbers you can approximate with each of the reduced fractions. This depends on how much error you’re willing to accept. The Duffin-Schaeffer conjecture lets you choose an error for each of your denominators. So for fractions with denominator 7 you might set the allowable error to 0.02. With denominator 10 you might expect more and set it to 0.01.

Once you’ve identified your fractions and set your error terms, it’s time to go trawling for irrationals. Plot your fractions on the number line between 0 and 1 and picture the error terms as nets extending from either side of the fractions. You can say that all irrationals caught in the nets have been “well approximated” given the terms you set. The question — the big question — is: Just how many irrationals have you caught?

There are infinitely many irrational numbers contained in any interval on the number line, so the captured irrationals can’t be expressed as an exact number. Instead, mathematicians ask about the proportion of the total number of irrationals corralled by each fraction. They quantify these proportions using a concept called the “measure” of a set of numbers — which is like quantifying a catch of fish by total weight rather than number of fish.

The Duffin-Schaeffer conjecture has you add up the measures of the sets of irrational numbers captured by each approximating fraction. It represents this number as a large arithmetic sum. Then it makes its key prediction: If that sum goes off to infinity, then you have approximated virtually all irrational numbers; if that sum instead stops at a finite value, no matter how many measures you sum together, then you’ve approximated virtually no irrational numbers.

This question, of whether an infinite sum “diverges” to infinity or “converges” to a finite value, comes up in many areas of mathematics. The Duffin-Schaeffer conjecture’s main claim is that if you want to figure out whether you can approximate nearly all irrational numbers given a set of denominators and allowable error terms, this is the only feature you need to know: whether that infinite sum of measures diverges to infinity or converges to a finite value.
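
A toy numerical sketch illustrates the two regimes. The setup here is our own simplification: every denominator is allowed, and each denominator q contributes 2·φ(q)·ε(q) to the sum, the combined width of the φ(q) nets of half-width ε(q) cast by its reduced fractions:

```python
from math import gcd

def phi(n):
    """Euler's phi function by brute force."""
    return sum(1 for k in range(1, n + 1) if gcd(k, n) == 1)

def total_measure(eps, qmax):
    """Partial sum of net sizes up to qmax: phi(q) reduced fractions
    per denominator, each casting a net of width 2*eps(q)."""
    return sum(2 * phi(q) * eps(q) for q in range(1, qmax + 1))

for qmax in (100, 1000):
    print(qmax,
          total_measure(lambda q: 1 / q**2, qmax),   # keeps growing: diverges
          total_measure(lambda q: 1 / q**3, qmax))   # levels off: converges
```

With the generous allowance ε(q) = 1/q² the partial sums grow without bound, matching Dirichlet’s result that virtually every irrational is approximable to that precision; with the stingier ε(q) = 1/q³ they converge, and virtually no irrational can be approximated that well.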

“At the end of the day, no matter how you’ve decided the degree of approximation for each denominator, whether or not you’ve succeeded purely depends on whether the associated infinite sum diverges or not,” Vaaler said.

Plotting a Solution

You may be wondering: What if the numbers approximated by one fraction overlap with the numbers approximated by another fraction? In that case aren’t you double-counting when you add up the measures?

For some approximation sequences the double-counting problem isn’t significant. Mathematicians proved decades ago, for example, that the conjecture is true for approximation sequences composed of all prime numbers. But for many other approximation sequences the double-counting challenge is formidable. It’s why mathematicians were unable to solve the conjecture for 80 years.

The extent to which different denominators capture overlapping sets of irrational numbers is reflected in the number of prime factors the denominators have in common. Consider the numbers 12 and 35. The prime factors of 12 are 2 and 3. The prime factors of 35 are 5 and 7. In other words, 12 and 35 have no prime factors in common — and as a result, there isn’t much overlap in the irrational numbers that can be well approximated by fractions with 12 and 35 in the denominator.

But what about the denominators 12 and 20? The prime factors of 20 are 2 and 5, which overlap with the prime factors of 12. Likewise, the irrational numbers that can be approximated by fractions with denominator 20 overlap with the ones that can be approximated by fractions with denominator 12. The Duffin-Schaeffer conjecture is hardest to prove in situations like these — where the numbers in the approximating sequence have many small prime factors in common and there’s a lot of overlap between the sets of numbers each denominator approximates.

“When a lot of the denominators you have to choose from have a lot of small prime factors then they start to get in the way of each other,” said Sam Chow of Oxford.

The key to solving the conjecture has been to find a way to precisely quantify the overlap in the sets of irrational numbers approximated by denominators with many small prime factors in common. For 80 years no one could do it. Koukoulopoulos and Maynard got there by finding a completely different way to look at the problem.

In their new proof, they create a graph out of their denominators — plotting them as points and connecting the points with a line if they share a lot of prime factors. The structure of this graph encodes the overlap between the irrational numbers approximated by each denominator. And while that overlap is hard to assay directly, Koukoulopoulos and Maynard found a way to analyze the structure of the graph using techniques from graph theory — and the information they cared about fell out from there.

“The graph is a visual aid, it’s a very beautiful language in which to think about the problem,” Koukoulopoulos said.

Koukoulopoulos and Maynard proved that the Duffin-Schaeffer conjecture is indeed true: If you’re handed a list of denominators with allowable error terms, you can determine whether you can approximate virtually all irrational numbers or virtually none just by checking whether the corresponding sum of the measures around each fraction diverges to infinity or converges to a finite value.

It’s an elegant test that takes a vast question about the nature of rational approximation and boils it down to a single calculable value. By proving that the test holds universally, Koukoulopoulos and Maynard have achieved one of the rarest feats in mathematics: They’ve given a final answer to a foundational concern in their field.

“Their proof is a necessary and sufficient result,” Green said. “I suppose this marks the end of a chapter.”


In 2015, the poet-turned-mathematician June Huh helped solve a problem posed about 50 years earlier. The problem was about complex mathematical objects called “matroids” and combinations of points and lines, or graphs. But it was also a question about polynomials — those familiar expressions from math class involving sums of variables raised to different powers.

At some point in school you were probably asked to combine, factor and simplify polynomials. For example, you may remember that $latex x^2 + 2xy + y^2 = (x + y)^2$. That’s a neat algebra trick, but what is it actually good for? It turns out that polynomials excel at uncovering hidden structure, a fact Huh used to great effect in his proof. Here’s a simple puzzle that illustrates how.

Suppose a game requires seating two teams of players at a square table. To prevent cheating, you avoid seating players next to someone from their own team. How many different ways can the teams be seated?

Let’s start by imagining the players as red and blue. Suppose a red player is seated at the top of our diagram, as shown here.

There are two seats adjacent to the top spot — on the right and left — so to satisfy our rule, those seats must both be taken by blue players.

The one remaining seat, at the bottom, is adjacent to two blue players, so a red player must sit there.

Since no player is sitting next to a teammate, our constraint is satisfied.

We also could have started with a blue player at the top. Similar reasoning leads to the following arrangement.

Again, no player is seated next to a player from the same team. Our constraint is satisfied, so this is another possible seating. In fact, these are the only two possible seatings. Once we choose a color for the top seat, the colors of all the other seats are completely determined.

There’s a way we can see that there are only two possible seatings without drawing all the different pictures. Let’s start at the top: We have two options, red or blue, for that seat. Once we make that choice, we have one option (the other color) for both the right and the left seats. Now, for the bottom seat, there is again only one option: the color we started with. Using the “fundamental counting principle,” we know that the total number of possibilities is the product of the number of possibilities for each choice. That gives 2 × 1 × 1 × 1 = 2 total seatings, just as we determined with our diagrams.

Now let’s add a third team with a third color. Imagine there are red, blue and yellow players. How many different seatings are possible if adjacent seats can’t be the same color? Drawing all the possibilities would take a lot of diagrams, so let’s try our counting argument instead.

There are now three colors to choose from for the top. Once that choice is made, we can choose either of the two remaining colors for the right and left spots.

What happens at the bottom of our square? It’s tempting to say there is only one option for the last seat, since it is adjacent to both the left and right seats. But do you see the problem with that reasoning?

It’s true that if the left and right are different colors, there is only one option for the bottom seat. If, for example, left is blue and right is red, then the bottom must be yellow. But what if left and right are the same color? In that case, there will be two remaining options for the bottom. This final choice is dependent upon the previous choices, and that complicates our counting.

We must consider two separate cases: when the left and right colors are the same, and when the left and right colors are different.

If the left and right colors are the same, the number of possibilities for each side looks like this:

First, we have three options for the top. Next, there are two remaining options for the right. Since we are assuming the left and right seats have the same color, we have only one option for the left: the color we used for the right. Finally, since left and right are the same color, we can choose either of the two other colors for the bottom seat. This gives us 3 × 2 × 1 × 2 = 12 possible seatings.

Now let’s look at the possibilities when the left and right seats are different colors:

Again we have three options for the top and two options for the right. For the left, we still have only one option, but for a different reason: It can’t be the same color as the top, which is adjacent, and it can’t be the same color as the right, by assumption. And since the right and the left are two different colors, that leaves only one color for the bottom (the same color as the top). This case yields 3 × 2 × 1 × 1 = 6 possible seatings.

Since these two scenarios cover all possibilities, we just add them to get 12 + 6 = 18 total possible seatings.
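
Both counts (2 seatings for two teams, 18 for three) can be verified by brute force. A minimal Python sketch, with the seat numbering our own, simply tries every assignment of colors to the four seats:

```python
from itertools import product

# Seats around the square, numbered 0-3; adjacent pairs sit side by side.
ADJACENT = [(0, 1), (1, 2), (2, 3), (3, 0)]

def count_seatings(q):
    """Try all q**4 colorings and keep those where no adjacent seats match."""
    return sum(
        all(colors[a] != colors[b] for a, b in ADJACENT)
        for colors in product(range(q), repeat=4)
    )

print(count_seatings(2), count_seatings(3))  # 2 18
```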

Adding a third color complicated our puzzle, but our hard work will be rewarded. We can now use this strategy for 4, 5, or any *q* different colors.

Regardless of how many colors we have to choose from, there will always be two cases to consider: the left and right being the same color, and the left and right being different colors. Suppose we have *q* different colors to choose from. Here’s the diagram showing the number of options for each side when the left and right are the same color:

To begin, we have *q* colors to choose from for the top, and *q* – 1 to choose from for the right. Since we are assuming that the left and right are the same color, we have only one option for the left. That leaves *q* – 1 options for the bottom, which can be any color other than the one at the left and right. The fundamental counting principle gives us a total of $latex q \times (q-1) \times 1 \times (q-1) = q(q-1)^2$ possible seatings.

If the right and left are colored differently, we can enumerate our possibilities like this:

Again we have *q* options for the top and *q* – 1 for the right. Now, the left can’t be the same as the top or the right, so we have *q* – 2 options there. And the bottom can be any color that isn’t one of the two different colors used at the right or left, again leaving *q* – 2 options. This gives us a total of $latex q \times (q-1) \times (q-2) \times (q-2) = q(q-1)(q-2)^2$ possible seatings. Since these two situations cover all the possibilities, we add them together to find the total number of possible seatings, just as before: $latex q(q-1)^2 + q(q-1)(q-2)^2$.

This expression may seem like a strange answer to the question “How many ways can we seat different teams at a square table so that no two teammates are sitting next to each other?” But this polynomial conveys a lot of information about our problem. Not only does it tell us the numeric answers we want, it also uncovers some of the structure underlying our puzzle.

This particular polynomial is called a “chromatic polynomial,” because it answers the question: How many ways can you color the nodes of a network (or graph) so that no two adjacent nodes are the same color?

Our problem was initially about seating teams around a table, but we can easily turn it into a question about coloring the nodes of a graph. Instead of thinking about people around a table like this

we think of the people as nodes and then connect them with an edge if they are sitting next to each other.

Now, every coloring of the nodes of the graph can be thought of as a seating around the square table, where “sitting adjacent to someone” at the table is now “being connected by an edge” on the graph.

Now that we have represented our puzzle as a graph, let’s return to its chromatic polynomial. We’ll call it *P*(*q*).

$latex P(q) = q(q-1)^2 + q(q-1)(q-2)^2$

The great thing about this polynomial is that it answers our graph coloring question for all possible numbers of colors.

For example, to answer the question for three colors, we let *q* = 3, giving us:

$latex P(3) = 3(3-1)^2 + 3(3-1)(3-2)^2 = 3 \times 2^2 + 3 \times 2 \times 1^2 = 12 + 6 = 18$

This is precisely the answer we found for the three-team puzzle above. And when we let *q* = 2:

$latex P(2) = 2(2-1)^2 + 2(2-1)(2-2)^2 = 2 \times 1^2 + 2 \times 1 \times 0^2 = 2 + 0 = 2$

Look familiar? This is the answer to our original puzzle with only two different teams. We can find the answer for four, five or even 10 different teams, just by plugging in the appropriate value for *q*: *P*(4) = 84, *P*(5) = 260 and *P*(10) = 6,570. The chromatic polynomial has captured some fundamental structure of the problem by generalizing our counting strategy.
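
Plugging in values is a one-line function; a minimal sketch (ours) reproduces every number quoted above:

```python
def P(q):
    # Chromatic polynomial of the square table, as derived in the text.
    return q * (q - 1)**2 + q * (q - 1) * (q - 2)**2

print([P(q) for q in (2, 3, 4, 5, 10)])  # [2, 18, 84, 260, 6570]
```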

We can expose more structure by doing a little algebra with our polynomial $latex P(q) = q(q-1)^2 + q(q-1)(q-2)^2$:

$latex \begin{align*}
P(q) &= q(q-1)(q-1)+q(q-1)(q-2)^2\\
&= q(q-1)((q-1)+(q-2)^2)\\
&= q(q-1)(q-1+q^2-4q+4)\\
&= q(q-1)(q^2-3q+3)
\end{align*}$

Here we have factored *q*(*q* – 1) out of each part of our sum and then combined like terms, putting the polynomial into a “factored form,” where it is expressed as a product. And in this factored form, a polynomial can tell us about structure through its “roots.”

The roots of a polynomial are the inputs that yield zero as an output. And the factored form of a polynomial makes finding the roots easy: Since the polynomial is expressed as a product of factors, any input that makes one of the factors zero will make the whole product, and thus the polynomial, zero.

For example, our polynomial $latex P(q) = q(q-1)(q^2 - 3q + 3)$ has a factor of (*q* – 1). If we let *q* = 1, this factor becomes zero, which makes the entire product zero. That is, $latex P(1) = 1(1-1)(1^2 - 3 \times 1 + 3) = 1 \times 0 \times 1 = 0$. Similarly, $latex P(0) = 0 \times (-1) \times 3 = 0$. So *q* = 1 and *q* = 0 are roots of our polynomial. (You might be wondering about $latex (q^2 - 3q + 3)$. Since no real number makes this factor zero, it contributes no real roots to our chromatic polynomial.)

These algebraic roots have meaning in our graph. If we had only one color to choose from, every node would have to be the same color. It would be impossible to color the graph so that no two adjacent nodes were the same color. But this is exactly what it means for *q* = 1 to be a root of the chromatic polynomial. If *P*(1) = 0, then there are zero ways to color the graph so that adjacent nodes are not the same color. The same is true if we had zero colors to work with: *P*(0) = 0. The roots of the chromatic polynomial are telling us about the structure of our graph.

The power to see this structure algebraically becomes even more apparent when we start looking at other graphs. Let’s examine the triangular graph below.

How many ways are there to color this graph using *q* colors so that no two adjacent nodes have the same color?

As usual, there are *q* and *q* – 1 options for the first two adjacent nodes. And since the remaining node is adjacent to the first two, it must be a different color than both, leaving *q* – 2 options. This makes the chromatic polynomial of this triangular graph: *P*(*q*) = *q*(*q* – 1)(*q* – 2).

In its factored form, this chromatic polynomial tells us something interesting: *q* = 2 is a root. And if *P*(2) = 0, it must be impossible to color this graph with two colors so that no two adjacent nodes are the same. Is that really true?

Well, imagine working your way around the loop of the triangle, coloring the nodes as you go. With only two colors to choose from, you must alternate colors as you pass each node: If the first is red, then the second must be blue, which then means the third one must be red. But the first and third nodes are adjacent, so they can’t both be red. Two colors aren’t enough, just as the polynomial predicted.

Using this alternating argument, you can deduce a powerful generalization: The chromatic polynomial of any loop with an odd number of nodes must have 2 as a root. This is because if you alternate between two colors over a loop of odd length, the first and last nodes you color will always be the same. But since it’s a loop, they are adjacent. It can’t be done.

For example, we can use various techniques to determine that a loop with five nodes has the chromatic polynomial *P*(*q*) = *q*⁵ – 5*q*⁴ + 10*q*³ – 10*q*² + 4*q*. When we put this into factored form, it becomes *P*(*q*) = *q*(*q* – 1)(*q* – 2)(*q*² – 2*q* + 2). And as expected, we see that *q* = 2 is a root, and so *P*(2) = 0. Remarkably, once we establish these connections between the graphs and their polynomials, the insights cut both ways: Polynomials can tell us about the structure of graphs, and graphs can tell us about the structure of their associated polynomials.
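A standard closed form for the chromatic polynomial of a loop with *n* nodes, *P*(*q*) = (*q* – 1)ⁿ + (–1)ⁿ(*q* – 1), is not derived in this article, but it makes the odd-loop claim easy to verify in a few lines of Python:

```python
# Standard closed form for the chromatic polynomial of a loop (cycle)
# with n nodes: P(q) = (q - 1)^n + (-1)^n * (q - 1).
def cycle_P(n, q):
    return (q - 1)**n + (-1)**n * (q - 1)

# Every odd loop has 2 as a root; even loops can be two-colored.
for n in range(3, 12):
    if n % 2 == 1:
        assert cycle_P(n, 2) == 0
    else:
        assert cycle_P(n, 2) > 0

# For n = 5 the closed form matches q^5 - 5q^4 + 10q^3 - 10q^2 + 4q.
for q in range(10):
    assert cycle_P(5, q) == q**5 - 5*q**4 + 10*q**3 - 10*q**2 + 4*q
```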

It was a search for structure that led June Huh to prove Read’s 40-year-old conjecture about chromatic polynomials. The conjecture says that when we list the coefficients of a chromatic polynomial in order and ignore their signs, a particular condition is satisfied: Namely, the square of any coefficient must be at least the product of the two adjacent coefficients. For example, in the chromatic polynomial of our five-node loop, *P*(*q*) = *q*⁵ – 5*q*⁴ + 10*q*³ – 10*q*² + 4*q*, we see that 5² ≥ 1 × 10, 10² ≥ 5 × 10 and 10² ≥ 10 × 4. One thing this shows is that not every polynomial could be a chromatic polynomial: There is a deeper structure imposed on chromatic polynomials by their connections to graphs. What’s more, the connections between these polynomials and other fields led Huh and his collaborators to settle a much broader open question, Rota’s conjecture, a few years after proving Read’s conjecture.
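The inequality can be checked mechanically. A small sketch of this log-concavity test, applied to the unsigned coefficients of the five-node loop's polynomial:

```python
# Read's condition on the unsigned coefficients of a chromatic polynomial:
# each inner coefficient squared is at least the product of its neighbors.
def is_log_concave(a):
    return all(a[i]**2 >= a[i - 1] * a[i + 1] for i in range(1, len(a) - 1))

# Unsigned coefficients of q^5 - 5q^4 + 10q^3 - 10q^2 + 4q (the five-node loop).
coeffs = [1, 5, 10, 10, 4]
print(is_log_concave(coeffs))  # True
```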

Polynomials may be best known in their worst incarnation: as abstract exercises in the formal manipulation of algebraic expressions. But polynomials and their features — their roots, their coefficients, their various forms — help expose structure in surprising places, creating connections to the algebra all around us.

Exercises

1. A “complete” graph is a graph in which there is an edge between every pair of vertices. Find the chromatic polynomial of the complete graph on five vertices.

Answer 1:

Since every vertex is adjacent to every other vertex, five different colors must be used. We can use our counting argument to find the chromatic polynomial *P*(*q*) = *q*(*q* – 1)(*q* – 2)(*q* – 3)(*q* – 4). What would this look like for a complete graph on *n* vertices?
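As a sanity check (not part of the original answer), the falling-factorial formula can be compared against a brute-force count for the complete graph on five vertices:

```python
from itertools import product
from math import prod

# Chromatic polynomial of the complete graph K_n is the falling factorial
# q(q - 1)(q - 2)...(q - n + 1): each new node must avoid all earlier colors.
def complete_P(n, q):
    return prod(q - i for i in range(n))

# Brute-force check for K_5 with q = 5: the proper colorings are exactly
# the 5! = 120 orderings of the five colors.
edges = [(i, j) for i in range(5) for j in range(i + 1, 5)]
count = sum(
    all(c[u] != c[v] for u, v in edges)
    for c in product(range(5), repeat=5)
)
assert count == complete_P(5, 5) == 120
```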

2. Find the chromatic polynomial of the graph below. (Hint: Use what you know about chromatic polynomials of simpler graphs.)

Answer 2:

This graph is a four-node loop connected to a three-node loop. We start our counting argument with *q* choices for the middle node. If we proceed to the left, we’ll find the chromatic polynomial of a four-node loop, which is *P*(*q*) = *q*(*q* – 1)(*q*² – 3*q* + 3). If we proceed to the right, we’ll find the chromatic polynomial of a three-node loop, which is *P*(*q*) = *q*(*q* – 1)(*q* – 2). Taking into consideration their common node with *q* choices, we can combine the results to get *P*(*q*) = *q*(*q* – 1)(*q*² – 3*q* + 3)(*q* – 1)(*q* – 2) = *q*(*q* – 1)²(*q* – 2)(*q*² – 3*q* + 3).
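A brute-force check of this combined polynomial, with the shared middle node numbered 0 (a sketch, not part of the original answer):

```python
from itertools import product

# Brute-force check of the combined polynomial for a four-node loop and a
# three-node loop sharing one node (node 0 is the shared middle node).
edges = [(0, 1), (1, 2), (2, 3), (3, 0),  # four-node loop
         (0, 4), (4, 5), (5, 0)]          # three-node loop

def count_colorings(q):
    return sum(
        all(c[u] != c[v] for u, v in edges)
        for c in product(range(q), repeat=6)
    )

for q in range(1, 5):
    assert count_colorings(q) == q * (q - 1)**2 * (q - 2) * (q**2 - 3*q + 3)
```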

3. A graph is called “bipartite” if its vertices can be divided into two groups, *A* and *B*, so that the vertices in *A* are adjacent only to vertices in *B*, and the vertices in *B* are adjacent only to vertices in *A*. Suppose a graph *G* has chromatic polynomial *P*(*q*). What fact about *P*(*q*) would allow you to conclude that *G* is bipartite?

Answer 3:

First notice that a graph is bipartite if and only if it is two-colorable. This means that, using only two colors, we can color the vertices of the graph so that no two adjacent vertices have the same color. If a graph is bipartite, we simply color the two different groups of vertices different colors. And if a graph is two-colorable, the coloring of the graph naturally determines the two groups. Thus, being bipartite is essentially the same as being two-colorable. And if a graph is two-colorable, that means there is at least one way to color the graph using two colors. So if *P*(*q*) is the chromatic polynomial of the graph, it must be the case that *P*(2) > 0. Similarly, the famous four-color theorem can also be stated in terms of chromatic polynomials.
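The equivalence is easy to probe computationally. This sketch counts proper two-colorings directly (so the count plays the role of *P*(2)) for one bipartite and one non-bipartite graph:

```python
from itertools import product

# A graph is bipartite exactly when it has a proper two-coloring,
# i.e. when its chromatic polynomial satisfies P(2) > 0.
def two_colorings(edges, n):
    # brute-force value of P(2): the number of proper 2-colorings
    return sum(
        all(c[u] != c[v] for u, v in edges)
        for c in product(range(2), repeat=n)
    )

square = [(0, 1), (1, 2), (2, 3), (3, 0)]  # even loop: bipartite
triangle = [(0, 1), (1, 2), (0, 2)]        # odd loop: not bipartite

print(two_colorings(square, 4) > 0)    # True
print(two_colorings(triangle, 3) > 0)  # False
```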


Categories: Science News

The developing embryo is a finely tuned machine. Its cells know what to do, and when to do it. They know to grow or shrink, to divide or lie dormant, to come together into a beating heart or hurtle through the bloodstream in search of a distant invader. And they know to do all that without a central command station or an objective map of their surroundings to guide them.

Instead, cells are left to devise their own strategies for calculating precisely where to go and what to become. Those calculations depend on a veritable cocktail of signals, some of which have long been established as obviously important — chemical and electrical gradients, the activity of gene networks, patterns of overlap between spreading fields of molecules.

But recently experts have also started to pay attention to another, often overlooked set of factors: physical constraints such as size. In new work published today in *Nature Physics*, a team of researchers reported that during the early development of the roundworm *Caenorhabditis elegans*, a mechanism based on the size of embryonic cells helps to determine the type of mature tissues they will eventually produce.

While examining the biochemical process that triggers cells to divide either asymmetrically or symmetrically, the scientists discovered that size was the ruling element — namely, that the size of the cells dictated the pattern that led to one kind of division or another, and ultimately to one kind of lineage or another. “The biology is actually exploiting this fact … to generate a set of outcomes that are required for the development of the organism,” said Martin Howard, a computational biologist at the John Innes Center in England who did not participate in the study. In this case, cells used innate constraints on their size to specify the lineage that would later give rise to the worm’s sex cells. But more broadly, the findings also point to the possibility of a role for physical cues in the behavior of stem cells and the operation of other developmental systems.

Broken Symmetries

Just as the universe was born from the breaking of symmetry, so are each of the animals and plants that inhabit the Earth. During early embryonic development, cells undergo at least one asymmetric division and sometimes several more: They produce daughter cells that differ in size and fate, laying the foundation for the later specification of various distinct cell types. To cement these budding lineages, and to stop creating new ones, the cells then have to shift gears and start dividing symmetrically.

For instance, when the worm embryo is still a single cell, proteins on its outer membrane create two uneven, yin-and-yang-like domains that tell the cell where to split. That system for designating asymmetric cell division is called polarity. One lineage of embryonic cells in *C. elegans* (known as the P lineage) uses polarity to divide asymmetrically four times; the fifth division is then symmetric, permanently establishing the germline responsible for egg and sperm cells.

This polarization system resembles the Turing-pattern model for the mechanisms that may guide the formation of spots or stripes. A spot may form on a leopard, for example, because one activator molecule diffuses through skin tissue and stimulates the production of pigment while an inhibitor molecule suppresses pigmentation in the surrounding region. The size and distribution of the spots therefore depends on kinetic factors such as how quickly each of the molecules diffuses.

That’s what happens with polarity, too. Two proteins that exclude each other activate on the cell membrane at opposite ends of the cell, so that where one is present, the other can’t diffuse. They move at different rates, and at the border between their two asymmetric domains, the cell divides. One daughter remains part of the P lineage, while the other is destined for another fate. As with Turing patterns, the system works because it strikes a careful balance between its size and how quickly the proteins spread.

Nathan Goehring, a molecular biologist at the Francis Crick Institute in London, wanted to probe that balance. He and his colleagues played around with previously published polarization models, testing what would happen when they made their theoretical cells larger or smaller. Their simulations indicated that if a cell got too big, more than two protein domains would emerge, leading to loss of polarity.

A more interesting result was that when the cell got too small, only one domain dominated, uniformly diffusing across the membrane. Again, polarity broke down, this time leaving symmetric division as the only option. The threshold cell circumference at which that happened hovered around 41 microns.

To Goehring, that figure looked familiar. In an early *C. elegans* embryo, cells don’t grow between divisions, meaning that as they divide, they get smaller and smaller. And 41 microns was strikingly similar to the size of the last cells to divide asymmetrically in the P lineage. Could it be that the daughters of those cells were simply too small to polarize — that size was the determining factor in the switch to symmetric division and the designation of their fate?

To find out, the scientists measured the kinetic properties of an important polarizing protein in normal *C. elegans* embryos and in embryos whose sizes they had genetically manipulated. As expected, the protein’s diffusion rate and other qualities did not change, even when the cells got bigger or smaller. Instead, the patterning system had its own intrinsic scale, one that didn’t adjust to the overall size of the cell.

By controlling the sizes of the initial embryos, the team was then able to show that there was a minimum size threshold for the P lineage cells, below which they could not set up the polarization pattern. Those smaller cells lost the ability to polarize after just three cell divisions, not four. “Just by manipulating the size of the embryo, we’ve taken a cell that normally would be able to polarize and divide asymmetrically and turned it into a cell that doesn’t polarize and divides symmetrically,” Goehring said.

Moreover, a perusal of previous research revealed that two other worm species have one extra asymmetric division in their P lineage. Their P lineage cells tend to start bigger (and stay bigger) than those of the early *C. elegans* embryo, in keeping with Goehring’s theory. Whether the same mechanism is truly at work in those species remains to be tested, but it doesn’t seem like a coincidence.

Cells have seemingly evolved to take advantage of the intrinsic limitations of their patterning process — using it as a ruler of sorts — to determine whether to become germ cells. “The specification is a kind of self-organized property of the patterning system,” Howard said.

And that’s a “genuinely interesting” way to think about the system, said Timothy Saunders, a biophysicist at the Mechanobiology Institute of the National University of Singapore who was not involved in the study. “This idea, that by just simply making things smaller you can naturally switch the type of division, is very neat.”

A New Perspective

These findings come at a time when scientists are widening their view of what controls biological systems to encompass more than genetics alone. “The gene does not exist in a vacuum,” Saunders said. “And we’re realizing more and more that the mechanical environment in which those genes are operating matters” — including for decisions about cell fate.

Researchers have found, for instance, that cancer cells respond to the stiffness of the surrounding tissue and other environmental factors. Stem cells, when subjected to certain physical forces, can also be induced to change their behavior and fate. And self-organizing models of tissues called organoids can’t grow properly on a flat dish. “I think our data nicely fits into that idea, that the physical environment matters, that the cells are measuring these things,” Goehring said. “In this case … the cells are sensing size.”

While this work focused on a particular cell lineage at a particular stage of development in a particular species, “it’s something that could have more general resonance,” Howard said. The polarity network in the P lineage is strongly conserved among animal species, including in humans. But because mammalian cells lack a “nice clean topology,” this will be particularly challenging to study, according to Saunders — though he expects it to still be relevant in those systems.

Other processes that involve cells getting smaller with each division might also be using size to make decisions about fate. Neural stem cells in flies and certain plant cells, for instance, shrink with each division until they stop dividing altogether. “While we don’t know that there’s a strict size sensor in any of those systems,” Goehring said, “it’s sort of consistent with this idea that there may be size-dependent switches in fate.”

The same goes for stem cells in the mammalian gut, which divide rapidly in spatially constrained crypts. They need to choose when to divide asymmetrically (into one stem cell and one specialized cell) or symmetrically (into either two stem cells or two specialized cells). That choice is critical for maintaining stem cell populations in the organism — and it’s not always clear how it’s getting made.

Perhaps cell size will once more turn out to play a role. “I think that idea is something that’s going to be universal,” Goehring said.

There are already hints of that. Cells seem to build specific numbers of organelles, in specific proportions, by using their cytoplasm as a limiting pool of gradually depleted building blocks. And in a paper published in *Developmental Cell* in June, a team of researchers proposed a model for how the embryo’s genome gets activated after fertilization. According to their work, that happens only after cells reach a certain size threshold: As the early frog embryo divides, its cells get smaller, and it has less cytoplasm relative to its DNA. As the concentration of a particular type of DNA-condensing protein decreases, it frees up more and more of the genome to be expressed, until finally transcription gets turned on.

Of course, in all this work, questions remain, particularly about how the systems stay resilient to natural variations in cell size, and how size might affect differentiation much later in development. Still, Goehring said, it’s now crucial to turn to processes “that maybe we haven’t looked at because we haven’t been thinking about size.”

“We need these sorts of theoretical ideas in order to unlock how [this] is working,” Howard said.


In 1998, two teams of cosmologists observed dozens of distant supernovas and inferred that they’re racing away from Earth faster and faster all the time. This meant that — contrary to expectations — the expansion of the universe is accelerating, and thus the fabric of space must be infused with a repulsive “dark energy” that comprises more than two-thirds of everything. For this discovery, the team leaders, Saul Perlmutter of the Supernova Cosmology Project and Brian Schmidt and Adam Riess of the High-Z Supernova Search Team, won the 2011 Nobel Prize in Physics.

Fast forward to July of this year.

On a Monday morning three weeks ago, many of the world’s leading cosmologists gathered in Santa Barbara, California, to discuss a major predicament. Riess, now 49, strolled to the front of a seminar room to give the opening talk. A bulldog of a man in a short-sleeved box-check shirt, Riess laid out the evidence, gathered by himself and others, that the universe is currently expanding too fast — faster than theorists predict when they extrapolate from the early universe to the present day. “If the late and early universe don’t agree, we have to be open to the possibility of new physics,” he said.

At stake is the standard theory of the cosmos that has reigned since the discovery of dark energy. The theory, called ΛCDM, describes all the visible matter and energy in the universe, along with dark energy (represented by the Greek letter Λ, or lambda) and cold dark matter (CDM), showing how they evolve according to Albert Einstein’s theory of gravity. ΛCDM perfectly captures features of the early universe — patterns best seen in ancient microwaves coming from a critical moment when the cosmos was 380,000 years old. Since the Planck Space Telescope’s first map of this “cosmic microwave background” was released in 2013, scientists have been able to precisely infer a distance scale in the young universe and use ΛCDM to fast-forward from the 380,000-year-mark to now, to predict the current rate of cosmic expansion — known as the Hubble constant, or H0.

The Planck team predicts that the universe should expand at a rate of 67.4 kilometers per second per megaparsec. That is, as you look farther into space, space should be receding 67.4 kilometers per second faster for each megaparsec of distance, just as two Sharpie marks on an expanding balloon separate faster the farther apart they are. Measurements of other early-universe features called “baryon acoustic oscillations” yield exactly the same prediction: H0 = 67.4. Yet observations of the actual universe by Riess’s team have suggested for six years that the prediction is off.
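Hubble’s law is just the linear relation *v* = *H*₀ × *d*, so the two camps’ numbers diverge more the farther out you look. A small illustrative calculation (the *H*₀ values are those quoted here; the distances are arbitrary):

```python
# Hubble's law: recession speed v = H0 * d. Compare the two competing
# values of H0 from the article (in km/s per megaparsec) at a few
# arbitrary distances.
H0_planck = 67.4  # predicted from the early universe (Planck)
H0_sh0es = 74.0   # measured via the cosmic distance ladder (SH0ES)

for d_mpc in (1, 10, 100):
    gap = (H0_sh0es - H0_planck) * d_mpc
    print(f"{d_mpc:>4} Mpc: predicted speeds differ by {gap:.1f} km/s")
```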

That July morning in a room with an obstructed view of the Pacific, Riess seemed to have a second Nobel Prize in his sights. Among the 100 experts in the crowd — invited representatives of all the major cosmological projects, along with theorists and other interested specialists — nobody could deny that his chances of success had dramatically improved the Friday before.

Ahead of the conference, a team of cosmologists calling themselves H0LiCOW had published their new measurement of the universe’s expansion rate. By the light of six distant quasars, H0LiCOW pegged H0 at 73.3 kilometers per second per megaparsec — significantly higher than Planck’s prediction. What mattered was how close H0LiCOW’s 73.3 fell to measurements of H0 by SH0ES — the team led by Riess. SH0ES measures cosmic expansion using a “cosmic distance ladder,” a stepwise method of gauging cosmological distances. SH0ES’ latest measurement in March pinpointed H0 at 74.0, well within H0LiCOW’s error margins.

“My heart was aflutter,” Riess told me, of his early look at H0LiCOW’s result two weeks before Santa Barbara.

For six years, the SH0ES team claimed that it had found a discrepancy with predictions based on the early universe. Now, the combined SH0ES and H0LiCOW measurements have crossed a statistical threshold known as “five sigma,” which typically signifies a discovery of new physics. If the Hubble constant is not 67 but actually 73 or 74, then ΛCDM is missing something — some factor that speeds up cosmic expansion. This extra ingredient added to the familiar mix of matter and energy would yield a richer understanding of cosmology than the rather bland ΛCDM theory provides.

During his talk, Riess said of the gulf between 67 and 73, “This difference appears to be robust.”

“I know we’ve been calling this the ‘Hubble constant tension,’” he added, “but are we allowed yet to call this a problem?”

He put the question to fellow Nobel laureate David Gross, a particle physicist and the former director of the Kavli Institute for Theoretical Physics (KITP), where the conference took place.

“We wouldn’t call it a tension or problem, but rather a crisis,” Gross said.

“Then we’re in crisis.”

To those trying to understand the cosmos, a crisis is the chance to discover something big. Lloyd Knox, a member of the Planck team, spoke after Riess. “Maybe the Hubble constant tension is the exciting breakdown of ΛCDM that we’ve all been, or many of us have been, waiting and hoping for,” he said.

The Hubble Constant Surd

When talks ended for the day, many attendees piled into a van bound for the hotel. We drove past palm trees with the ocean on the right and the Santa Ynez Mountains to the distant left. Wendy Freedman, a decorated Hubble constant veteran, perched in the second row. A thin, calm woman of 62, Freedman led the team that made the first measurement of H0 to within 10% accuracy, arriving at a result of 72 in 2001.

The driver, a young, bearded Californian, heard about the Hubble trouble and the issue of what to call it. Instead of tension, problem or crisis, he suggested “surd,” meaning nonsensical or irrational. The Hubble constant surd.

Freedman, however, seemed less giddy than the average conferencegoer about the apparent discrepancy and wasn’t ready to call it real. “We have more work to do,” she said quietly, almost mouthing the words.

Freedman spent decades improving H0 measurements using the cosmic distance ladder method. For a long time, she calibrated her ladder’s rungs using cepheid stars — the same pulsating stars of known brightness that SH0ES also uses as “standard candles” in its cosmic distance ladder. But she worries about unknown sources of error. “She knows where all the skeletons are buried,” said Barry Madore, Freedman’s white-whiskered husband and close collaborator, who sat up front next to the driver.

Freedman said that’s why she, Madore and their Carnegie-Chicago Hubble Program (CCHP) set out several years ago to use “tip of the red giant branch” stars (TRGBs) to calibrate a new cosmic distance ladder. TRGBs are what stars like our sun briefly turn into at the end of their lives. Bloated and red, they grow brighter and brighter until they reach a characteristic peak brightness caused by the sudden igniting of helium in their cores. Freedman, Madore and Myung Gyoon Lee first pointed out in 1993 that these peaking red giants can serve as standard candles. Now Freedman had put them to work. As we unloaded from the van, I asked her about her scheduled talk. “It’s the second talk after lunch tomorrow,” she said.

“Be there,” said Madore, with a gleam in his eye, as we parted ways.

When I got to my hotel room and checked Twitter, I found that everything had changed. Freedman, Madore and their CCHP team’s paper had just dropped. Using tip-of-the-red-giant-branch stars, they’d pegged the Hubble constant at 69.8 — notably short of SH0ES’ 74.0 measurement using cepheids and H0LiCOW’s 73.3 from quasars, and more than halfway to Planck’s 67.4 prediction. “The Universe is just messing with us at this point, right?” one astrophysicist tweeted. Things were getting surd.

Dan Scolnic, a bespectacled young member of SH0ES based at Duke University, said that he, Riess and two other team members had gotten together, “trying to figure out what was in the paper. Adam and I then went out to dinner and we were pretty perplexed, because in what we had seen up to this point, the cepheids and TRGBs were in really good agreement.”

They soon homed in on the key change in the paper: a new way of measuring the effects of dust when gauging the intrinsic brightness of TRGBs — the first rung of the cosmic distance ladder. “We had a bunch of questions about this new method,” Scolnic said. Like other participants scattered throughout the Best Western Plus, they eagerly awaited Freedman’s talk the next day. Scolnic tweeted, “Tomorrow is going to be interesting.”

To Build a Distance Ladder

*Tension*, *problem*, *crisis*, *surd* — there has been a Hubble constant *something* for 90 years, ever since the American astronomer Edwin Hubble’s plots of the distances and recessional speeds of galaxies showed that space and everything in it is receding from us (Hubble’s own refusal to accept this conclusion notwithstanding). One of the all-time greatest cosmological discoveries, cosmic expansion implies that the universe has a finite age.

The ratio of an object’s recessional speed to its distance gives the Hubble constant. But whereas it’s easy to tell how fast a star or galaxy is receding — just measure the Doppler shift of its frequencies, an effect similar to a siren dropping in pitch as the ambulance drives away — it’s far harder to tell the distance of a pinprick of light in the night sky.

It was Henrietta Leavitt, one of the human “computers” at the Harvard College Observatory, who discovered in 1908 that cepheid stars pulsate with a period that’s tied to their luminosity. Big, bright cepheids pulsate more slowly than small, dim ones (just as a big accordion is harder to compress than a tiny one). And so, from the pulsations of a distant cepheid, you can read off how intrinsically bright it is. Compare that to how faint the star appears, and you can tell its distance — and the distance of the galaxy it’s in.
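The distance step rests on the inverse-square law: a candle of known luminosity *L* seen with flux *F* sits at distance *d* = √(*L* / 4π*F*). A minimal sketch, with purely illustrative numbers:

```python
from math import pi, sqrt

# Inverse-square law: flux F = L / (4*pi*d^2), so once a standard candle's
# intrinsic luminosity L is known, its distance is d = sqrt(L / (4*pi*F)).
# The luminosity and flux values below are purely illustrative.
def distance(luminosity, flux):
    return sqrt(luminosity / (4 * pi * flux))

L = 1.0
d_near = distance(L, 1e-2)  # brighter-looking candle
d_far = distance(L, 1e-4)   # same candle at 1/100th the flux
print(d_far / d_near)  # ~10: a hundredfold drop in flux means ten times the distance
```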

In the 1920s, Hubble used cepheids and Leavitt’s law to infer that Andromeda and other “spiral nebulae” (as they were known) are separate galaxies, far beyond our Milky Way. This revealed for the first time that the Milky Way isn’t the whole universe — that the universe is, in fact, unimaginably vast. Hubble then used cepheids to deduce the distances to nearby galaxies, which, plotted against their speeds, revealed cosmic expansion.

Hubble overestimated the rate as 500 kilometers per second per megaparsec, but the number dropped as cosmologists used cepheids to calibrate ever more accurate cosmic distance ladders. From the 1970s on, the eminent observational cosmologist and Hubble protégé Allan Sandage argued that H0 was around 50. His rivals claimed a value around 100, based on different astronomical observations. The vitriolic 50-versus-100 debate was raging in the early ’80s when Freedman, a young Canadian working as a postdoc at the Carnegie Observatories in Pasadena, California, where Sandage also worked, set out to improve cosmic distance ladders.

To build a distance ladder, you start by calibrating the distance to stars of known luminosity, such as cepheids. These standard candles can be used to gauge the distances to fainter cepheids in farther-away galaxies. This gives the distances of “Type Ia supernovas” in the same galaxies — predictable stellar explosions that serve as much brighter, though rarer, standard candles. You then use these supernovas to gauge the distances to hundreds of farther-away supernovas, in galaxies that are freely moving in the current of cosmic expansion, known as the “Hubble flow.” These are the supernovas whose ratio of speed to distance gives H0.

But although a standard candle’s faintness is supposed to tell its distance, dust also dims stars, making them look farther away than they are. Crowding by other stars can make them look brighter (and thus closer). Furthermore, even supposed standard-candle stars have inherent variations due to age and metallicity that must be corrected for. Freedman devised new methods to deal with many sources of systematic error. When she started getting H0 values higher than Sandage’s, he became antagonistic. “To him, I was a young upstart,” she told me in 2017. Nevertheless, in the ’90s she assembled and led the Hubble Space Telescope Key Project, a mission to use the new Hubble telescope to measure distances to cepheids and supernovas with greater accuracy than ever before. The H0 value of 72 that her team published in 2001 split the difference in the 50-versus-100 debate.

Freedman was named director of Carnegie Observatories two years later, becoming Sandage’s boss. She was gracious and he softened. But “until his dying day,” she said, “he believed that the Hubble constant had a very low value.”

A few years after Freedman’s measurement of 72 to within 10% accuracy, Riess, who is a professor at Johns Hopkins University, got into the cosmic distance ladder game, setting out to nail H0 within 1% in hopes of better understanding the dark energy he had co-discovered. Since then, his SH0ES team has steadily tightened the ladder’s rungs — especially the first and most important: the calibration step. As Riess put it, “How far away is anything? After that, life gets easier; you’re measuring relative things.” SH0ES currently uses five independent ways of measuring the distances to their cepheid calibrators. “They all agree quite well, and that gives us a lot of confidence,” he said. As they collected data and improved their analysis, the error bars around H0 reduced to 5% in 2009, then 3.3%, then 2.4%, then 1.9% as of March.

Meanwhile, since 2013, the Planck team’s increasingly precise iterations of its cosmic microwave background map have enabled it to extrapolate the value for H0 ever more precisely. In its 2018 analysis, Planck found H0 to be 67.4 with 1% accuracy. With Planck and SH0ES more than “four sigma” apart, a desperate need arose for independent measurements.

Tommaso Treu, one of the founders of H0LiCOW and a professor at the University of California, Los Angeles, had dreamed ever since his student days in Pisa of measuring the Hubble constant using time-delay cosmography — a method that skips the rungs of the cosmic distance ladder altogether. Instead, you directly determine the distance to quasars — the flickering, glowing centers of faraway galaxies — by painstakingly measuring the time delay between different images of a quasar that form as its light bends around intervening matter.

But while Treu and his colleagues were collecting quasar data, Freedman, Madore and their graduate students and postdocs were pivoting to tip-of-the-red-giant-branch stars. Whereas cepheids are young and found in the crowded, dusty centers of galaxies, TRGBs are old and reside in clean galactic outskirts. Using the Hubble Space Telescope to observe TRGB stars in 15 galaxies that also contain Type Ia supernovas, Freedman’s CCHP team was able to extend their ladder to supernovas in the Hubble flow and measure H0, as an additional point of comparison for Planck’s 67.4 and SH0ES’ 74.0.

“At some level I guess the expectation in your own head is, ‘OK, you’re going to come out one way or the other,’ right?” Freedman told me. “And you sort of … fall in the middle. And, ‘Oh! That’s interesting. OK.’ And that’s where we came out.”

Stuck in the Middle

My seatmate on the van the morning after Freedman’s paper dropped was a theorist named Francis-Yan Cyr-Racine, of the University of New Mexico. Earlier this year, he, Lisa Randall of Harvard University, and others proposed a possible solution to the Hubble constant tension. Their idea — a new, short-lived field of repulsive energy in the early universe — would speed up cosmic expansion, matching predictions to observations, though this and all other proposed fixes strike experts as a bit contrived.

When I brought up Freedman’s paper, Cyr-Racine seemed unsurprised. “It’s probably 70,” he said of H0 — meaning he thinks early-universe predictions and present-day observations might ultimately converge in the middle, and ΛCDM will turn out to work fine. (He later said he was half kidding.)

In the seminar room, Barry Madore sat down by me and another reporter and asked, “So, where do you think all this is heading?” To the middle, apparently. “You know that song, ‘Stuck in the middle with you?’” he said. “Do you know the lyrics before? ‘Clowns to the left of me, jokers to the right. Here I am, stuck in the middle with you.’”

Another curveball came before lunch. Mark Reid of the Harvard-Smithsonian Center for Astrophysics presented new measurements of four masers — laserlike effects in galaxies that can be used to determine distances — that he had performed in the preceding weeks. Combined, the masers pegged H0 at 74.8, give or take 3.1. Adam Riess took a picture of the slide. Scolnic tweeted, “This week is too much. Go home H0, you’re drunk.”

When I spoke with Riess during the midday break, he seemed overwhelmed by all the new measurements. For several years, he said, he and his SH0ES colleagues had their “necks stuck out” in claiming a discrepancy with Planck’s Hubble constant value. “At that time, it was tension, and it was discrepancy, and, you know, we also got a lot of grief about it,” he said. But in two weeks, he had gone from “feeling fairly lonely” to having three new numbers to consider. Overall, Riess said, “the tension is getting greater because, you know, nobody is coming out below the Planck value.” If it was all a mistake, why didn’t some teams measure an expansion rate of 62 or 65?

As for that 69.8, Riess had questions about Freedman’s method of calibrating the first rung of her distance ladder using TRGBs in the Large Magellanic Cloud. “Now the Large Magellanic Cloud is not a galaxy; it’s a cloud. It’s a dusty, amorphous thing,” Riess said. “This is the great irony of it. They went to TRGBs to escape dust,” but they have to calibrate them somewhere — “that is, they have to pick some TRGBs where they say we know the distance by some other method. And the only place that they have done that in is the Large Magellanic Cloud.”

An hour later, Freedman, looking serene in a flower-print skirt, made her case. “If we put all our eggs in the cepheid basket, we will never uncover the unknown unknowns,” she said.

She explained that she and her colleagues had used TRGBs in the Large Magellanic Cloud as their calibrators because the cloud’s distance has been measured extremely precisely in multiple ways. And they employed a new approach to correct for the effect of dust on the brightness of the TRGBs — one that utilized the stars themselves, leveraging their changes in brightness as a function of color. She noted that her paired TRGBs and supernovas, on the second rung of her distance ladder, show less variation than Riess’s paired cepheids and supernovas, suggesting that her dust measurement may be more accurate.

Freedman stressed during the discussion that better measurements are still needed to rule out systematic errors. “I think that’s where we are,” she said. “That’s just reality.”

From here, the discussion turned into a sparring contest between Freedman and Riess. “Wendy, to answer your question,” Riess said, though she hadn’t asked one, “there have been five fairly independent results presented so far. The dream of getting there is — getting there.”

**The Room Where It Happens**

Scolnic, the SH0ES scientist and Riess collaborator, suggested we go outside. We sat on a sunny bench near the peach stucco building. A salty breeze blew in from the Pacific. “Definitely a day unlike any day I’ve experienced,” he said.

H0LiCOW’s new result felt to him like a year ago, what with Freedman’s TRGBs and Reid’s masers. “That’s three big hits all within the last week. And I don’t really know where we stand,” he said. Even if the discrepancy is real, “there’s no good story now which explains everything, on the theory or the observation side. And that’s what makes this so puzzling.”

“In ‘Hamilton’-speak,” he said, “this is the room where it happens right now.”

Freedman appeared from the direction of the bluffs overlooking the ocean.

“Hey, Wendy,” Scolnic said. “Wendy, I was just saying, doesn’t this feel like the room where it happens, in ‘Hamilton’-speak? Like, as a little kid, wouldn’t you want to be in this room?”

“Isn’t this where we want to be?” Freedman said. “We’re working on pretty amazing data. Things that are telling us something about how the universe is evolving.”

“And the numbers are this close; we’re arguing about a few percent,” Scolnic said. “For all the sociological drama, it’s funny that it’s about 3 kilometers per second per megaparsec.”

“You have the right attitude,” Freedman said.

It was time to attend the conference dinner, so they went to figure out how to get back in the building, which was locked after hours.

**New Physics**

Day three brought two new measurements of the Hubble constant: A cosmic distance ladder calibrated with “Mira” stars gave 73.6, and galactic surface brightness fluctuations gave 76.5, both plus or minus 4. Adam Riess took more photos, and by the end of the day a plot had been created reflecting all the existing measurements.

The two early-universe predictions studded the left side of the plot, with tight error bars around 67.4. Five late-universe measurements lined up on the right, around 73 or 74. And there in the middle was Freedman’s 69.8, the wrench in the works, the hole in the narrative, the painful conciliatory suggestion that all the measurements might come together in the end, leaving us with the mysteries of ΛCDM and nothing new to say about nature.

Then again, all the late-universe measurements of H0, even Freedman’s, fall to the right of 67.4. Erroneous measurements should come out low as well as high. So maybe the discrepancy is real.
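The spread between these numbers can be put in rough quantitative terms by asking how many combined standard deviations separate two measurements. A minimal sketch follows; note that the article quotes an uncertainty only for the maser result (±3.1), so the ±0.5 for Planck and ±1.4 for SH0ES used below are assumed values for illustration, not figures from the text:

```python
# Rough illustration of "tension": the gap between two H0 measurements
# in units of their combined uncertainty (added in quadrature).
import math

def tension_sigma(h1, err1, h2, err2):
    """Separation of two measurements in combined standard deviations."""
    return abs(h1 - h2) / math.sqrt(err1**2 + err2**2)

planck = (67.4, 0.5)    # early-universe prediction (error bar assumed here)
shoes  = (74.0, 1.4)    # SH0ES cepheid ladder (error bar assumed here)
masers = (74.8, 3.1)    # Reid's masers, with the uncertainty quoted above

print(round(tension_sigma(*planck, *shoes), 1))   # -> 4.4
print(round(tension_sigma(*planck, *masers), 1))  # -> 2.4
```

A gap of a few sigma is what turns a “few percent” disagreement into a statistically serious discrepancy, which is why the error bars matter as much as the central values.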

The last speaker, Cyr-Racine, held a vote about what the discrepancy should be called. Most people voted for “tension” or “problem.” Graeme Addison, an expert on baryon acoustic oscillations, said in an email after the conference, “My feeling is that the Hubble discrepancy is a real problem, and that we are missing some important physics somewhere. But the solutions people have put together so far are not super convincing.”

Addison finds the consistency of H0LiCOW and SH0ES especially compelling. And although Freedman’s paper suggested “that uncertainties associated with the SH0ES cepheids may have been underestimated,” he said there are also questions about the TRGB calibration in the Large Magellanic Cloud. Freedman claims to have improved the dust measurement, but Riess and colleagues contest this.

This past Monday, in a paper posted on arxiv.org, Riess and company argued that Freedman and her team’s calibration of TRGBs relied on some low-resolution telescope data. They wrote that swapping it out for higher-resolution data would increase the H0 estimate from 69.8 to 72.4 — in range of SH0ES, H0LiCOW and the other late-universe measurements. In response, Freedman said, “There seem to be some very serious flaws in their interpretation” of her team’s calibration method. She and her colleagues have redone their own analysis using the newer data and, she wrote in an email, “We DO NOT find what they are claiming.”

If the four new H0 measurements on the right can’t quite seem to overcome Freedman’s middle value in some people’s minds, it’s due partly to her usual equanimity. Additionally, “she is extremely well respected, and has a reputation for doing meticulous and thorough work,” said Daniel Holz, a Chicago astrophysicist who uses neutron star collisions as “standard sirens,” a promising new technique for measuring H0.

Meanwhile, the next data release from the Gaia space telescope, due in two or three years, will enable researchers to calibrate cepheids and TRGBs geometrically based on their parallax, or how far apart they look from different positions in the sky. The James Webb Space Telescope, Hubble’s successor, will also yield a wellspring of new and better data when it launches in 2021. Cosmologists will know the value of H0 — probably within the decade — and if there is still a discrepancy with predictions, by decade’s end they could be well on their way to discovering why. They’ll know it’s a tension or a crisis and not a surd.
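The geometric calibration Gaia enables rests on a simple relation: a star’s distance in parsecs is the reciprocal of its annual parallax angle in arcseconds. A minimal sketch, with a made-up example value (the function name and the sample parallax are illustrative, not from the article):

```python
# Parallax to distance: d [parsecs] = 1 / p [arcseconds].
# Measuring p directly anchors the first rung of the distance ladder
# geometrically, with no dependence on stellar physics.

def parallax_to_distance_pc(parallax_arcsec):
    """Distance in parsecs from an annual parallax in arcseconds."""
    if parallax_arcsec <= 0:
        raise ValueError("parallax must be positive")
    return 1.0 / parallax_arcsec

# A hypothetical cepheid with a parallax of 2 milliarcseconds:
print(parallax_to_distance_pc(0.002))  # -> 500.0 (parsecs)
```

Because the tiny angles involved shrink with distance, better astrometry directly extends how far out cepheids and TRGBs can be calibrated this way.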

Categories: Science News

All the world is built out of 17 known elementary particles. Carlo Rubbia led the team that discovered two of them. In 1984, Rubbia shared the Nobel Prize in Physics with Simon van der Meer for their “decisive contributions” to the experiment that, the year before, had turned up the W and Z bosons. These particles convey one of the four fundamental forces, called the weak force, which causes radioactive decay.

Rubbia ran the experiment, called Underground Area 1 (or UA1), a bold and ambitious project at CERN laboratory near Geneva that sought traces of W and Z bosons in the chaos of high-energy particle collisions. Hundreds of billions of protons and antiprotons were accelerated close to the speed of light and then smashed together. At the time, antiprotons — which have a habit of rapidly destroying themselves when they come into contact with matter — had never been produced in abundance. Some of Rubbia’s peers favored alternative collider and detector designs, believing that antimatter was too volatile to be controlled in this way.

“We had zillions of different ideas. There was a lot of competition, but you cannot do two things at the same time,” Rubbia said. In the end, UA1 prevailed, and delivered.

More than three decades later, particle physics once again finds itself at a crossroads. A decision looms about which big particle-collider experiment to build next — if indeed one is built at all. While CERN’s Large Hadron Collider (LHC) has performed flawlessly, its collisions have yielded no signs of new particles beyond the expected 17, whose properties and interactions are described by the Standard Model of particle physics. This model makes incredibly accurate predictions about those particles’ behavior, yet it’s also understood to be an incomplete description of our world. It fails to include the gravitational force or dark matter — the mysterious substance that astronomers consider to be about five times more abundant than normal matter — or account for the universe’s matter-antimatter imbalance. Moreover, many theorists feel uneasy about the Standard Model’s inability to explain its own basic truths, such as why there are three families of quarks and leptons, and what determines the particles’ masses.

Rubbia, who at 85 remains at the forefront of the field, isn’t fazed by the absence of “new physics” in the LHC data. He urges his peers to press on in search of more and better data and to trust that answers will come. The Higgs boson — the 17th piece in the Standard Model puzzle — materialized at the LHC in 2012, and now Rubbia wants to explore its characteristics in depth with a state-of-the-art “Higgs factory.”

How best to do this is still up for debate, with competing designs ranging from a circular electron-positron collider 100 kilometers in circumference to a plasma wakefield accelerator, a tabletop experiment in which electrons “surf” on a wave of rapidly accelerating plasma. To Rubbia, the choice is clear: An innovative muon collider, he says, could produce thousands of Higgs bosons in clean conditions at a fraction of the time and cost of other experiments. Muons are simple like electrons but far heavier and thus capable of higher-energy collisions. Critics say such a machine is still far beyond our current technical abilities. But while it may be a technological moonshot, a muon collider offers the prospect of a precision instrument that could also potentially turn up evidence of new particles beyond the Standard Model’s.

Rubbia has spent most of his long career at CERN, including a five-year stint as director-general beginning in 1989. He has also taken a leadership role at Gran Sasso National Laboratory in his native Italy, which is looking for signs of the decay of the proton. (If seen, this would also offer clues about physics beyond the Standard Model.) An engineer and constant inventor, Rubbia has spent part of the last three decades pursuing radically novel energy sources — such as a nuclear power reactor that is driven by a particle accelerator.

*Quanta* caught up with Rubbia last month at the 69th Lindau Nobel Laureate Meeting in Germany. There, he addressed hundreds of young scientists from around the world, making the case for a muon collider as the best bet for learning more about the universe’s fundamental building blocks. A sharply dressed man with piercing blue eyes, he spoke with zeal, both onstage and off. The interview has been condensed and edited for clarity.

Ha! I’ve never heard such a question! Particle accelerators are an essential part of the scientific program, which is fundamentally curiosity driven. And the discovery of the W and the Z bosons was one conclusion in the very long history of particle physics. There are particles of matter, like quarks and leptons, and those were reasonably well settled experimentally, but the question of the forces — that is, the particles which mediate the interactions between particles of matter — was something yet to be understood.

Now, the W and Z were postulated and discussed by many people, but the experimental realization required very high energies — for the time, at least. Don’t forget these fundamental choices are coming from nature, not from individuals. Theorists can do what they like, but nature is the one deciding in the end.

**So how did you create such high energies?**

We first of all had to learn how to construct a colliding beam machine instead of having a single accelerator. And so we modified an existing circular accelerator at CERN so that particles and antiparticles could be injected.

The question of accumulating antiprotons was a serious problem because antiprotons were barely discovered at Berkeley some years before — and they only made a handful of particles. Here we needed to make a hundred billion particles every morning. Not only that, but we had to cool them by an enormous factor so they could fit inside the circular accelerator. I have to acknowledge the enormous help and support of Léon Van Hove, John Adams and others; without them it would’ve been impossible.

After a few years, we eventually entered the energy domains of the W and Z, and indeed they were there! However, the proton-antiproton collisions were very complicated; a lot of other interactions were happening, so we went further. We transformed the CERN facilities from a proton-antiproton collider into an electron-positron collider, and we built a new ring: the 27-kilometer Large Electron-Positron Collider experiment. This produced tens of thousands of W and Z bosons, all in perfect, clean conditions. It led to more Nobel Prizes and completed the story of the W and the Z. Now of course, there’s one more piece missing, which is the Higgs boson.

**But haven’t we already found the Higgs?**

Yes, that was six years ago. The question now is: How can we produce Higgs bosons abundantly, in clean conditions? And that is requiring novelty.

The Higgs is the first and only scalar particle — meaning it has only size and no direction — that we have amongst the basic forces of nature. Every particle has a different story, and therefore this has to be studied and understood on its own. Unlike the other forces, the Higgs field has no preferred direction and looks the same when you reflect it in a mirror. Understanding it is at least as important as the observation of the W and the Z, and this will conclude the story of the elementary particles in the Standard Model.

**Not everyone agrees that time and resources should be focused on a “Higgs factory”; they say pushing to the next energy frontier to search for new particles should be a priority.**

You could build a circular machine three times the size of the Large Hadron Collider to collide electrons and positrons; you could upgrade the LHC, or even build a next-generation linear accelerator. Probing higher energies offers the hope of new physics — it could be supersymmetry, it could be something else, I don’t know what. But before exploring higher energies, it makes sense to me to build a muon collider, and to clarify the question of the Higgs first. Here we already have a particle that we want to explore. We may even find signs of new physics by studying the Higgs very precisely. For that we don’t need to go to a 100-kilometer-around tunnel. Think about how many days it takes to walk 100 kilometers! And it all has to be extremely functional, every single piece has to work — it’s a miracle if people succeed in making it work.

Remember, in the 1950s, Enrico Fermi said that by the year 2000, the accelerator ring would have the circumference of the Earth. It’s an absurd statement, of course, but there’s a point: Do we direct resources for the realization of mastodonic, gigantic devices, which might be achieved but will take 20 to 30 years? It took us 10 billion euros and 20 years to discover the Higgs particle. And so if you want to go further, it’s going to be more costly and more complicated.

There’s no doubt that there’s a solution: creating muon pairs. A muon experiment is a small ring, which is one-hundredth the size of the LHC. It can be done in existing accelerators: Both CERN and the European Spallation Source can produce enough protons to make a sufficient number of muons.

**You make it sound easy! Isn’t it still very difficult to create a narrow beam of muons — that is, to “cool” them — so that they can be used in an existing particle collider?**

Yes, there are huge challenges, but in my view there is nothing major which represents substantial risk. We have proposed a so-called initial cooling experiment which is done on a very small scale, in which we’ll start to build all the basic ideas. It requires a lot of tests and verification of the behavior of muon cooling, but it can be done in a few years.

Then to go from that to a big machine is something which can be done with conventional technologies. And it can be done with relatively small cost — by smart people of course — in a reasonably short time. There’s a lot of work to do, but what’s wrong with doing new work to improve things?

**Given that no new particles — aside from the anticipated Higgs boson — showed up at the LHC, what are the chances that the next accelerator will uncover new physics?**

Actually, there are other interesting experiments, not just accelerators. The neutrino experiments going on at the South Pole, for example, are becoming a new alternative to making bigger and more complex accelerator systems. And I think competition between the two is very productive; it will create the results of years to come.

I’m a bit concerned that the future of particle physics at CERN does not involve, so far, any new alternative after the termination of the LHC program. When I was responsible for the activities at CERN, whenever we had one machine, we had the next one coming. We need to have more courage, and collectively agree on alternatives.

**How did growing up under fascism and living through the Second World War influence your life?**

You don’t know what Europe was then. When I was 4 years old, I remember I had a radio that my father had constructed — radios then were very complicated systems with antennas and everything else. And we heard Hitler shouting on the radio. And then the war came: 88 million people were killed in a few years. We were all exposed to this terrible situation, a tremendous thing which left us completely at zero — and then we had to rebuild.

**Yet you seem to take an optimistic attitude in all your work.**

Oh yes, I’m very optimistic. You couldn’t afford not to be optimistic. Optimism was the most important thing after the vicissitudes of such a complicated history, and Europe has advanced tremendously since then. The integration of Europe through science is phenomenal. This is a very important success, passing from single countries — hating each other, fighting each other, at war with each other — to a situation where there is a complete consensus within Europe.

**You spent a couple of years doing research in the United States before moving to CERN in 1960. What drew you back to Europe?**

I wanted to work at CERN because I was not ready to totally give up my European nature and become an American citizen. A lot of European colleagues of mine moved to America and became lawful, perfectly acceptable American citizens. But I like to do things my own way, and so I wanted to come back, because I felt that Europe was a place where progress was possible. Indeed, particle science over the last few decades has been European.

**Aside from particle physics, you’ve also been heavily involved in pursuing sustainable energy technologies. Why is that?**

During my lifetime the population of the planet has gone up by a factor of three and a half, but the primary energy use has gone up by a factor of twelve — and there’s no way that the children who are born today will afford another twelve times on top of us. And so sustainable energy is a crucial problem, and I find it very exciting to see that there are new methods which can allow us to solve it.

**What novel method would you put your money on?**

We have renewable energies and fossil fuels, a little bit of nuclear too. Out of these, natural gas is extremely abundant: We have normal gas, but we also have shale gas, and we have clathrates — which maybe you don’t know about. They are found in the depths of the ocean, a combination of natural gas and water, and they are ten times more abundant than conventional natural gas. And so we have enough natural gas for thousands of years.

Now of course, natural gas produces carbon dioxide, and that’s the biggest worry that everyone has today. So how can you go ahead and produce these things without carbon dioxide emissions? Well, we’ve developed methods which allow us to prevent carbon dioxide emissions by taking the methane and, instead of producing carbon dioxide, producing black carbon and hydrogen. In principle this decomposition occurs at a very high temperature, about 2,000 degrees Celsius, which is impossible to use, but with our new methods with these new metals it can be done at 1,000 degrees.

**You often seem to push for leap-change technologies.**

Experimental physics is founded on curiosity-driven observation. You should never do what other people do. You have to do something unique because it allows you to make your own mistakes and to modify things — to change your mind 25 times before coming to the right solution. This is the kind of pragmatic way I have of proceeding in this field.

**Here at Lindau you’re speaking to hundreds of young scientists about your work and your ideas for the future. What do you hope to pass on to them?**

Young scientists have enough to do themselves without my opinion! We had the right to drive our own future, and they will have the right to master their own future. I can listen to them, but it’s not up to me to tell them what to do.

**Given the current crisis in particle physics, and the huge hurdles humanity faces in creating a sustainable future, are you still an optimist?**

I’m as optimistic today as I was in the past. The discussion is complicated, the choices are difficult, but so far over my long lifetime I’ve always seen the results to be positive. And so I’m sure a solution will also be found this time.


“It’s very easy to break things in biology,” said Loren Frank, a neuroscientist at the University of California, San Francisco. “It’s really hard to make them work better.”

Yet against the odds, researchers at the New York University School of Medicine reported earlier this summer that they had improved the memory of lab animals by tinkering with the length of a dynamic signal in their brains — a signal that has fascinated neuroscientists like Frank for decades. The feat is exciting in its own right, with the potential to enhance recall in people someday, too. But it also points to a more comprehensive way of thinking about memory, and it identifies an important clue, rooted in the duration of a neural event, that could pave the way to a greater understanding of how memory works.

Since the 1980s, scientists have been tuning in to short bursts of synchronized neural activity in the brain area called the hippocampus. The activity consists of complex, cascading electrical patterns that, when recorded, “sound like an explosion,” said Shantanu Jadhav, a neuroscientist at Brandeis University. Since their discovery, these “sharp wave ripples” have been associated with memory because they arise when neurons suddenly replicate their prior firing patterns in an accelerated surge, as if quickly replaying snippets of a previous experience. They do so when an animal is asleep, presumably to consolidate newly gained knowledge for long-term storage.

Over time, it has become clear that sharp wave ripples aren’t just a signature of passive memory consolidation; they’re also involved in more active memory-based processes, such as memory retrieval and the use of memories to guide new inferences. The hippocampal fireworks that dominate nap time also happen frequently in animals that are awake but idling and inattentive, or on the cusp of a decision, or exploring a new environment. “The information content in ripples can be a true replay of something that previously happened,” said Brad Pfeiffer, a neuroscientist at the University of Texas Southwestern Medical Center, “or it can be an imagination of something that might happen in the future.”

That is, sharp wave ripples “looked like what you’d imagine a memory looks like in the brain … or like explorations of possible future experiences” based on past ones, Frank said. Either way, the ripples “seemed well set up to be an important part of what drives learning in the brain, a critical aspect of the brain learning new things” — a “cognitive biomarker” for the many forms that memory can take.

Frank, Jadhav and their colleagues cemented that idea in 2012, when they used electrical pulses to disrupt sharp wave ripples in rats learning to navigate a three-pronged maze that resembled the head of a trident. When the animal was on one of the outer arms of the maze, it had to return to the center to get a reward; when it was on the middle arm, it had to go left if it had previously chosen to go right, and right if it had previously chosen to go left. The ripple interference had no effect on the rats’ performance in the first part of the task, when they only needed to go back to their starting point. But it caused a marked decline in their ability to alternate which path they took to the outer arms — the part of the task that required them to remember their earlier decision to go right or left.

Now, researchers led by the neuroscientist György Buzsáki have finally offered up proof positive that sharp wave ripples play a part in memory: Artificially prolonging the ripples in rats improved their performance on the same memory task that Frank’s group had used. The work was published in *Science* in June.

“I think the fact that they came up with a way to speed up learning by amplifying an actual existing pattern of activity was really creative and effective,” Frank said, “and really helps us make stronger associations between this pattern and learning and memory.”

But the achievement also highlights the importance of a feature of the ripples that researchers hadn’t really considered: their length. Scientists had previously observed that most naturally occurring ripples in the hippocampus lasted only about one-tenth of a second, although a small percentage of them persisted beyond that — but they hadn’t probed into whether the duration was relevant, or what it might be relevant for. “People knew the sharp wave ripples had different lengths, but I think they previously assumed it was random,” said A. David Redish, a neuroscientist at the University of Minnesota who did not participate in the study.

In fact, this kind of skewed distribution has appeared across multiple scales in the brain — in the firing rates of neurons, in the strength of synapses, and in the conduction velocity of axons, for example. Experts generally think that “it allows for a balance between competing requirements in dynamic systems,” which might facilitate greater stability or robustness, Buzsáki said. But he and his team decided to dig deeper into what that meant for sharp wave ripples in particular.
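The shape of such a skewed distribution is easy to sketch numerically. The log-normal form and its parameters below are illustrative assumptions, not a fit from the study; the point is only that a typical duration near one-tenth of a second can coexist with a small but real tail of much longer events:

```python
# Illustrative sketch of a skewed (here, log-normal) distribution of
# ripple durations. Parameters are assumptions chosen so the typical
# duration is about 0.1 s, as described in the article.
import math
import random

random.seed(0)
# lognormvariate(mu, sigma): the median of the distribution is exp(mu).
durations = [random.lognormvariate(math.log(0.1), 0.4) for _ in range(100_000)]

durations.sort()
median = durations[len(durations) // 2]
long_fraction = sum(d > 0.2 for d in durations) / len(durations)

print(f"median duration ~ {median:.2f} s")       # close to 0.1 s
print(f"fraction longer than 0.2 s: {long_fraction:.1%}")  # a small tail
```

In a distribution like this, most events cluster near the typical value while a few percent last more than twice as long, matching the pattern of mostly short ripples with a rare persistent minority.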

When the rats entered a new environment — when they first started navigating the three-pronged maze, for instance — the researchers noticed that the number of long ripples tended to be greater than usual. As the animals were repeatedly exposed to the same environment, the ripples lasted for less and less time. Other experimental comparisons led to the same conclusion: that longer ripples might be associated with tasks that require greater memory or cognitive load. “It’s almost like there’s some internal process that says, ‘I need longer sharp wave ripples,’” Redish said.

When Buzsáki and his colleagues stimulated the brain with light to make the ripples last longer, they found that other, related neurons joined the pattern. The hippocampus seemed to be replaying more of the sequence — in this case, reflecting more information about the rat’s previous route through the maze, “displaying the entire journey,” Buzsáki said.

“Extending the ripples actually lengthens the trajectories being reactivated,” Jadhav said. “There’s a strong possibility that this is a mechanism for imagining all of the available paths” the rat then has to choose from.

All in all, this shows that “the length of the sharp wave ripple matters,” Redish said. “It’s related to the information processed within it, and sharp wave ripples that are processing more information are more necessary for memory.”

Now, researchers can go back to their previous work to see if the length of ripples offers a new perspective on memory mechanisms. Redish, for one, had previously found that sharp wave ripples characterized the neural states of rats making new mental connections; in hindsight, he said, those ripples had also been longer. “So it could be that these longer sharp wave ripples are also involved in a better linking function.”

Yet such a possibility also raises a variety of other questions — what shorter ripples might be doing, what the ripples’ length might have to do with more distant memories or with future planning, and whether long and short ripples interact differently with brain regions outside of the hippocampus. (Buzsáki is now starting to pursue that last line of investigation.)

The focus on sharp wave ripples — and the task that Frank and Buzsáki have used to study it — is also intriguing because it encompasses many different ways of thinking about memory. “We have all these distinctions, that memory is the past, that planning and imagination are the future,” that one brain region is responsible for active short-term memory and another for “offline” long-term memory, Buzsáki said. “But the brain might not work this way. These things are not so easy to separate.”

Perhaps there’s an argument to be made, then, for studying memory “in more general terms,” he added.

When it comes to how to define memory and how best to study it, “I don’t necessarily think as a field we know what we’re doing yet,” Frank said. “We’re still stumbling in the dark.” But in these bursts of ripple activity, perhaps we will find some illumination.


Karel Říha’s mutant plants were too healthy. The molecular biologist, a postdoctoral researcher in Texas in the year 2000, was breeding botany’s leading model organism, *Arabidopsis*, an unremarkable weed in the mustard family. He had carefully chosen a strain with a mutation that robbed the plants of their ability to repair the caps at the end of their DNA. With every cell division, these protective telomeres grew shorter, hastening the plants’ inevitable genetic meltdown.

But two months became six, and successive generations of *Arabidopsis* continued to grow like the weeds they were. “It took me over a year to get the first signs of problems with telomeres,” said Říha, who now runs a lab at CEITEC (the Central European Institute of Technology) at Masaryk University in the Czech Republic. “This was the first sign there was something strange going on.”

The mutants were aging too slowly, and a decade would pass before he could figure out why. In recent years, Říha and other biologists have scrutinized how plants grow at the cellular level; some of the important details are unexpectedly murky. Even though the plants appear to make flowers from normal tissue, they may somehow be blocking mutations in body cells from getting into future generations — a strategy typically associated with animals. If it pans out, the controversial finding could overturn a century of conventional wisdom about how the plant kingdom evolves, lives and dies.

Cells accumulate genetic damage throughout their lives from environmental wear and tear, and especially from mishaps that occur while DNA is being copied for cell division. During the 19th century, even before DNA was recognized as the material of heredity, the German evolutionary biologist August Weismann theorized that organisms might protect their genetic heritage by establishing a “germline” tissue, kept isolated from exposure and replication. Today, biologists know that most animals earmark the cells they’ll someday use for procreation early, squirreling them away for safekeeping in what will become the reproductive organs. Decades of sunlight might degrade your skin’s DNA, but since you don’t make babies with your elbow, that damage goes with you to the grave.

Plants, however, are different. Reproductive structures, such as flowers or cones, can sprout from just about anywhere on a plant, which led botanists to conclude that plants don’t produce egg analogues until late in the game, just before reproducing. The cells eventually earmarked for reproduction are presumably drawn from standard body tissues, which means that any mutations arising in those cell lineages during the organism’s life could pass into the next generation, according to this thinking.

To some biologists, plants’ apparent lack of a germline seems like an advantageous feature that lets plant populations test out many mutations to find the ones that best match their changing environments. To others, however, it’s a puzzling bug that clashes with the observed longevity of many trees. If all the mutations in ancient, towering trees can live on, the trees should have trouble producing generation after generation of healthy seedlings. Now, a handful of researchers seek to shift the debate back to its foundational assumption: What if plants do have a mutation-throttling germline after all?

A few years ago, Robert Lanfear, who studies molecular evolution at the Australian National University, was mulling over the surprisingly low number of mutations that had turned up in the body cells of eucalyptus plants during a genomic analysis. Wondering how the plants might be keeping mutations in check, he decided to track down the origins of the no-germline idea. When he found that the supporting evidence was thin, he summarized his findings in a 2018 article for *PLOS Biology*.

“This took me at least 18 months of reading hundreds of papers, some dating all the way back to the 1920s,” he said. “I do think we were standing on shakier ground than we realized.”

One particular pair of results from 2016 spurred Lanfear to write his paper. If branches, like flowers, can grow from any cell, then the number of cell divisions between two points on a plant should depend on the amount of material between them. However, by painstakingly tracking the microscopic splitting of cells, a team from the University of Bern in Switzerland found that *Arabidopsis* and tomato branches, at least, are a bit pickier.

All plant cells ultimately derive from a crown of hundreds of cells sitting at the head of the plant’s main stalk — the apical meristem. Botanists have known for decades that slowly dividing stem cells in the center drive growth by producing daughter cells, which replicate faster on the periphery, pushing the meristem higher and eventually becoming leaves, branches and flowers. But it was unclear exactly how plants choose which cells should start these organs.

The Bern group’s tracing efforts suggested that new branches sprout directly from the stem cells, not from specialized leaf or branch cells. Just seven to nine cell divisions separated one branch from the next, regardless of how long the branch was or how big the plant grew. At that rate, the central stem cells at the tip of the furthest branch would have divided just dozens of times over the plant’s entire life span, a tiny figure compared with the organism’s trillions of cells. “That’s pretty weird,” Lanfear said. “The number is a lot lower than anyone thought.”

Months later, Říha’s *Arabidopsis* results finally came to light, doing for life span what the tomato research had done for growth. Skeptical reviewers had rejected the article in 2011, but the winds shifted after the Bern research. Following up on Říha’s Texas observations, his team had raised *Arabidopsis* mutants under normal and extended-light conditions and measured the telomere deterioration from one generation to the next as a proxy for how many times the plants had copied their DNA during their lifetime. Despite living three times longer than extended-light individuals, the normal-light *Arabidopsis* passed on DNA with telomeres only about 15% shorter — a much smaller reduction than the difference in their longevity had implied.

Říha finally understood why his plants in Texas had thrived for so many generations: Whatever cell lineage was shepherding *Arabidopsis*’s reproductive DNA, it was dividing at a glacial pace that had little to do with size or life span. In light of these results and those of the Bern group, the slowly dividing stem cells in the apical meristem seemed like prime candidates for the lineage in question. Říha now suspects that the organization of the apical meristem, with a few cells at the top dividing only when absolutely necessary while daughter cells below account for the lion’s share of the plant’s growth, may have evolved to safeguard reproductive DNA.

The strategy isn’t exactly the same as sealing off eggs in ovaries, as animals do, but keeping stem cells relatively dormant in the apical meristem might bring the same benefit — while body cells divide like crazy, accumulating errors, a few quiet stem cells keep their DNA clean for eventual reproduction. “It’s not like there is a strict segregation, which would put aside the germline very early and keep it aside,” Říha said. “It’s more like a pool of cells that are protected or shielded from extensive replication.”

Some biologists are starting to consider this tranquil pool of cells a “functional germline,” in a twist on a mid-20th-century French theory called the *méristème d’attente*, or “waiting meristem,” which suggested that plant embryos sequester germ cells early like many animals, and that the cells remain perfectly silent in the meristem until flowering. Botanists eventually discredited the theory by proving that all embryonic cells do divide as a plant grows, but Lanfear thinks the rejection went too far. There’s a lot of room, he says, between the extremes of setting germ cells aside immediately so they never divide again, and frenetic division all the way to reproduction. What’s more, he says, germline handling and timing vary from animal to animal, and expecting that plants all do it the same way would be naive.

But does the functional germline actually correlate with depressed mutation rates? Recent studies suggest it might. Trees send out branches as they grow taller, so the lower branches serve as snapshots of the plant’s genome when it was young. And affordable gene sequencing techniques are letting researchers assemble slideshows that reveal how DNA changes as trees age.

Struck by the majesty of an old oak on the University of Lausanne’s campus in Switzerland, Philippe Reymond, a molecular biologist, wondered how many mutations its upper reaches had accumulated after more than 200 years of growth under the sun. Sequencing the DNA of leaves from the oak’s bottom and top, his group used algorithms to search the genetic codes for typos — an effort he compares to copyediting two stacks of 100 Oxford English Dictionaries. In the end, they estimated that the uppermost leaves had accumulated no more than three dozen mutations — only one-tenth of what Reymond had expected based on the *Arabidopsis* research.

A functional germline with both genetic and physical shielding could explain how an ancient tree keeps itself so healthy genetically. “There must be a genetic control on why a few cells don’t divide — a strict genetic program,” Reymond said. “These layers of leaves could also be protection against UV light” for the stem cells beneath them.

*Arabidopsis*, tomatoes and oaks all belong to the flowering-plant group called angiosperms, but the first mutation rate estimated for individual conifer trees, published in June, tells a similar story. A team from the University of British Columbia (UBC) had professional tree climbers collect needles and bits of trunk from 200- to 500-year-old Sitka spruce trees. Sequencing select genome sections, they found a per-year mutation rate lower than that of many animals. Assuming that spruces botch DNA replication about as often as *Arabidopsis* does, the group concluded that the trees’ stem cells may lie quiet for years between divisions. “That is akin to a germline,” said Sally Aitken, a co-author and a professor of forestry at UBC.

Some, however, have reservations. Sarah Otto, an evolutionary biologist at UBC who also worked on the conifer study, points out that stem cells may have evolved to divide slowly for reasons unrelated to controlling mutations. A shoot apical meristem that remains small and quiet, for instance, might be the only type that stays balanced above the comparatively explosive growth of the plant below. To her, a more in-depth molecular analysis of the stem cells would be more convincing proof.

Laurence Hurst, an evolutionary geneticist at the Milner Center for Evolution at the University of Bath, also counsels caution because it’s hard to compare mutation rates across groups with different populations, life spans and sizes. To get to the root of mutations within individuals, he recently co-authored a paper, published in April in *PLOS Biology*, comparing hundreds of genomes taken from different parts of plants from the same eight species. “We’re trying to strip away all those variables,” he said. “It’s all from the same damn plant.”

Hurst finds Lanfear’s hypothesis intriguing, however, in part because one of the experiments in Hurst’s recent study goes a step beyond the functional germline idea, revealing the most suggestive evidence yet for an honest-to-goodness, animal-style separated germline.

Strawberries reproduce by putting out runners that grow into new plants when they hit the ground. Since these runners carry the DNA for future strawberry plants, germline theory would insist that the plant must keep runner mutations low. But the runners carried five mutations — twice as many as the reproductively inconsequential leaves. “It really had us scratching our heads,” Hurst said.

After carefully tracing the five mutations, however, the team found that only one made it into each of the offspring plants, and always the same one. The odds of each consecutive plant randomly drawing the same mutation were less than 1 in 1,000, Hurst calculated, suggesting another interpretation.

“What we’ve got is the most beautiful cell line tracer by accident,” he said. “This must be sitting in something like a germline.” If strawberries physically separate the cells in their runners, dedicating molecular resources to keeping the mutation rate low in the cells destined for reproduction, they could ignore the rest of the runner without many ill effects for their progeny.

“I think this is the best evidence that anyone’s ever come up with that some plants really do have a germline,” Hurst said. Next, he plans to get some runner samples under a microscope and look for physical differences.

Lanfear finds such results exciting but stresses that they are preliminary. At this point, most experiments involve small groups of plants and were not explicitly designed to search for plant germlines. Numerous technical caveats also complicate interpretation of the genetic results. “None of this evidence is totally bulletproof,” he said.

Putting the functional germline hypothesis on firmer ground calls for studying how the apical meristem divides, measuring mutation rates in more species, and tracking how many mutations make it into the next generation. Proving a separated germline will be even tougher. Ideally, researchers would trace a plant’s entire cell lineage and look for one meristem population that leads only to flowers, but plants are long lived, opaque and developmentally plastic, and the meristem’s leaf covering hides it from scientists as well as ultraviolet rays.

Despite the challenges, Lanfear thinks upcoming work will go a long way toward resolution. “I wouldn’t be surprised,” he said, “if within the next five years we have some pretty good answers to this question in lots of different species.”

Since germlines represent the genetic gates between generations, understanding to what degree species harness them could illuminate evolutionary trends across swaths of the tree of life. Hurst studies plants to test what he calls the “future shadow effect,” a hypothesis that the more important a stretch of DNA will be to future generations, the more an organism works to keep it safe. “The germline is just a crystallization of that,” he said.

Biologists have long puzzled over why lifetime cancer rates hold somewhat constant across species from mice to elephants, even though the latter have so many more cells that could go haywire. Plant cancers don’t metastasize because their cells don’t move, but random mutations cause harm in other ways. It stands to reason, then, that our leafy companions have developed techniques to keep their genetic machinery humming smoothly enough to allow them to grow so large. When it comes to managing their genes, every species seeks a balance between thrifty corner cutting and costly perfectionism. The question now is how much our respective strategies may overlap.

“The more we study plants, the more we find there are either similar strategies or similar findings in humans or animals,” Reymond said. “They’re different solutions but maybe for the same reasons or the same goal.”
