Einstein’s latest anniversary marks the birth of modern cosmology

First of two parts

Sometimes it seems like every year offers an occasion to celebrate some sort of Einstein anniversary.

In 2015, everybody lauded the 100th anniversary of his general theory of relativity. Last year, scientists celebrated the centennial of his prediction of gravitational waves — by reporting the discovery of gravitational waves. And this year marks the centennial of Einstein’s paper establishing the birth of modern cosmology.

Before Einstein, cosmology was not very modern at all. Most scientists shunned it. It was regarded as a matter for philosophers or possibly theologians. You could do cosmology without even knowing any math.

But Einstein showed how the math of general relativity could be applied to the task of describing the cosmos. His theory offered a way to study cosmology precisely, with a firm physical and mathematical basis. Einstein provided the recipe for transforming cosmology from speculation to a field of scientific study.

“There is little doubt that Einstein’s 1917 paper … set the foundations of modern theoretical cosmology,” Irish physicist Cormac O’Raifeartaigh and colleagues write in a new analysis of that paper.

Einstein had pondered the implications of his new theory for cosmology even before he had finished it. General relativity was, after all, a theory of space and time — all of it. Einstein showed that gravity — the driving force sculpting the cosmic architecture — was simply the distortion of spacetime geometry generated by the presence of mass and energy. (He constructed an equation to show how spacetime geometry, on the left side of the equation, was determined by the density of mass-energy, the right side.) Since spacetime and mass-energy account for basically everything, the entire cosmos ought to behave as general relativity’s equation required.
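In compact modern notation (not the form Einstein used in his 1917 paper), that equation is usually written as

G_{\mu\nu} = \frac{8 \pi G}{c^4} T_{\mu\nu}

where the left side, the Einstein tensor G_{\mu\nu}, encodes the geometry of spacetime, and the right side, the stress-energy tensor T_{\mu\nu}, encodes the density of mass and energy (G is Newton’s gravitational constant and c the speed of light).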

Newton’s law of gravity had posed problems in that regard. If every mass attracted every other mass, as Newton had proclaimed, then all the matter in the universe ought to have just collapsed itself into one big blob. Newton suggested that the universe was infinite, filled with matter, so that attraction inward was balanced by the attraction of matter farther out. Nobody really bought that explanation, though. For one thing, it required a really precise arrangement: One star out of place, and the balance of attractions disappears and the universe collapses. It also required an infinity of stars, making it impossible to explain why it’s dark at night. (There would be a star out there along every line of sight at all times.)

Einstein hoped his theory of gravity would resolve the cosmic paradoxes of Newtonian gravity. So in early 1917, less than a year after his complete paper on the general theory was published, he delivered a short paper to the Prussian Academy of Sciences outlining the implications of his theory for cosmology.
In that paper, titled “Cosmological Considerations in the General Theory of Relativity,” he started by noting the problems posed by using Newton’s gravity to describe the universe. Newtonian gravity, Einstein showed, would require a finite island of stars sitting in an infinite space, and over time such a collection of stars would evaporate. That problem disappears, Einstein said, if the universe is finite. Big, sure, but curved in such a way that it closes on itself, like a sphere.

Einstein’s mathematical challenge was to show that such a finite cosmic spacetime would be static and stable. (In those days nobody knew that the universe was expanding.) He assumed that on a large enough scale, the distribution of matter in this universe could be considered uniform. (Einstein said it was like viewing the Earth as a smooth sphere for most purposes, even though its terrain is full of complexities on smaller distance scales.) Matter’s effect on spacetime curvature would therefore be pretty much constant, and the universe’s overall condition would be unchanging.

All this made sense to Einstein because he had a limited view of what was actually going on in the cosmos. Like many scientists in those days, he believed the universe was basically just the Milky Way galaxy. All the known stars moved fairly slowly, consistent with his belief in a spherical cosmos with uniformly distributed mass. Unfortunately, general relativity’s math didn’t work if that was the case — it suggested the universe would not be stable. Einstein realized, though, that his view of the static spherical universe would succeed if he added a term to his original equation.

In fact, there were good reasons to include the term anyway. O’Raifeartaigh and colleagues point out that in his earlier work on general relativity, Einstein remarked in a footnote that his equation technically permitted the inclusion of an additional term. That didn’t seem to matter at the time. But in his cosmology paper, Einstein found that it was just the thing his equation needed to describe the universe properly (as Einstein then supposed the universe to be). So he added that factor, designated by the Greek letter lambda, to the left-hand side of his basic general relativity equation.
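In the same modern notation, the amended equation carries the new term, lambda multiplied by the metric tensor g_{\mu\nu}, on the geometry side:

G_{\mu\nu} + \Lambda g_{\mu\nu} = \frac{8 \pi G}{c^4} T_{\mu\nu}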

“That term is necessary only for the purpose of making possible a quasi-static distribution of matter, as required by the fact of the small velocities of the stars,” Einstein wrote in his 1917 paper. As long as the magnitude of this new term on the geometry side of the equation was small enough, it would not alter the theory’s predictions for planetary motions in the solar system.

Einstein’s 1917 paper demonstrated the mathematical effectiveness of lambda (also called the “cosmological constant”) but did not say much about its physical interpretation. In another paper, published in 1918, he commented that lambda represented a negative mass density — it played “the role of gravitating negative masses which are distributed all over the interstellar space.” Negative mass would counter the attractive gravity and prevent all the matter in Einstein’s spherical finite universe from collapsing.

As everybody now knows, though, there is no danger of collapse, because the universe is not static to begin with, but rather is rapidly expanding. After Edwin Hubble had established such expansion, Einstein abandoned lambda as unnecessary (or at least, set it equal to zero in his equation). Others built on Einstein’s foundation to derive the math needed to make sense of Hubble’s discovery, eventually leading to the modern view of an expanding universe initiated by a Big Bang explosion.

But in the 1990s, astronomers discovered that the universe is not only expanding, it is expanding at an accelerating rate. Such acceleration requires a mysterious driving force, nicknamed “dark energy,” exerting negative pressure in space. Many experts believe Einstein’s cosmological constant, now interpreted as a constant amount of energy with negative pressure infusing all of space, is the dark energy’s true identity.

Einstein might not have been surprised by all of this. He realized that only time would tell whether his lambda would vanish to zero or play a role in the motions of the heavens. As he wrote in 1917 to the Dutch physicist-astronomer Willem de Sitter: “One day, our actual knowledge of the composition of the fixed-star sky, the apparent motions of fixed stars, and the position of spectral lines as a function of distance, will probably have come far enough for us to be able to decide empirically the question of whether or not lambda vanishes.”

Hawk moths convert nectar into antioxidants

Hawk moths have a sweet solution to muscle damage.

Manduca sexta moths dine solely on nectar, but the sugary liquid does more than fuel their bodies. The insects convert some of the sugars into antioxidants that protect the moths’ hardworking muscles, researchers report in the Feb. 17 Science.

When animals expend a lot of energy, like hawk moths do as they rapidly beat their wings to hover at a flower, their bodies produce reactive molecules, which attack muscle and other cells. Humans and other animals eat foods that contain antioxidants that neutralize the harmful molecules. But the moths’ singular food source — nectar — contains few, if any, antioxidants.

So the insects make their own. They send some of the nectar sugars through an alternative metabolic pathway to make antioxidants instead of energy, says study coauthor Eran Levin, an entomologist now at Tel Aviv University. Levin and colleagues say this mechanism may have allowed nectar-loving animals to evolve into powerful, energy-intensive fliers.

Immune cells play surprising role in steady heartbeat

Immune system cells may help your heart keep the beat. These cells, called macrophages, usually protect the body from invading pathogens. But a new study published April 20 in Cell shows that in mice, the immune cells help electricity flow between muscle cells to keep the organ pumping.

Macrophages squeeze in between heart muscle cells, called cardiomyocytes. These muscle cells rhythmically contract in response to electrical signals, pumping blood through the heart. By “plugging in” to the cardiomyocytes, macrophages help the heart cells receive the signals and stay on beat.
Researchers have known for a couple of years that macrophages live in healthy heart tissue. But their specific functions “were still very much a mystery,” says Edward Thorp, an immunologist at Northwestern University’s Feinberg School of Medicine in Chicago. He calls the study’s conclusion that macrophages electrically couple with cardiomyocytes “paradigm shifting.” It highlights “the functional diversity and physiologic importance of macrophages, beyond their role in host defense,” Thorp says.

Matthias Nahrendorf, a cell biologist at Harvard Medical School, stumbled onto this electrifying find by accident.

Curious about how macrophages impact the heart, he tried to perform a cardiac MRI on a mouse genetically engineered to not have the immune cells. But the rodent’s heartbeat was too slow and irregular to perform the scan.
These symptoms pointed to a problem in the mouse’s atrioventricular node, a bundle of muscle fibers that electrically connects the upper and lower chambers of the heart. Humans with AV node irregularities may need a pacemaker to keep their heart beating in time. In healthy mice, researchers discovered macrophages concentrated in the AV node, but what the cells were doing there was unknown.
Isolating a heart macrophage and testing it for electrical activity didn’t solve the mystery. But when the researchers coupled a macrophage with a cardiomyocyte, the two cells began communicating electrically. That’s important, because the heart muscle cells contract thanks to electrical signals.

Cardiomyocytes have an imbalance of ions. In the resting state, there are more positive ions outside the cell than inside. But when a cardiomyocyte receives an electrical signal from a neighboring heart cell, that distribution switches. This momentary change causes the cell to contract and send the signal on to the next cardiomyocyte.

Scientists previously thought that cardiomyocytes were capable of this electrical shift, called depolarization, on their own. But Nahrendorf and his team found that macrophages aid in the process. A macrophage hooks up to a cardiomyocyte via a protein that directly connects the insides of the two cells, allowing the macrophage to transfer positive charges and give the cardiomyocyte a boost, somewhat like a jumper cable. This makes it easier for the heart cells to depolarize and trigger the heart contraction, Nahrendorf says.

“With the help of the macrophages, the conduction system becomes more reliable, and it is able to conduct faster,” he says.

Nahrendorf and colleagues found macrophages within the AV node in human hearts as well but don’t know if the cells play the same role in people. The next step is to confirm that role and explore whether the immune cells could be behind heart problems like arrhythmia, says Nahrendorf.

Long naps lead to less night sleep for toddlers

Like most moms and dads, my time in the post-baby throes of sleep deprivation is a hazy memory. But I do remember feeling instant rage upon hearing a popular piece of advice for how to get my little one some shut-eye: “sleep begets sleep.” The rule’s reasoning is unassailable: To get some sleep, my baby just had to get some sleep. Oh. So helpful. Thank you, lady in the post office and entire Internet.

So I admit to feeling some satisfaction when I came across a study that found an exception to the “sleep begets sleep” rule. The study quite reasonably suggests there is a finite amount of sleep to be had, at least for the 50 Japanese 19-month-olds tracked by researchers.

The researchers used activity monitors to record a week’s worth of babies’ daytime naps, nighttime sleep and activity patterns. The results, published June 9, 2016, in Scientific Reports, showed a trade-off between naps and night sleep. Naps came at the expense of night sleep: The longer the nap, the shorter the night sleep, the researchers found. And naps that stretched late into the afternoon seemed to push back bedtime.

In this study, naps didn’t affect the total amount of sleep each child got. Instead, the distribution of sleep across day and night changed. That means you probably can’t tinker with your toddler’s nap schedule without also tinkering with her nighttime sleep. In a way, that’s reassuring: It makes it harder to screw up the nap in a way that leads to a sleep-deprived child. If daytime sleep is lacking, your child will probably make up for it at night.

A sleeping child looks blissfully relaxed, but beneath that quiet exterior, the body is doing some incredible work. New concepts and vocabulary get stitched into the brain. The immune system hones its ability to bust germs. And limbs literally stretch. Babies grew longer in the four days right after they slept more than normal, scientists reported in Sleep in 2011. Scientists don’t yet know if this important work happens selectively during naps or night sleep.

Right now, both my 4-year-old and 2-year-old take post-lunch naps (and on the absolute best of days, those naps occur in glorious tandem). Their siestas probably push their bedtimes back a bit. But that’s OK with all of us. Long spring and summer days make it hard for my girls to go to sleep at 7:30 p.m. anyway. The times I’ve optimistically tried an early bedtime, my younger daughter insists I look out the window to see the obvious: “The sky is awake, Mommy.”

Here’s how an asteroid impact would kill you

It won’t be a tsunami. Nor an earthquake. Not even the crushing impact of the space rock. No, if an asteroid kills you, gusting winds and shock waves from falling and exploding space rocks will most likely be to blame. That’s one of the conclusions of a recent computer simulation effort that investigated the fatality risks of more than a million possible asteroid impacts.

In one extreme scenario, a simulated 200-meter-wide space rock whizzing along at 20 kilometers per second whacked London, killing more than 8.7 million people. Nearly three-quarters of that doomsday scenario’s lethality came from winds and shock waves, planetary scientist Clemens Rumpf and colleagues report online March 27 in Meteoritics & Planetary Science.

In a separate report, the researchers looked at 1.2 million potential impactors up to 400 meters across striking around the globe. Winds and shock waves caused about 60 percent of the total deaths from all the asteroids, the team’s simulations showed. Impact-generated tsunamis, which many previous studies suggested would be the top killer, accounted for only around one-fifth of the deaths, Rumpf and colleagues report online April 19 in Geophysical Research Letters.
“These asteroids aren’t an everyday concern, but the consequences can be severe,” says Rumpf, of the University of Southampton in England. Even asteroids that explode before reaching Earth’s surface can generate high-speed wind gusts, shock waves of pressure in the atmosphere and intense heat. Those rocks big enough to survive the descent pose even more hazards, spawning earthquakes, tsunamis, flying debris and, of course, gaping craters.

While previous studies typically considered each of these mechanisms individually, Rumpf and colleagues assembled the first assessment of the relative deadliness of the various effects of such impacts. The estimated hazard posed by each effect could one day help leaders make one of the hardest calls imaginable: whether to deflect an asteroid or let it hit, says Steve Chesley, a planetary scientist at NASA’s Jet Propulsion Laboratory in Pasadena, Calif., who was not involved with either study.

The 1.2 million simulated impactors each fell into one of 50,000 scenarios, which varied in location, speed and angle of strike. Each scenario was run with 24 different asteroid sizes, ranging from 15 to 400 meters across. Asteroids in nearly 36,000 of the scenarios, or around 72 percent, descended over water.
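The bookkeeping checks out: 50,000 scenarios × 24 asteroid sizes = 1,200,000 simulated impactors.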

The deadliness assessment began with a map of human populations and numerical simulations of the energies unleashed by falling asteroids. Those energies were then used alongside existing casualty data from studies of extreme weather and nuclear blasts to calculate the deadliness of the asteroids’ effects at different distances. Rumpf and his team focused on short-term impact effects, rather than long-term consequences such as climate change triggered by dust blown into the atmosphere.

(The kill count of each effect was calculated independently of the other effects, meaning people who could have died of multiple causes were counted multiple times. This double counting allows for a better comparison across effects, Rumpf says, but it does give deaths near the impact site more weight in calculations.)
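As a rough illustration of that bookkeeping — a minimal sketch in Python with made-up population and lethality numbers, not the study’s actual model — tallying each effect independently looks something like this:

# Hypothetical illustration of independent per-effect casualty counts.
# The effect names come from the article; all numbers here are invented.
effects = ["wind", "shock wave", "heat", "tsunami", "earthquake", "debris", "cratering"]

def casualties_by_effect(exposed_population, lethality):
    """Tally deaths for each effect separately; a person exposed to several
    lethal effects is counted once per effect (double counting)."""
    return {e: exposed_population * lethality.get(e, 0.0) for e in effects}

totals = casualties_by_effect(
    exposed_population=1_000_000,          # invented exposed population
    lethality={"wind": 0.30, "shock wave": 0.25, "heat": 0.20,
               "earthquake": 0.01, "debris": 0.01},  # invented kill fractions
)
print(totals)                 # per-effect counts
print(sum(totals.values()))   # exceeds the number of distinct victims

Summing the per-effect counts overstates the number of distinct victims near the impact site, which is the trade-off Rumpf describes.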
While the most deadly impact killed around 117 million people, many asteroids posed no threat at all, the simulations revealed. More than half of asteroids smaller than 60 meters across — and all asteroids smaller than 18 meters across — caused zero deaths. Rocks smaller than 56 meters wide didn’t even make it to Earth’s surface before exploding in an airburst. Those explosions could still be deadly, though, generating intense heat that burns skin, high-speed winds that hurl debris and pressure waves that rupture internal organs, the team found.

Tsunamis became the dominant killer for water impacts, accounting for around 70 to 80 percent of the total deaths from each impact. Even with the tsunamis, though, water impacts were only a fraction as deadly on average as land-hitting counterparts. That’s because impact-generated tsunamis are relatively small and quickly lose steam as they traverse the ocean, the researchers found.

Land impacts, on the other hand, cause considerable fatalities through heat, wind and shock waves and are more likely to hit near large population centers. For all asteroids big enough to hit the land or water surface, heat, wind and shock waves continued to cause the most casualties overall. Land-based effects, such as earthquakes and blast debris, resulted in less than 2 percent of total deaths.

Deadly asteroid impacts are rare, though, Rumpf says. Most space rocks bombarding Earth are tiny and harmlessly burn up in the atmosphere. Bigger meteors such as the 20-meter-wide rock that lit up the sky and shattered windows around the Russian city of Chelyabinsk in 2013 hit Earth only about once a century (SN Online: 2/15/13). Impacts capable of inducing extinctions, like the at least 10-kilometer-wide impactor blamed for the end of the dinosaurs 66 million years ago (SN: 2/4/17, p. 16), are even rarer, striking Earth roughly every 100 million years.
But asteroid impacts are scary enough that today’s astronomers scan the sky with automated telescopes scouting for potential impactors. So far, they’ve cataloged 27 percent of space rocks 140 meters or larger estimated to be whizzing through the solar system. Other scientists are crunching the numbers on ways to divert an earthbound asteroid. Proposals include whacking the asteroid like a billiard ball with a high-speed spacecraft or frying part of the asteroid’s surface with a nearby nuclear blast so that the vaporized material propels the asteroid away like a jet engine.

The recent research could offer guidance on how people should react to an oncoming impactor: whether to evacuate or shelter in place, or to scramble to divert the asteroid. “If the asteroid’s in a size range where the damage will be from shock waves or wind, you can easily shelter in place a large population,” Chesley says. But if the heat generated as the asteroid falls, impacts or explodes “becomes a bigger threat, and you run the risk of fires, then that changes the response of emergency planners,” he says.
Making those tough decisions will require more information about compositions and structures of the asteroids themselves, says Lindley Johnson, who serves as the planetary defense officer for NASA in Washington, D.C. Those properties in part determine an asteroid’s potential devastation, and the team didn’t consider how those characteristics might vary, Johnson says. Several asteroid-bound missions are planned to answer such questions, though the recent White House budget proposal would defund a NASA project to reroute an asteroid into the moon’s orbit and send astronauts to study it (SN Online: 3/16/17).

In the case of a potential impact, making decisions based on the average deaths presented in the new study could be misleading, warns Gareth Collins, a planetary scientist at Imperial College London. A 60-meter-wide impactor, for instance, caused on average about 6,300 deaths in the simulations. Just a handful of high-fatality events inflated that average, though, including one scenario that resulted in more than 12 million casualties. In fact, most impactors of that size struck away from population centers and killed no one. “You have to put it in perspective,” Collins says.

Why create a model of mammal defecation? Because everyone poops

An elephant may be hundreds of times larger than a cat, but when it comes to pooping, it doesn’t take the elephant hundreds of times longer to heed nature’s call. In fact, both animals will probably get the job done in less than 30 seconds, a new study finds.

Humans would probably fit in that time frame too, says Patricia Yang, a mechanical engineering graduate student at the Georgia Institute of Technology in Atlanta. That’s because elephants, cats and people all excrete cylindrical poop. The size of all those animals varies, but so does the thickness of the mucus lining in each animal’s large intestine, so no matter the mammal, everything takes about the same time — an average of 12 seconds — to come out, Yang and her colleagues conclude April 25 in Soft Matter.

But the average poop time is not the real takeaway here (though it will make a fabulous answer to a question on Jeopardy one day). Previous studies on defecation have largely come from the world of medical research. “We roughly know how it happened, but not the physics of it,” says Yang.

Looking more closely at those physical properties could prove useful in a number of ways. For example, rats are often good models for humans in disease research, but they aren’t when it comes to pooping because rats are pellet poopers. (They’re not good models for human urination, either, because their pee comes out differently than ours, in high-speed droplets instead of a stream.)

Also, since the thickness of the mucus lining is dependent on animal size, it would be better to find a more human-sized stand-in. Such work could help researchers find new treatments for constipation and diarrhea, in which the mucus lining plays a key role, the researchers note.

Animal defecation may seem like an odd topic for a mechanical engineer to take on, but Yang notes that the principles of fluid dynamics apply inside the body and out. Her previous research includes a study on animal urination, finding that, as with pooping, the time it takes for mammals to pee also falls within a small window. (The research won her group an Ig Nobel Prize in 2015.)

And while many would find this kind of research disgusting, Yang does not. “Working with poop is not that bad, to be honest,” she says. “It’s not that smelly.” Plus, she gets to go to the zoo and aquarium for her research rather than be stuck in the lab.
But the research does involve a lot of poop — and watching it fall. For the study, the researchers timed how long it took animals to defecate and calculated the velocity of the feces of 11 species. They filmed dogs at a park and elephants, giant pandas and warthogs at Zoo Atlanta. They also dug up 19 YouTube videos of mammals defecating. Surprisingly, there are a lot of those videos available, though not many were actually good for the research. “We wanted a complete event, from beginning to end,” Yang notes. Apparently not everyone interested in pooping animals bothers to capture the full fall of the feces.

The researchers also examined feces from dozens of mammal species. (They fall into two classes: Carnivores defecate “sinkers,” since their feces are full of heavy indigestible ingredients like fur and bones. Herbivores defecate less-dense “floaters.”) And they considered the thickness and viscosity of the mucus that lines mammals’ intestines and helps everything move along, as well as the rectal pressure that pushes the material out. All this information went into a mathematical model of mammal defecation — which revealed the importance of the mucus lining.

Yang isn’t done with this line of research. The model she and her colleagues created applies only to mammals that poop like we do. There are still the pellet poopers, like rats and rabbits, and wombats, whose feces look like rounded cubes. “I would like to complete the whole set,” she says. And, “if you’ve got a good team, it’s fun.”

When it’s hot, plants become a surprisingly large source of air pollution

Planting trees is often touted as a strategy to make cities greener, cleaner and healthier. But during heat waves, city trees actually boost air pollution levels. When temperatures rise, as much as 60 percent of ground-level ozone is created with the help of chemicals emitted by urban shrubbery, researchers report May 17 in Environmental Science & Technology.

While the findings seem counterintuitive, “everything has multiple effects,” says Robert Young, an urban planning expert at the University of Texas at Austin, who was not involved with the study. The results, he cautions, do not mean that programs focused on planting trees in cities should stop. Instead, more stringent measures are needed to control other sources of air pollution, such as vehicle emissions.
Benefits of city trees include helping reduce stormwater runoff, providing cooling shade and converting carbon dioxide to oxygen. But research has also shown that trees and other shrubs release chemicals that can interact with their surrounding environment, producing polluted air. One, isoprene, can react with human-made compounds, such as nitrogen oxides, to form ground-level ozone, a colorless gas that can be hazardous to human health. Monoterpenes and sesquiterpenes also react with nitrogen oxides, and when they do, lots of tiny particles, similar to soot, build up in the air. In cities, cars and trucks are major sources of these oxides.

In the new study, Galina Churkina of Humboldt University of Berlin and colleagues compared simulated concentrations of chemicals emitted by plants in the Berlin-Brandenburg metropolitan area during two summers: 2006, when there was a heat wave, and 2014, when temperatures were more typical.

At normal daily maximum summer temperatures, roughly 25° Celsius on average, plants’ chemical emissions contributed to about 6 to 20 percent of ozone formation in the simulations. At peak temperatures during the heat wave, when temperatures soared to over 30°C, plant emissions spiked, boosting their share of ozone formation to up to 60 percent. Churkina says she and colleagues were not surprised to see the seemingly contrary relationship between plants and pollution. “Its magnitude was, however, quite amazing,” she says.

The results, she notes, suggest that campaigns to add trees to urban spaces can’t be done in isolation. Adding trees will improve quality of life only if such campaigns are combined with the radical reduction of pollution from motorized vehicles and the increased use of clean energy sources, she says.

How a flamingo balances on one leg

A question flamingo researchers get asked all the time — why the birds stand on one leg — may need rethinking. The bigger puzzle may be why flamingos bother standing on two.

Balance aids built into the birds’ basic anatomy allow for a one-legged stance that demands little muscular effort, tests find. This stance is so exquisitely stable that a bird sways less to keep itself upright when it appears to be dozing than when it’s alert with eyes open, two Atlanta neuromechanists report May 24 in Biology Letters.
“Most of us aren’t aware that we’re moving around all the time,” says Lena Ting of Emory University, who measures what’s called postural sway in standing people as well as in animals. Just keeping the human body vertical demands constant sensing and muscular correction for wavering. Even standing robots “are expending quite a bit of energy,” she says. That could have been the case for flamingos, she points out, since effort isn’t always visible.
Ting and Young-Hui Chang of the Georgia Institute of Technology tested balance in fluffy young Chilean flamingos coaxed onto a platform attached to an instrument that measures how much they sway. Keepers at Zoo Atlanta hand-rearing the test subjects let researchers visit after feeding time in hopes of catching youngsters inclined toward a nap — on one leg on a machine. “Patience,” Ting says, was the key to any success in this experiment.

As a flamingo standing on one foot shifted to preen a feather or joust with a neighbor, the instrument tracked wobbles in the foot’s center of pressure, the spot where the bird’s weight focused. When a bird tucked its head onto its pillowy back and shut its eyes, the center of pressure made smaller adjustments (within a radius of 3.2 millimeters on average, compared with 5.1 millimeters when active).
Museum bones revealed features of the skeleton that might enhance stability, but bones alone didn’t tell the researchers enough. Deceased Caribbean flamingos, donated to science by a zoo, gave a better view. “The ‘ah-ha!’ moment was when I said, ‘Wait, let’s look at it in a vertical position,’” Ting remembers. All of a sudden, the bird specimen settled naturally into one-legged lollipop alignment.

In flamingo anatomy, the hip and the knee lie well up inside the body. What bends in the middle of the long flamingo leg is not a knee but an ankle (which explains why to human eyes a walking flamingo’s leg joint bends the wrong way). The bones themselves don’t seem to have a strict on-off locking mechanism, though Ting has observed bony crests, double sockets and other features that could facilitate stable standing.

The bird’s distribution of weight, however, looked important for one-footed balance. The flamingo’s center of gravity was close to the inner knee where bones started to form the long column to the ground, giving the precarious-looking position remarkable stability. The specimen’s body wasn’t as stable on two legs, the researchers found.
Reinhold Necker of Ruhr University in Bochum, Germany, is cautious about calling one-legged stances an energy saver. “The authors do not consider the retracted leg,” says Necker, who has studied flamingos. Keeping that leg retracted could take some energy, even if easy balancing saves some, he proposes.

The new study takes an important step toward understanding how flamingos stand on one leg, but doesn’t explain why, comments Matthew Anderson, a comparative psychologist at St. Joseph’s University in Philadelphia. He’s found that more flamingos rest one-legged when temperatures drop, so he proposes that keeping warm might have something to do with it. The persistent flamingo question still stands.

Citizen scientists join the search for Planet 9

Astronomers want you in on the search for the solar system’s ninth planet.

In the online citizen science project Backyard Worlds: Planet 9, space lovers can flip through space images and search for this potential planet as well as other far-off worlds awaiting discovery.

The images, taken by NASA’s Wide-field Infrared Survey Explorer satellite, offer a peek at a vast region of uncharted territory at the far fringes of the solar system and beyond. One area of interest is a ring of icy rocks past Neptune, known as the Kuiper belt. Possible alignments among the orbits of six objects out there hint that a ninth planet exerting its gravitational influence lurks in the darkness (SN: 7/23/16, p. 9). The WISE satellite may have imaged this distant world, and astronomers just haven’t identified it yet. Dwarf planets, free-floating worlds with no solar system to call home (SN: 4/4/15, p. 22) and failed stars may also be hidden in the images.
The WISE satellite has snapped the entire sky several times, resulting in millions of images. With so many snapshots to sift through, researchers need extra eyes. At the Backyard Worlds website, success in spotting a new world requires sharp sight. You have to stare at what seems like thousands of fuzzy dots in a series of four false-color infrared images taken months to years apart and identify faint blobs that appear to move. Spot that movement and you may have found a new world.

But you can’t let blurry spots or objects moving in only a couple of the frames fool you: Image artifacts can look like convincing space objects. True detections come from slight shifts in the positions of red or whitish-blue dots. With so many dots to track, it’s best to break up an image into sections and then click through the four images section by section. This process can take hours. But think of the payoff — discovering a distant world no one has observed before.

Once you’ve marked any potential object of interest, the project’s astronomers take over. Jackie Faherty of the American Museum of Natural History in New York City and colleagues cross-reference the object’s coordinates with databases of celestial worlds. If the object does, in fact, appear to be a newbie, the team requests time on other telescopes to do follow-up. Those studies can reveal whether the object is a failed star or a planet.

So far, tens of thousands of citizen scientists have scoured images at Backyard Worlds. The team has identified five possible failed stars and had its first paper accepted for publication.

But there’s still much more to explore: The elusive Planet Nine might still be out there, disguised as a flash of dots.

When it comes to the flu, the nose has a long memory

After an influenza infection, the nose recruits immune cells with long memories to keep watch for the virus, research with mice suggests.

For the first time, this type of immune cell — known as tissue resident memory T cells — has been found in the nose, researchers report June 2 in Science Immunology. Such nasal resident memory T cells may prevent flu from recurring. Future nasal spray vaccines that boost the number of these T cells in the nose might be an improvement over current flu shots, researchers say.
It’s known that some T cell sentinels take up residence in specific tissues, including the brain, liver, intestines, skin and lungs. In most of these tissues, the resident memory T cells start patrolling after a localized infection. “They’re basically sitting there waiting in case you get infected with that pathogen again,” says Linda Wakim, an immunologist at the University of Melbourne in Australia. If a previous virus invades again, the T cells can quickly kill infected cells and make chemical signals, called cytokines, to call in other immune cells for reinforcement. These T cells can persist for years in most tissues.

It’s different in the lungs. There, resident memory T cells have shorter-term memories than ones that reside in other tissues, scientists have previously found. To see if all tissues in the respiratory tract have similarly forgetful immune cells, Wakim and colleagues tagged immune cells in mice and sprayed flu virus in the rodents’ noses. After infection, resident memory T cells settled into the nasal tissue. The researchers haven’t yet dissected any human noses, but it’s a pretty good bet they also contain resident memory T cells, Wakim says.
Unlike in the lungs, the nose T cells had long memories, persisting for at least a year. “For mice, that’s quite a long time, almost a third of their life,” Wakim says. She doesn’t yet know why there’s a difference between nose and lung T cell memories, but finding out may enable researchers to boost lung T cell memory.
Still, with nose T cells providing security, the lungs might not need much flu-fighting memory. Memory T cells that patrol only the upper respiratory tract could stop viruses from ever reaching the lungs, Wakim’s team found. An injection of virus under the skin didn’t produce any resident memory T cells in the respiratory tract. Those findings could mean that vaccines delivered via nasal spray instead of shots might stimulate memory T cell growth in the nose and could protect lungs from damage as well. A nasal spray called FluMist has had variable results in people. No one knows if that vaccine can produce nasal memory T cells.
It’s not surprising to find that the nose has its own resident memory T cell security force, says Troy Randall, a pulmonary immunologist at the University of Alabama at Birmingham. “But it’s a good thing to know and certainly they’re the first to show it.”

The discovery may direct some research away from the lungs and toward the nose, Randall says. Future research should focus on how the resident memory T cells work with memory B cells that produce antibodies against viruses and bacteria, he suggests.