Star-starved galaxies fill the cosmos

Not all galaxies sparkle with stars. Galaxies as wide as the Milky Way but bereft of starlight are scattered throughout our cosmic neighborhood. Unlike Andromeda and other well-known galaxies, these dark beasts have no grand spirals of stars and gas wrapped around a glowing core, nor are they radiant balls of densely packed stars. Instead, researchers find just a wisp of starlight from a tenuous blob.

“If you took the Milky Way but threw away about 99 percent of the stars, that’s what you’d get,” says Roberto Abraham, an astrophysicist at the University of Toronto.
How these dark galaxies form is unclear. They could be a whole new type of galaxy that challenges ideas about the birth of galaxies. Or they might be outliers of already familiar galaxies, black sheep shaped by their environment. Wherever they come from, dark galaxies appear to be ubiquitous. Once astronomers reported the first batch in early 2015 — which told them what to look for — they started picking out dark denizens in many nearby clusters of galaxies. “We’ve gone from none to suddenly over a thousand,” Abraham says. “It’s been remarkable.”
This haul of ghostly galaxies is puzzling on many fronts. Any galaxy the size of the Milky Way should have no trouble creating lots of stars. But it’s still unclear how heavy the dark galaxies are. Perhaps these shadowy entities are failed galaxies, as massive as our own but mysteriously prevented from giving birth to a vast stellar family. Or despite being as wide as the Milky Way, they could be relative lightweights stretched thin by internal or external forces.
Either way, with so few stars, dark galaxies must have enormous deposits of unseen matter to resist being pulled apart by the gravity of other galaxies.

Astronomers can’t resist a good cosmic mystery. With detections of these galactic oddballs piling up, there is a push to figure out just how many of these things are out there and where they’re hiding. “There are more questions than answers,” says Remco van der Burg, an astrophysicist at CEA Saclay in France. Cracking the code of dark galaxies could provide insight into how all galaxies, including the Milky Way, form and evolve.
Compound eye on the sky
Telescopes designed to detect faint objects have revealed the presence of many sizable but near-empty galaxies — officially known as “ultradiffuse galaxies.” The deluge of discoveries started in New Mexico, with a telescope that looks more like a honeycomb than a traditional observatory. Sitting in a park about 110 kilometers southwest of Roswell (a city that has turned extraterrestrials into a tourism industry), the Dragonfly telescope consists of 48 telephoto lenses; it started with three in 2013 and continues to grow. The lenses are divided evenly between two steerable racks, and each lens is hooked up to its own camera. Partly inspired by the compound eye found in dragonflies and other insects, this relatively small scope has revealed dim galaxies missed by other observatories.

The general rule for telescopes is that bigger is better. A large mirror or lens can collect more light and therefore see fainter objects. But even the biggest telescopes have a limitation: unwanted light. Every surface in a telescope is an opportunity for light coming in from any direction to reflect onto the image. The scattered light shows up as dim blobs, or “ghosts,” that can wash out faint detail in pictures of space or even mimic very faint galaxies.

Large dark galaxies look a lot like these ghosts, and so went unnoticed. But Dragonfly was designed to keep these splashes of light in check. Unlike most conventional professional telescopes, it has no mirrors. Precision antireflection coatings on the lenses keep scattered light to a minimum. And having multiple cameras pointed at the same part of the sky helps distinguish blobs of light bouncing around in the telescope from blobs that actually sit in deep space. If the same blob shows up in every camera, it’s probably real.
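That cross-checking step amounts to a coincidence test. Below is a minimal sketch of the idea in Python — not Dragonfly’s actual software; the detections, coordinates and matching tolerance are invented for illustration — in which a candidate blob counts as real only if every camera records a source at roughly the same sky position.

```python
# Minimal sketch of a cross-camera coincidence test, loosely inspired by
# the Dragonfly strategy described above. This is NOT the observatory's
# actual pipeline; the detections, coordinates and tolerance are invented
# for illustration. Positions are treated as flat (small-field) offsets.
from math import hypot

def is_real_source(candidate, detections_per_camera, tol_arcsec=2.0):
    """Keep a candidate blob only if every camera detects a source
    within `tol_arcsec` of the same sky position.

    candidate: (x_arcsec, y_arcsec) position of the suspect blob.
    detections_per_camera: one list of (x_arcsec, y_arcsec) detections
        per camera pointed at the same patch of sky.
    """
    cx, cy = candidate
    for detections in detections_per_camera:
        # A reflection "ghost" lands in different spots in different
        # cameras, so it fails this test in at least one of them.
        if not any(hypot(cx - x, cy - y) <= tol_arcsec for x, y in detections):
            return False
    return True

# Toy example: three cameras all see something near (100.0, 50.0),
# so the blob passes; a ghost seen by only one camera would not.
cams = [[(100.1, 50.0), (250.0, 75.0)],
        [(99.9, 50.1)],
        [(100.0, 49.9), (10.0, 10.0)]]
print(is_real_source((100.0, 50.0), cams))  # True
```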

“It’s a very clever idea, very brilliant,” says astronomer Jin Koda of Stony Brook University in New York. “Dragonfly made us realize that there is a chance to find a new population of galaxies beyond the boundary of what we know so far.”

In spring 2014, researchers pointed Dragonfly at the well-studied Coma cluster, a conglomeration of thousands of galaxies. At a distance of about 340 million light-years, Coma is a close, densely packed collection of galaxies and a rich hunting ground for astronomers. A team led by Abraham and astronomer Pieter van Dokkum of Yale University was looking at the edges of galaxies for far-flung stars and stellar streams, evidence of the carnage left behind after small galaxies collided to build larger ones.
They were not expecting to find dozens of galaxies hiding in plain sight. “People have been studying Coma for 80 years,” Abraham says. “How could we find anything new there?” And yet, scattered throughout the cluster appeared 47 dark galaxies, many of them comparable in size to the Milky Way — tens of thousands to hundreds of thousands of light-years across (SN: 12/13/14, p. 9). This was perplexing. A galaxy that big should have no problem forming lots of stars, van Dokkum and colleagues noted in September in Astrophysical Journal Letters.

Hidden strength
Even more surprising, says Abraham, is that those galaxies survive in Coma, a cluster crowded with galactic bullies. A galaxy’s own gravity holds it together, but gravity from neighboring galaxies can pull hard enough to tear apart a smaller one. To create sufficient gravity to survive, a galaxy needs mass in the form of stars, gas and other cosmic matter. In a place like Coma, a galaxy needs to be fairly massive or compact. But with so few stars (and presumably so little mass) spread over a relatively large space, dark galaxies should have been shredded long ago. They are either recent arrivals to Coma or a lot stronger than they appear.

From what researchers have learned so far, dark galaxies seem to have been lurking for many billions of years. They are located throughout their home clusters, suggesting that they’ve had a long time to spread out among the other galaxies. And the few stars they do have are mostly red, indicating that they are very old. With this kind of long-term survival, dark galaxies probably have a hidden strength, most likely due to dark matter.

All galaxies are loaded with dark matter, a mysterious substance that reveals itself only via gravitational interactions with luminous gas and stars. Much of that dark matter sits in an extended blob (known as the halo) that reaches well beyond the visible edge of a galaxy. On average, dark matter accounts for about 85 percent of all the matter in the universe. Within the central regions of the dark galaxies in Coma, dark matter must make up about 98 percent of the mass for there to be enough gravity to keep the galaxy intact, van Dokkum and colleagues say. In other words, within their central regions these dark galaxies appear to hold a fraction of dark matter similar to what the Milky Way holds across its entire, much broader halo.

Astronomers had never seen such a strong preference for dark matter in galaxies so large. The initial cache of galactic enigmas lured a slew of researchers to the hunt. They pored over existing images of Coma and other clusters, looking for more dark galaxies. These galaxies are so faint that they could easily blend in with a cluster’s background light or be mistaken for reflections within a telescope. But once the galaxy hunters knew what to look for, they were not disappointed — those first 47 were just the tip of the iceberg.

Looking at old images of Coma taken by the Subaru telescope in Hawaii, Koda and colleagues easily confirmed that those 47 were really there. But that wasn’t all. They found a total of 854 dark galaxies, 332 of which appeared to be roughly the size of the Milky Way (SN: 7/25/15, p. 11). They calculated that Coma could harbor more than 1,000 dark galaxies of all sizes — comparable to its number of known galaxies. Astronomer Christopher Mihos of Case Western Reserve University in Cleveland and colleagues, reporting in 2015 in Astrophysical Journal Letters, found three more in the Virgo cluster, a more sparsely populated but closer gathering of galaxies that’s a mere 54 million light-years away.

In June, van der Burg and collaborators reported another windfall in Astronomy & Astrophysics. Using the Canada-France-Hawaii Telescope atop Mauna Kea in Hawaii, they measured the masses of several galaxy clusters. Taking a closer look at eight clusters, all less than about 1 billion light-years away, the group found roughly 800 more ultradiffuse galaxies.

“As we go to bigger telescopes, we find more and more,” says Michael Beasley, an astrophysicist at Instituto de Astrofísica de Canarias in Santa Cruz de Tenerife, Spain. “We don’t know how many there are, but we know there are a lot of them.” There could even be more dark galaxies than bright ones.

Nature vs. nurture
What dark galaxies are and how they formed is still a mystery. There are many proposals, but with so little data, few conclusions. For the vast majority of dark galaxies, researchers know only how big and how bright each one is. Three so far have had their masses measured. Of those, two appear to have more in common masswise with some of the small galaxies that orbit the Milky Way, while the third is as massive as our galaxy itself — roughly 1 trillion times as massive as the sun.

A dark galaxy in the Virgo cluster, VCC 1287, and another in Coma, Dragonfly 17, each have a total mass of about 70 billion to 90 billion suns. But no more than about one-thousandth of that is in stars. The rest is dark matter. That puts the total masses of these two galaxies on par with the Large Magellanic Cloud, the largest of the satellite galaxies that orbit the Milky Way. But focus on just the mass of the stars, and the Large Magellanic Cloud is about 35 times as massive as Dragonfly 17 and roughly 100 times as massive as VCC 1287.
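Restated as rough numbers, those figures imply the following back-of-the-envelope values (simply a rearrangement of the masses quoted above, not new measurements):

```latex
% Back-of-the-envelope restatement of the masses quoted above
M_{\mathrm{total}} \approx 7\text{--}9 \times 10^{10}\, M_\odot,
\qquad
M_{\mathrm{stars}} \lesssim 10^{-3}\, M_{\mathrm{total}}
\approx 7\text{--}9 \times 10^{7}\, M_\odot .
```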

A galaxy dubbed Dragonfly 44, however, is another story. It’s a dark beast, weighing about as much as the entire Milky Way and made almost entirely of dark matter, van Dokkum and colleagues report in September in Astrophysical Journal Letters. “It’s a bit of a puzzle,” Beasley says. “If you look at simulations of galaxy formation, you expect to have many more stars.” For some reason, this galaxy came up short.
The environment may be to blame. A cluster like Coma grows over time by drawing in galaxies from the space around it. As galaxies fall into the cluster, they feel a headwind as they plow through the hot ionized gas that permeates the cluster. The headwind can strip gas from an incoming galaxy. But galaxies need gas to form stars, which are created when self-gravity crushes a blob of dust and gas until it turns into a thermonuclear furnace. If a galaxy falls into the cluster just as it is starting to make stars, this headwind might remove enough gas to prevent many stars from forming, leaving the galaxy sparsely populated.

Or maybe there’s something intrinsic to a galaxy that turns it dark. A volley of supernovas or a prolific burst of star formation might drive gas out of the galaxy. Nicola Amorisco of the Max Planck Institute for Astrophysics in Garching, Germany, and Abraham Loeb of the Harvard-Smithsonian Center for Astrophysics in Cambridge, Mass., suggest that ultradiffuse galaxies start off as small galaxies that spun rapidly as they formed. All galaxies rotate, but perhaps dark galaxies are a subset that twirl so fast that their stars and gas have spread out, turning them into diffuse blobs rather than star-building machines.

To test these and other ideas, astronomers are focused on two key pieces of information: the masses of these galaxies and their locations in the universe. Mass can help researchers distinguish between formation scenarios, such as whether dark galaxies are failed, Milky Way–like behemoths. A survey of other locales would indicate whether dark galaxies are unique to big clusters such as Coma, which would suggest that the environment plays a role in their creation. But if they turn up outside of clusters, isolated or with small groups of galaxies, then perhaps they’re just born that way.

There’s already a hint that dark galaxies depend more on nature than nurture. Yale astronomer Allison Merritt and colleagues reported in October online at arXiv.org that four ultradiffuse galaxies lurk in a small galactic gathering about 88 million light-years away, indicating that clusters aren’t the only place dark galaxies can be found. And van der Burg, in his survey of eight clusters, found that dark galaxies make up the same fraction of all galaxies in a cluster regardless of cluster mass — at least, for clusters weighing between 100 trillion and 1 quadrillion times the mass of the sun. In each cluster, about 0.2 percent of the stellar mass is tied up in the dark galaxies. Since all eight clusters host roughly the same relative number of dark galaxies, that suggests that there is something intrinsic about a galaxy that makes it dark, van der Burg says.

What this all means for understanding how galaxies form is hard to say. These cosmic specters might be an entirely new entity that will require new ideas about galaxy formation. Or they could be one page from the galaxy recipe book. Timing, location and luck might send some of our heavenly neighbors toward a bright future and force others to fade into the background. Perhaps dark galaxies are a mixed bag, the end result of many different processes going on in a variety of environments.

“I see no reason why the universe couldn’t make these things in many ways,” Abraham says. “Part of the fun over the next few years will be to figure out which is in play in any particular galaxy and what sort of objects the universe has chosen to make.”

What is clear is that as astronomers push to new limits — fainter, farther, smaller — the universe turns up endless surprises. Even in Coma, a locale that has been intensively studied for decades, there are still things to discover. “There’s just a ton of stuff out there that we’re going to find,” Abraham says. “But what that is, I don’t know.”

Why crested penguins lay mismatched eggs

In crested penguin families, moms heavily favor offspring No. 2 from the start, and a new analysis proposes why. The six or seven species of crested (Eudyptes) penguins practice the most extreme egg favoritism known among birds, says Glenn Crossin of Dalhousie University in Halifax, Canada.

Females that lay two eggs produce a runty first egg weighing 18 to 57 percent less than the second, with some of the greatest mismatches among erect-crested and macaroni penguins. Some Eudyptes species don’t even incubate the first egg; royal penguins occasionally push it out of the nest entirely.
Biologists have proposed benefits for the unusual behavior: A sacrificial first egg might mark a claim to a nesting spot or improve chances of one chick surviving predators. But those ideas haven’t held up, Crossin says. He and Tony Williams of Simon Fraser University in Burnaby, Canada, propose in the Oct. 12 Proceedings of the Royal Society B that egg favoritism is just a downside of an open-water, migratory lifestyle.
Among the 16 penguin species that lay two eggs, only the Eudyptes species evolved what’s called a pelagic life, spending their nonbreeding season mostly at sea and migrating, in some cases considerable distances, to breeding sites.

Female crested penguins tend to lay their first eggs soon after arriving at a breeding site, meaning that the egg must have started its roughly 16-day development while mom was migrating. The biology of long swims, now encoded genetically, interferes with producing a full-sized egg. A puny first egg might just be a sign that mom is trying to do two things at once, Crossin says.

Year in review: How humans populated the globe

No paper or digital trails document ancient humans’ journey out of Africa to points around the globe. Fortunately, those intrepid travelers left a DNA trail. Genetic studies released in 2016 put a new molecular spin on humans’ long-ago migrations. These investigations also underscore the long trek ahead for scientists trying to reconstruct Stone Age road trips.

“I’m beginning to suspect that the ancient out-of-Africa process was complex, involving several migrations and subsequent extinctions,” says evolutionary geneticist Carles Lalueza-Fox of the Institute of Evolutionary Biology in Barcelona.
Untangling those comings, goings and dead ends increasingly looks like a collaborative job for related lines of evolutionary research — comparisons of DNA differences across populations of present-day people, DNA samples retrieved from the bones of ancient hominids, archaeological evidence, fossil finds and studies of ancient climates. It’s still hard to say when the clouds will part and a clear picture of humankind’s journey out of Africa will appear. Consider four papers published in October that featured intriguing and sometimes contradictory results.

Three new studies expanded the list of present-day populations whose DNA has been analyzed. The results suggest that most non-Africans have inherited genes from people who left Africa in a single pulse between about 75,000 and 50,000 years ago (SN: 10/15/16, p. 6). One team, studying DNA from 142 distinct human populations, proposed that African migrants interbred with Neandertals in the Middle East before splitting into groups that headed into Europe or Asia. Other scientists whose dataset included 148 populations concluded that a big move out of Africa during that time period erased most genetic traces of a smaller exodus around 120,000 years ago. A third paper found that aboriginal Australians and New Guinea’s native Papuans descend from a distinctive mix of Eurasian populations that, like ancestors of other living non-Africans, trace back to Africans who left their homeland around 72,000 years ago.

The timing of those migrations may be off, however. A fourth study, based on climate and sea level data, identified the period from 72,000 to 60,000 years ago as a time when deserts largely blocked travel out of Africa. Computer models suggested several favorable periods for intercontinental travel, including one starting around 59,000 years ago. But archaeological finds suggest that humans had already spread across Asia by that time.
Clashing estimates of when ancient people left Africa should come as no surprise. To gauge the timing of these migrations, scientists have to choose a rate at which changes in DNA accumulate over time. Evolutionary geneticist Swapan Mallick of Harvard Medical School and the other authors of one of the new genetics papers say that the actual mutation rate could be 30 percent higher or lower than the mutation rate they used. Undetermined levels of interbreeding with now-extinct hominid species other than Neandertals may also complicate efforts to retrace humankind’s genetic history (SN: 10/15/16, p. 22), as would mating between Africans and populations that made return trips.
“This can be clarified, to some extent, with genetic data from ancient people involved in out-of-Africa migrations,” says Lalueza-Fox. So far, though, no such data exist.

The uncertainty highlights the need for more archaeological evidence. Though sites exist in Africa and Europe dating from more than 100,000 years ago to 10,000 years ago, little is known about human excursions into the Arabian Peninsula and the rest of Asia. Uncovering more bones, tools and cultural objects will help fill in the picture of how humans traveled, and what key evolutionary transitions occurred along the way.

Mallick’s team has suggested, for example, that symbolic and ritual behavior mushroomed around 50,000 years ago, in the later part of the Stone Age, due to cultural changes rather than genetic changes. Some archaeologists have proposed that genetic changes must have enabled the flourishing of personal ornaments and artifacts that might have been used in rituals. But comparisons of present-day human DNA to that of Neandertals and extinct Asian hominids called Denisovans don’t support that idea. Instead, another camp argues, humans may have been capable of these behaviors some 200,000 years ago.

Nicholas Conard, an archaeologist at the University of Tübingen in Germany, approaches the findings cautiously. “I do not assume that interpretations of the genetic data are right,” he says. Such reconstructions have been revised and corrected many times over the last couple of decades, which is how “a healthy scientific field moves forward,” Conard adds. Collaborations connecting DNA findings to archaeological discoveries are most likely to produce unexpected insights into where we come from and who we are.

Motherhood might actually improve memory

You may have read the news this week that pregnancy shrinks a mother’s brain. As a mom-to-be’s midsection balloons, areas of her cerebral cortex wither, scientists reported online December 19 in Nature Neuroscience.

Yes, that sounds bad. But don’t fret. As I learned in reporting that story, a smaller brain can be more efficient and specialized. In fact, post-pregnancy brains could be considered evolutionary works of art, perfectly sculpted to better respond to their babies. The researchers found that the brain regions most changed during pregnancy are the ones that fire up when mothers see pictures of their babies. Pregnancy (and possibly childbirth) may make these neural networks sleeker and stronger, helping moms to tune in to their infants.

As someone whose brain has shriveled at least one time, maybe twice (scientists don’t know if the brain keeps getting smaller with subsequent pregnancies), I find it fascinating to think about this remodeling. The lingering question, however, is whether those brain changes relate to a mother’s smarts. The world abounds with anecdotal attacks of baby brain and placenta dementia (a name that both entertains and offends me), but are the conditions real? Do pregnant women and new moms really turn into forgetful, bumbling idiots?

Study coauthor Elseline Hoekzema, a neuroscientist at Leiden University in the Netherlands, says that the data on this are fuzzy. “It is not well-established whether there are objective changes in memory as a result of pregnancy,” she says. Some studies find effects, while others find none. Research round-ups indicate that certain kinds of memory may be affected, leaving others unscathed. In their study of 25 first-time mothers, Hoekzema and her colleagues didn’t find any memory changes from pre-pregnancy to the months after they gave birth. This study didn’t test the women while they were pregnant, though.

But there are signs of memory slips during pregnancy and the immediate aftermath in both people and animals, says neuroscientist Liisa Galea of the University of British Columbia in Vancouver. Those results vary depending on trimester, fetal sex and other factors, she says. My first thought on hearing those results was, “of course.” Anyone forced to sleep in two-hour increments for months at a time will have trouble remembering things. But Galea says that extreme exhaustion can’t account for the deficits.

Lest mothers despair, Galea pointed me to other research by her and others indicating that, after this early rough spell, motherhood may actually make the brain stronger. In a maze test, first-time rat mothers that were no longer nursing their babies actually outperformed rats that had never given birth. And rats that had been pregnant multiple times outperformed non-mother rats on a different memory test, Galea says.

What’s more, motherhood may help keep the brain young. When tested at the ripe old age of 24 months, rats that had given birth earlier in life performed better on tests of learning and memory than rats that had not given birth. Those results suggest that something about motherhood — perhaps the stew of hormones and the brain changes that follow — may actually protect the brain as it ages.

Despite the spotty scientific literature on these sorts of changes in women, Galea thinks the evidence suggests that there’s a temporary dip in memory during pregnancy and the early postpartum period, followed by not just a recovery, but an actual improvement. “Pregnancy and motherhood are dramatic life-changing events that can have long-lasting repercussions in the brain,” she says. And it’s quite likely that some of those repercussions might be good.

Earliest galaxies got the green light

GRAPEVINE, TEXAS — Green was all the rage a couple of billion years after the Big Bang.

Galaxies in the early universe blasted out a specific wavelength of green light, researchers reported January 7 at a meeting of the American Astronomical Society. It takes stars much hotter than most stars found in the modern universe to make that light. The finding offers a clue to what the earliest generation of stars might have been like (SN: 10/1/16, p. 25).
Some nearby galaxies and nebulas produce a little bit of this hue today. But these early galaxies, seen as they were roughly 11 billion years ago, produce an overwhelming amount. “Everybody was doing it,” said Matthew Malkan, an astrophysicist at UCLA. “It seems like all galaxies started this way.”

Malkan and colleagues used the United Kingdom Infrared Telescope in Hawaii and the Spitzer Space Telescope to collect the light from over 5,000 galaxies. They found that, in all of these galaxies, one wavelength of green light — now stretched to infrared by the expansion of the universe — was twice as bright as the light from the typical mix of stars and gas seen in galaxies today.
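To get a feel for that stretching: light that has traveled roughly 11 billion years was emitted at a redshift of roughly 2 to 3 (the exact value depends on the cosmological model assumed). Taking the green line of doubly ionized oxygen near 500.7 nanometers — the line described below — and a representative redshift of about 2.3, the observed wavelength lands in the near-infrared:

```latex
% Illustrative redshift arithmetic (assumed values: the doubly ionized
% oxygen line at 500.7 nm and a representative redshift z ~ 2.3 for
% light emitted about 11 billion years ago)
\lambda_{\mathrm{obs}} = \lambda_{\mathrm{emit}}\,(1 + z)
\approx 500.7\ \mathrm{nm} \times 3.3
\approx 1.7\ \mu\mathrm{m}.
```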

The green light comes from oxygen atoms that have lost two of their electrons. To knock off two electrons requires harsh ultraviolet radiation, possibly from lots of extremely hot stars — each roughly 50,000° Celsius. The sun, by comparison, is about a paltry 5,500° C at its surface.

“Stars must have been much hotter than the most energetic stars familiar to us today,” said Malkan. How they got so hot — perhaps via exotic chemical abundances or just piling on lots of mass — is unsettled.

Real-life adventure tale details search for legendary city

Legend has it that hundreds of years ago, a rich, powerful city stood in the jungle of what is now eastern Honduras. Then, suddenly, all of the residents vanished, and the abandoned city became a cursed place — anyone who entered risked death.

In a captivating real-life adventure tale, journalist and novelist Douglas Preston argues that the legend is not complete fiction. The Lost City of the Monkey God is his firsthand account of the expedition that uncovered the sites of at least two large cities, along with other settlements, that may form the basis of the “White City” myth. Even the curse might be rooted in reality.
Stories of the White City, so named because it was supposedly built of white stone, trace back to the Spanish conquistadores of the 16th century, Preston explains. These stories enthralled filmmaker Steve Elkins, who set out in the mid-1990s to uncover the truth. Finding the ruins of an ancient culture in one of the most remote parts of Central America would require a combination of high-tech remote sensing, old-fashioned excavation and persistence.

Elkins enlisted the help of experts who used satellite images and lidar to find potential targets to explore. Lidar involves shooting laser pulses from above to sketch out the contours of a surface, even a thickly vegetated one. The resulting maps revealed outlines of human-made structures in several locations. Preston deftly explains the science behind this work and makes it exciting (being crammed into a small, rickety plane for hours on end requires its own kind of bravery).
By 2015, archaeologists, accompanied by a film crew and Preston, hit the ground to investigate. They weren’t disappointed. The team uncovered an earthen pyramid, other large mounds, a plaza, terraces, canals, hundreds of ornate sculptures and vessels, and more. These discoveries are providing clues to the identity of the people who lived there and what happened to them. What’s clear is that they belonged to a culture distinct from their Maya neighbors.
This culture probably prospered for several hundred years, perhaps longer, before vanishing around 1500. Drawing on historical evidence, Preston argues that disease brought by Europeans was the culture’s downfall. A series of epidemics, perhaps smallpox, may have prompted people to desert the area, inspiring the myth’s curse.
The expedition did not escape this curse. Preston and others brought back a parasitic infection known as leishmaniasis. Preston devotes the last quarter of the book to detailing his and others’ struggle to deal with this potentially fatal disease.

The Lost City of the Monkey God is at its best when Preston recounts his time in the field. He presents an unglorified look at doing fieldwork in a rainforest, contending with venomous snakes, hordes of biting pests and relentless rain and mud. He also offers a window into the politics of science, with a frank appraisal of the criticism and skepticism this unconventional expedition (paid for by a filmmaker) faced.

Much of the book is a thrill to read, but by the end, it takes a more somber tone. The “White City” faces threats of looting and logging. And researchers who go there risk contracting disease. Some readers may wonder whether the discovery was worth it. Perhaps some mysteries are better left unsolved.

Oxygen atoms from Earth bombard the moon

Life on Earth may have made its mark on the moon billions of years before Neil Armstrong’s famous first step.

Observations by Japan’s moon-orbiting Kaguya spacecraft suggest that oxygen atoms from Earth’s upper atmosphere bombard the moon’s surface for a few days each month. This oxygen onslaught began in earnest around 2.4 billion years ago when photosynthetic microbes first flourished (SN Online: 9/8/15), planetary scientist Kentaro Terada of Osaka University in Japan and colleagues propose January 30 in Nature Astronomy.

The oxygen atoms begin their incredible journey in the upper atmosphere, where they are ionized by ultraviolet radiation, the researchers suggest. Electric fields or plasma waves accelerate the oxygen ions into the magnetic cocoon that envelops Earth. One side of that magnetosphere stretches away from the sun like a flag in the wind. For five days each lunar cycle, the moon passes through the magnetosphere and is barraged by earthly ions, including oxygen.

Based on Kaguya’s measurements of this space-traveling oxygen in 2008, Terada and colleagues estimate that at least 26,000 oxygen ions per second hit each square centimeter of the lunar surface during the five-day period. The uppermost lunar soil may, therefore, preserve bits of Earth’s ancient atmosphere, the researchers write, though determining which atoms blew over from Earth or the sun would be difficult.
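Taking the reported lower bound at face value, the dose delivered during each five-day passage through the magnetosphere follows from simple unit conversion (five days is 432,000 seconds):

```latex
% Dose per five-day passage, using the 26,000 ions/s/cm^2 lower bound
26{,}000\ \mathrm{ions\,cm^{-2}\,s^{-1}} \times (5 \times 86{,}400\ \mathrm{s})
\approx 1.1 \times 10^{10}\ \mathrm{ions\ per\ cm^{2}\ per\ lunar\ cycle}.
```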

A diet of corn turns wild hamsters into cannibals

The first sign that something was wrong was that the female hamsters were really active in their cages. These were European hamsters, a species that is endangered in France and thought to be on the decline in the rest of their Eurasian range. But in a lab at the University of Strasbourg in France, the hamsters were oddly aggressive, and they didn’t give birth in their nests.

Mathilde Tissier, a conservation biologist at the University of Strasbourg, remembers seeing the newly born pups alone, spread around in the cages, while their mothers ran about. Then, the mother hamsters would take their pups and put them in the piles of corn they had stored in the cage, Tissier says, and eat their babies alive.

“I had some really bad moments,” she says. “I thought I had done something wrong.”

Tissier and her colleagues had been looking into the effect of wheat- and corn-based diets in European hamsters because the rodent’s population in France was quickly disappearing. It now numbers only about 1,000 animals, most of which live in farm fields. The hamsters, being burrowers, are important for the local ecosystem and can promote soil health. But more than that, they’re an umbrella species, Tissier notes. Protect them, and their habitat, and there will be benefits for the many other farmland species that are declining.

A typical corn field is some seven times larger than the home range for a female hamster, so the animals that live in these agricultural areas eat mostly corn — or whatever other crop is growing in that field. But not all crops provide the same level of nutrition, and Tissier and her colleagues were curious about how that might affect the hamsters. Perhaps there would be differences in litter size or pup growth, they surmised. So they began an experiment, feeding hamsters wheat or corn in the lab, with either clover or earthworms to better reflect the animals’ normal, omnivorous diets.

“We thought [the diets] would create some [nutritional] deficiencies,” Tissier says. But instead, Tissier and her colleagues saw something very different. All the female hamsters were able to successfully reproduce, but those fed corn showed abnormal behaviors before giving birth. They then gave birth outside their nests and most ate their young on the first day after birth. Only one female weaned her pups, though that didn’t have a happy ending either — the two brothers ate their female siblings, Tissier and her colleagues report January 18 in the Proceedings of the Royal Society B.

Tissier spent a year trying to figure out what was going on. Hamsters and other rodents will eat their young, but usually only when a baby has died and the mother wants to keep her nest clean. They don’t normally eat healthy babies alive. The researchers reared more hamsters in the lab, this time supplementing the corn and earthworm diet with a solution of niacin. This time, the hamsters raised their young normally, not as a snack.

Unlike wheat, corn lacks a number of micronutrients, including niacin. In people who subsist on a diet of mostly corn, that niacin deficiency can result in a disease called pellagra. The disease emerged in the 1700s in Europe after corn became a dietary staple. People with pellagra experienced horrible rashes, diarrhea and dementia. Until the disease’s cause was identified in the mid-20th century, millions of people suffered and thousands died. (The Mesoamericans who domesticated corn largely did not have this problem because they processed corn with a technique called nixtamalization, which frees bound niacin in corn and makes it available as a nutrient. The Europeans who brought corn back to their home countries didn’t bring back this process.)

The European hamsters fed corn-based diets exhibited symptoms similar to pellagra, and this is probably happening in the wild, Tissier says. She notes that officials with the French National Office for Hunting and Wildlife have seen hamsters in the wild subsisting on mostly corn and eating their pups.

Tissier and her colleagues are now working to find ways to improve diversity in agricultural systems, so that hamsters — and other creatures — can eat a more well-balanced diet. “The idea is not only to protect the hamster,” she says, “but to protect the entire biodiversity and to restore good ecosystems, even in farmland.”

Speech recognition has come a long way in 50 years

Computers that hear

Computer engineers have dreamed of a machine that would translate speech into something that a vacuum tube or transistor could understand. Now at last, some promising hardware is being developed…. It is still a long way from the kind of science fiction computer that can understand sentences or long speeches. — Science News, March 4, 1967

Update
That 1967 device knew the words one through nine. Earlier speech recognition devices sliced a word into segments and analyzed them for absolute loudness. But this machine, developed by Genung L. Clapper at IBM, identified the volume of a pitch segment compared with its neighbors to account for the variability of human speech. Today’s speech recognition goes much further, dividing words into distinct units of sound and syntax. The software decodes speech by applying pattern recognition and a statistical method called the hidden Markov model to the sounds. We rely on speech recognition to open an app to order groceries or to send a text to ask someone at home if we need more milk. Hello, Siri.
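To make the hidden Markov model idea concrete, here is a minimal sketch of Viterbi decoding, the standard way to recover the most likely sequence of hidden sound units from a string of observed acoustic labels. The phoneme-like states, observation labels and probabilities are toy values invented for illustration, not parameters of any real recognizer.

```python
# Minimal illustration of hidden Markov model (HMM) decoding with the
# Viterbi algorithm, the statistical method mentioned above. All states,
# observations and probabilities are toy values invented for illustration.
import math

states = ["w", "ah", "n", "t", "uw"]          # toy phoneme-like states
start_p = {"w": 0.5, "t": 0.5, "ah": 0.0, "n": 0.0, "uw": 0.0}
trans_p = {                                    # P(next state | state)
    "w":  {"ah": 1.0},
    "ah": {"n": 1.0},
    "n":  {"n": 0.2, "w": 0.4, "t": 0.4},      # allow word boundaries
    "t":  {"uw": 1.0},
    "uw": {"uw": 0.3, "w": 0.35, "t": 0.35},
}
emit_p = {                                     # P(acoustic label | state)
    "w":  {"low": 0.7, "mid": 0.2, "high": 0.1},
    "ah": {"low": 0.2, "mid": 0.6, "high": 0.2},
    "n":  {"low": 0.5, "mid": 0.4, "high": 0.1},
    "t":  {"low": 0.1, "mid": 0.2, "high": 0.7},
    "uw": {"low": 0.3, "mid": 0.5, "high": 0.2},
}

def viterbi(obs):
    """Return the most probable hidden state sequence for the observed labels."""
    # best[t][s] = (log prob of the best path ending in state s at time t, backpointer)
    best = [{s: (math.log(start_p[s] or 1e-12)
                 + math.log(emit_p[s].get(obs[0], 1e-12)), None)
             for s in states}]
    for t in range(1, len(obs)):
        col = {}
        for s in states:
            col[s] = max(
                ((best[t - 1][prev][0]
                  + math.log(trans_p[prev].get(s, 1e-12))
                  + math.log(emit_p[s].get(obs[t], 1e-12)), prev)
                 for prev in states),
                key=lambda x: x[0])
        best.append(col)
    # Trace back from the most probable final state.
    last = max(states, key=lambda s: best[-1][s][0])
    path = [last]
    for t in range(len(obs) - 1, 0, -1):
        last = best[t][last][1]
        path.append(last)
    return list(reversed(path))

print(viterbi(["low", "mid", "low"]))  # likely decodes to ['w', 'ah', 'n']
```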

Nudging people to make good choices can backfire

Nudges are a growth industry. Inspired by a popular line of psychological research and introduced in a best-selling book a decade ago, these inexpensive behavior changers are currently on a roll.

Policy makers throughout the world, guided by behavioral scientists, are devising ways to steer people toward decisions deemed to be in their best interests. These simple interventions don’t force, teach or openly encourage anyone to do anything. Instead, they nudge, exploiting for good — at least from the policy makers’ perspective — mental tendencies that can sometimes lead us astray.

But new research suggests that low-cost nudges aimed at helping the masses have drawbacks. Even simple interventions that work at first can lead to unintended complications, creating headaches for nudgers and nudgees alike.

Nudge proponents, an influential group of psychologists and economists known as behavioral economists, follow a philosophy they dub libertarian paternalism. This seemingly contradictory phrase refers to a paternalistic desire to promote certain decisions via tactics that preserve each person’s freedom of choice. Self-designated “choice architects” design nudges to protect us from inclinations that might not serve us well, such as overconfidence, limited attention, a focus on now rather than later, the tendency to be more motivated by losses than gains and intuitive flights of fancy.

University of Chicago economist Richard Thaler and law professor Cass Sunstein, now at Harvard University, triggered this policy movement with their 2008 book Nudge. Thaler and Sunstein argued that people think less like an economist’s vision of a coldly rational, self-advancing Homo economicus than like TV’s bumbling, doughnut-obsessed Homer Simpson.
Choice architects like to prod with e-mail messages, for example, reminding a charity’s past donors that it’s time to give or telling tardy taxpayers that most of their neighbors or business peers have paid on time. To nudge healthier eating, these architects redesign cafeterias so that fruits and vegetables are easier to reach than junk food.

A popular nudge tactic consists of automatically enrolling people in organ-donation programs and retirement savings plans while allowing them to opt out if they want. Until recently, default choices for such programs left people out unless they took steps to join up. For organ donation, the nudge makes a difference: Rates of participation typically exceed 90 percent of adults in countries with opt-out policies and often fall below 15 percent in opt-in countries, which require explicit consent.

Promising results of dozens of nudge initiatives appear in two government reports issued last September. One came from the White House, which released the second annual report of its Social and Behavioral Sciences Team. The other came from the United Kingdom’s Behavioural Insights Team. Created by the British government in 2010, the U.K. group is often referred to as the Nudge Unit.

In a September 20, 2016, Bloomberg View column, Sunstein said the new reports show that nudges work, but often increase by only a few percentage points the number of people who, say, receive government benefits or comply with tax laws. He called on choice architects to tackle bigger challenges, such as finding ways to nudge people out of poverty or into higher education.

Missing from Sunstein’s comments and from the government reports, however, was any mention of a growing conviction among some researchers that well-intentioned nudges can have negative as well as positive effects. Accepting automatic enrollment in a company’s savings plan, for example, can later lead to regret among people who change jobs frequently or who realize too late that a default savings rate was set too low for their retirement needs. E-mail reminders to donate to a charity may work at first, but annoy recipients into unsubscribing from the donor list.

“I don’t want to get rid of nudges, but we’ve been a bit too optimistic in applying them to public policy,” says behavioral economist Mette Trier Damgaard of Aarhus University in Denmark.

Nudges, like medications for physical ailments, require careful evaluation of intended and unintended effects before being approved, she says. Policy makers need to know when and with whom an intervention works well enough to justify its side effects.

Default downer
That warning rings especially true for what is considered a shining star in the nudge universe — automatic enrollment of employees in retirement savings plans. The plans, called defaults, take effect unless workers decline to participate.

No one disputes that defaults raise participation rates in retirement programs compared with traditional plans that require employees to sign up on their own. But the power of opt-out plans to kick-start saving for retirement stayed under the radar until it was reported in the November 2001 Quarterly Journal of Economics.

When the company in the 2001 study — a health and financial services firm with more than 10,000 employees — switched from voluntary to automatic enrollment in a retirement savings account, employee participation rose from about 37 percent to nearly 86 percent.

Similar findings over the next few years led to passage of the U.S. Pension Protection Act of 2006, which encouraged employers to adopt automatic pension enrollment plans with increasing savings contributions over time.

But little is known about whether automatic enrollees are better or worse off as time passes and their personal situations change, says Harvard behavioral economist Brigitte Madrian. She coauthored the 2001 paper on the power of default savings plans.

Although automatic plans increase savings for those who otherwise would have squirreled away little or nothing, others may lose money because they would have contributed more to a self-directed retirement account, Madrian says. In some cases, having an automatic savings account may encourage irresponsible spending or early withdrawals of retirement money (with penalties) to cover debts. Such possibilities are plausible but have gone unstudied.

In line with Madrian’s concerns, mathematical models developed by finance professor Bruce Carlin of the University of California, Los Angeles and colleagues suggest that people who default into retirement plans learn less about money matters, and share less financial information with family and friends, than those who join plans that require active investment choices.

Opt-out savings programs “have been oversimplified to the public and are being sold as a great way to change behavior without addressing their complexities,” Madrian says. Research needs to address how well these plans mesh with individuals’ personalities and decision-making styles, she recommends.
Delay and regret
By comparing procrastinators with more decisive folks in one large retirement system, economist Jeffrey Brown examined how individual differences influence whether people join and stay happy with opt-out savings programs. Procrastinators were not only more likely to end up in a default plan but also more apt to regret that turn of events down the road, says Brown, of the University of Illinois at Urbana-Champaign.

Among state employees at the university who were offered any of three retirement plans, those who delayed making decisions were particularly likely to belong to a default plan and to want to switch to another plan, Brown and colleagues reported in September 2016 in the Journal of Financial Economics. These plans serve as a substitute for Social Security and often represent an employee’s largest financial asset. The default plan is generous toward those who stay long enough to retire from the state system but less so to those who leave early. A second plan allows for a larger cash refund upon leaving the system early. A third plan enables savers to direct contributions to any of a variety of investments. Being dumped into the default plan isn’t always the best option, especially because initial plan choices are permanent.

More than 6,000 employees who joined the retirement system in or after 1999 completed e-mail questionnaires in 2012. When asked what they would do if they could go back and redo their savings choice, 17 percent of defaulters reported a strong desire to change plans. Only about 7 percent of those who actively selected a plan and 8 percent of those who intentionally chose the default wanted to change.

The likelihood of having been assigned to the default plan and wanting to switch to another plan increased steadily as employees reported higher levels of procrastination. Implications of this finding are not entirely clear, Madrian says. Individuals who end up in the default savings plan, whether by choice or through procrastination, may, for instance, regret lots of events in their lives. If so, they can’t easily be compared with less regretful folks who chose another plan.

Requiring people to make an active choice of a retirement plan, even if they’re procrastinators, might reduce regret down the road, Madrian suspects. But given a complex, high-stakes choice — such as that faced by Illinois university employees — “it may still make sense to set a default option even if some individuals who end up in the default will regret it later.”

Researchers need to determine how defaults and other nudges instigate behavior changes before unleashing them on the public, says philosopher of science Till Grüne-Yanoff of the Royal Institute of Technology in Stockholm.
Hidden costs
Sometimes well-intentioned, up-front attempts to get people to do what seems right come back to bite nudgers on the bottom line.

Consider e-mail prompts and reminders. Although nudges were originally conceived to encourage people to accept an option unthinkingly, simple attempts to curb forgetfulness and explain procedures now get folded into the nudge repertoire. Short-term success stories abound for these inexpensive messages. The 2016 report of the U.S. Social and Behavioral Sciences Team cites a case in which e-mails sent by the Department of Education to student-loan recipients, which described how to apply for a federal repayment plan, led 6,000 additional borrowers to sign up for the plan in the following three months, relative to borrowers who did not receive the explanatory e-mail. Messages were tailored to borrowers’ circumstances, such as whether they previously expressed interest in the payback plan or had stopped making loan repayments.

The U.K. Behavioural Insights Team — now a global company with offices in Britain, North America, Australia and Singapore — also sees value in short, informational nudges.

One of the company’s projects produced an unexpected twist. Low-income New Orleans residents who hadn’t seen a primary care physician in more than two years — 21,442 of them — received one of three text messages to set up a free medical appointment. Telling people that they had been selected for a free appointment worked best, leading 1.4 percent of recipients to sign up, versus 1 percent of those who got an information-only text. But a text asking people to “take care of yourself so you can take care of the ones you love” backfired, resulting in only 0.7 percent of recipients making appointments. Uptake for all three groups was low, but the study suggested that nudges that unwittingly trigger bad feelings (guilt or shame) can easily go awry, Aarhus University’s Damgaard says.
A case in point is a study submitted for publication by Damgaard and behavioral economist Christina Gravert of the University of Gothenburg in Sweden. E-mailed donation reminders sent to people who had contributed to a Danish anti-poverty charity increased the number of donations in the short term, but also triggered an upturn in the number of people unsubscribing from the list.

People’s annoyance at receiving reminders perceived as too frequent or pushy cost the charity money over the long haul, Damgaard holds. Losses of list subscribers more than offset the financial gains from the temporary uptick in donations, she and Gravert conclude.

“Researchers have tended to overlook the hidden costs of nudging,” Damgaard says.

In one experiment, more than 17,000 previous donors to a Danish charity received an e-mail asking them to donate within 10 days. About half received an additional reminder one week later. Reminders yielded 46 donations, versus 30 donations from people sent only one e-mail. But over the next month, 318 reminded donors unsubscribed from the e-mail list, as opposed to 186 of those who received one e-mail. To Damgaard and Gravert, reminders were money losers — especially if sent more than once.

A second experiment examined more than 43,000 Danish charity donors split into three groups. The number of unsubscribers reached 71 among those sent an e-mail informing them that digital reminders would be sent every month. Among those receiving the same e-mail plus an announcement that only one reminder would be sent in the next three months, 44 people abandoned the mailing list. That’s what a digital sigh of relief looks like. An e-mail that combined a notice of monthly reminders with a promise of a donation from an anonymous sponsor for every mailing list donation slightly lowered annoyance at the prospect of monthly reminders — 52 unsubscribed.

The limits of nudge
There are at least two ways to think about unintended drawbacks to nudges. Behavioral economists including Damgaard take an optimistic stance. They see value in determining how nudges work over the long haul, for better and worse. In that way, researchers can target people most likely to benefit from specific nudges. Few schemes to change behavior, including nudges, alter people’s lives for the better in a big and lasting way, cautions Harvard behavioral economist and nudge proponent Todd Rogers. “One of the most important questions in behavioral science right now is, how do we induce persistent behavior change?”

But those already critical of libertarian paternalism say that new findings back up their pessimistic view of what nudges can do. When past charity donors in Denmark fork over more money in response to an e-mail reminder and then bolt from the mailing list, as reported by Damgaard and Gravert, they’re demonstrating that even a small-scale nudge can trigger resistance, says political scientist Frank Mols of the University of Queensland in Brisbane, Australia. “It verges on ridiculous to claim that nudges can change attitudes or behavior related to huge social problems, such as crime or climate change,” he says.

Nudges wrongly assume that each person makes decisions in isolation, Mols contends. People belong to various groups that frame the way they make sense of the world, he says. Rather than nudging, lasting behavior change entails persuasion techniques long exploited by advertisers: altering how people view their social identities. Coors beer, for instance, has long been marketed to small-town folks and city dwellers alike as the choice of rugged, outdoorsy individualists.
In the noncommercial realm, Mols points to a successful 2006 campaign to reduce water use in Queensland during a severe drought. Average per capita water use dropped substantially and stayed lower after the drought broke in 2009. That’s because the campaign included advertisements targeting citizens’ view of themselves as “Queenslanders,” he says. A good Queenslander became redefined as a “water-wise” person who consumed as little of the resource as possible.

Queensland’s persuasive approach to water conservation avoided ethical concerns that dog nudges, Mols adds. Choice architects’ conviction that people possess biased minds in need of expert guidance to achieve good lives cuts off debate about what constitutes a good life, he argues.

Elspeth Kirkman, a policy implementation specialist who heads the U.K. Behavioural Insights Team’s North American office in New York City, sees no ethical problem with nudges that people can reject anytime they want. But she acknowledges that ethical gray areas exist. “It’s not always clear when an intervention is a nudge and when it’s coercive manipulation,” she says. Nudge carefully and monitor an intervention’s intended and unintended effects for as long as possible, Kirkman advises.

Even amid calls for caution, nudges are expanding their reach. With input from the Behavioural Insights Team, a U.K. law passed in March 2016 and slated to take effect in April 2018 imposes a soft drink tax that rises with increasing sugar content. The law aims to encourage soft drink companies to switch from high-sugar products to artificially sweetened and low-sugar beverages in an effort to reduce obesity. The U.K. soft drink firm Lucozade Ribena Suntory and the retail company Tesco announced last November plans to cut sugar in soft drinks by at least 50 percent to escape the looming tax.

The law might prod consumers to change too, if companies stand their ground but raise prices of high-sugar drinks due to the new tax, Kirkman predicts.

In nudges as in life, though, the best-laid plans can tank. Perhaps scientists will discover serious health risks in artificial sweeteners currently considered safe, reviving soda makers’ sugar dependence. Maybe a black market for old-school soda will pop up in Britain, sending soft drink lovers to back-alley Coke dealers for sugar fixes.

The law of unintended consequences is always taxing.