The Zika virus, which can cause brain damage in infants infected in the womb, kills stem cells and depletes their numbers in the brains of adult mice, researchers report August 18 in Cell Stem Cell. Though scientists have considered Zika primarily a threat to unborn babies, the new findings suggest that the virus may cause unknown — and potentially long-term — damage to adults as well.
In adults, Zika has been linked to Guillain-Barré syndrome, a rare neurological disorder (SN: 4/2/16, p. 29). But for most people, infection is typically mild: a headache, fever and rash lasting up to a week, or no symptoms at all. In pregnant women, though, the virus can lodge in the brain of a fetus and kill off newly developing cells (SN: 4/13/16). If Zika targets newborn brain cells, adults may be at risk, too, reasoned neuroscientist Joseph Gleeson of Rockefeller University in New York City and colleagues. Parts of the forebrain and the hippocampus, which plays a crucial role in learning and memory, continue to generate nerve cells in adult brains.
In mice infected with Zika, the virus hit these brain regions hard. Nerve cells died and the regions generated one-fifth to one-half as many new cells compared with those of uninfected mice. The results might not translate to humans; the mice were genetically engineered to have weak immune systems, making them susceptible to Zika.
But Zika could potentially harm immunocompromised people and perhaps even healthy people in a similar way, the authors write.
A melting snow patch in Greenland has revealed what could be the oldest fossilized evidence of life on Earth. The 3.7-billion-year-old structures may help scientists retrace the rise of the first organisms relatively soon after Earth’s formation around 4.5 billion years ago (SN: 2/8/14, p. 16), the discoverers report online August 31 in Nature.
Unlike dinosaur bones, the new fossils are not preserved bits of an ancient critter. The Greenland fossils are mounds of minerals a few centimeters tall that may have been deposited by clusters of microbes several hundred million years after Earth formed. The shape and chemical composition of the mounds, called stromatolites, match those formed by modern bacterial communities living in shallow seawater, says a team led by geologist Allen Nutman of the University of Wollongong in Australia.
If confirmed, the fossils demonstrate that sophisticated, mound-building microbial life appeared early on in Earth’s history. That early start backs up previous genetic and chemical studies that place the advent of basic life on Earth before 4 billion years ago (SN Online: 10/19/15).
Aneil Agrawal, his rangy frame at ease on a black metal street bench, is staring into some midair memory and speaking about disgust.
“I was first exposed to the idea of theoretical biology as an undergraduate and I actually hated it,” he says. “I loved biology and I liked math — it was like two different food types that you like but the two of them together are going to be terrible.”
Since then, he has remained a fan of the two foods, and his distaste for combining them has turned into enthusiasm strong enough to build a career on. Agrawal, now a 41-year-old evolutionary geneticist at the University of Toronto, both builds mathematical descriptions of biological processes and leads what he describes as “insanely laborious” experiments with fruit flies, duckweed and microscopic aquatic animals called rotifers. Often experimentalists venturing into theory “dabble and do some stuff, but it’s not very good,” says evolutionary biologist Mark Kirkpatrick of the University of Texas at Austin. Agrawal, however, is “one of the few people who’s doing really good theory and really good experimental work.”
Two of the themes Agrawal works on — the evolution of sex and the buildup over time of harmful mutations — are “very deep and important problems in evolutionary biology,” Kirkpatrick says. Agrawal and colleagues have made a case for a once-fringe idea: that an abundance of harmful mutations can invite even more harmful mutations. Agrawal’s work has also provided rare data to support the idea that the need to adapt to new circumstances has favored sexual over asexual reproduction. Why sexual reproduction is much more common among complex life-forms has been a long-standing puzzle in biology.
Life’s complexity appealed to Agrawal from childhood; he remembers days playing among the backyard bugs and frogs in suburban Vancouver. At first, he imagined his grown-up life out in the field, “living in a David Attenborough show.” As he grew older, though, he discovered he was a lab animal: “I was more interested in being able to ask more precise questions under more controlled circumstances.”
Sally Otto, now president-elect of the Society for the Study of Evolution, met Agrawal in the 1990s when he was an undergraduate at the University of British Columbia in Vancouver. He returned to Vancouver in 2003, after earning his Ph.D., to do experimental work and “beef up his ability to do theory,” she says. She cosupervised his postdoctoral effort. Agrawal “picks up theory very quickly,” Otto says. Knowing a huge amount of math to begin with is less important than having insight into what math to learn. The first alluring ideas about how to approach a puzzle often don’t work out, she says, so “there’s a certain doggedness — you have to really keep at it.”
Agrawal needed some time before he came around to theoretical biology. It disgusted him, he says, because he expected it to take the rich variety out of biology. “The reason many people, including me, were attracted to biology was because it’s not boxes and triangles,” he says. “It’s complicated and interesting.” At first he thought modeling a biological process mathematically “sterilized it.” But he eventually found that mathematical description could “help to clarify our thinking about the wonderful mess of diversity that’s out there.”
At the street bench, Agrawal muses about how he tends to “think quantitatively.” His father has a Ph.D. in engineering, but “we weren’t the kind of family that had to do math problems at the dinner table.” He laughs. “Though I do that to my own kids.” His success so far is mixed, depending in part on whether he catches his two sons, ages 10 and 7, in the right mood. Agrawal also thinks intensely, possibly another secret to his success — he has received more than half a dozen awards and prizes, including the 2015 Steacie Prize for Natural Sciences.
The bench where we’ve settled is only half a block from the conference center in Austin, where Evolution 2016, the field’s biggest meeting of the year, has hit day four of its five-day marathon. Agrawal gave one of the first talks, a smooth, perfectly timed zoom through a recent fly experiment. He is a coauthor on five more presentations, along with chairing one of the frenetic sessions where talks are compressed into five minutes. By this point, many of the 1,800 or so attendees are showing strain — wearing name tags wrong side frontward, snoring open-mouthed in hallway chairs or flailing their arms in conversations fueled by way too much caffeine. Agrawal, however, seems relaxed, listening quietly, staring off in thought, speaking in quiet bursts. This guy can focus.
One of his early theory papers studied mutation accumulation. Previous work had suggested that microbes in stressful environments, compared with microbes lapped in luxury, are more likely to make mistakes in copying genes that then get passed on to the next generation. Agrawal wondered whether cells that are stressed for another reason — an already heavy burden of harmful mutations — would likewise be more inclined to build up additional mutations. He calls this scenario “a spiral of doom.”
The idea intrigued him because he suspected that sexual reproduction would do a better job of purging these mutations than asexual reproduction. “What I found in doing the theory was that I was exactly wrong,” he says. The sexual populations would end up with more, not fewer, mutations.
Though the theory part of the paper turned out well, the journal Genetics rejected it — there was hardly any experimental evidence that the scenario would arise in the real world.
Agrawal published the paper elsewhere in 2002 and, when he began setting up his own lab at the University of Toronto, he returned to the idea. In the years since, he and colleagues have published a string of papers adding evidence to the argument. They have found, for example, that fruit flies burdened with misbegotten genes lag in growth and struggle to keep their DNA in good repair. The idea is no longer airy speculation, says Charles Baer, who’s checking for mutation accumulation in nematodes at the University of Florida in Gainesville.
Chrissy Spencer, a postdoc during the early years of Agrawal’s mutation studies, points out that a vital skill of a good experimentalist is just knowing intuitively whether a species is right for a certain kind of test. Agrawal has that knack, for better and for worse. For some studies on the evolution of sex, Agrawal eventually turned to rotifers. The stubby little cylinders with a circlet of hairy projections around their mouths can reproduce either sexually or asexually, so they’re great for testing what factors favor one over the other. Rotifers, however, are also “finicky,” he says. His students have cared for them, sometimes for months, only to have them all die for no discernible reason, sometimes before generating any data.
Having the practitioner’s inside view of experiments and theory may help Agrawal, but it also has its costs. “There are better theoreticians out there and there are better experimentalists,” he says, and he wishes at times that he was more solidly in one camp or the other. He pauses and then, a biologist to the core, says: “That’s my niche.”
DENVER — Life on Earth got into the shell game more than 200 million years earlier than previously thought.
Fossilized eukaryotes — complex life-forms that include animals and plants — discovered in Canada are decked out in armorlike layers of mineral plates, paleobiologist Phoebe Cohen said September 27 at the Geological Society of America’s annual meeting. At about 809 million years old, the find is the oldest evidence of organisms controlling the formation of minerals, a process called biomineralization.
This new origin of biomineralization coincides with major changes that mark the end of a period known as the “boring billion” (SN: 11/14/15, p. 18), said Stanford University paleontologist Erik Sperling, who was not involved in the discovery. “There were big things going on with ocean chemistry,” he said. “It’s interesting to see the biological response.”
These ancient eukaryotes built their exoskeletons using a very different process from most modern shell-making microbes. That uniqueness offers insights into how the mineral-making ability first evolved, said Cohen, who studies ancient ecosystems at Williams College in Williamstown, Mass.
“We have been able to identify specific conditions that facilitated the evolution of the first eukaryote to biomineralize in Earth’s history,” she said. “It paints a beautiful picture of the ecology and evolution and environmental conditions that led to this dramatic innovation.”
Donning an exoskeleton of minerals protects microbes from predators, and such shells play a crucial part in the modern carbon cycle. The shells make marine microbes such as certain phytoplankton species sink faster after they die, removing carbon from the upper ocean. Previous clear evidence of eukaryote biomineralization dates back to around 560 million years ago in early corallike animals.
Odd fossils discovered in the late 1970s and covered in mineral plates shaped like circles, squares and “Honeycomb cereal” (as Cohen described them) hinted that the skill evolved much earlier, but the discovery raised many questions. Dating techniques then put the age of the fossils somewhere within a roughly 100-million-year window, from about 811 million to 717 million years ago, and scientists couldn’t rule out that the fossils’ scalelike minerals formed after the organisms died.
Cohen and colleagues revisited these curious fossils. By accurately dating the organic-rich shale a few meters below the fossils in the rock record, the researchers pegged the fossils’ age at 809 million years old, give or take a few million years. An electron microscope let researchers see that each plate is a weave of elongated mineral fibers. This intricate, orderly design had to have been purposefully built by life manipulating mineral formation, Cohen said.
The mineral plates themselves are odd. Most modern microbes make shells out of calcium carbonate, but the ancient shells are made of calcium phosphate, the same crystal used in human tooth enamel. Today, phosphate is limited in the environment and most microbes avoid wasting it.
That may not have been as much of an issue in the marine basin where the eukaryotes lived, the researchers found. Analysis of rocks surrounding the fossils indicates that the amount of oxygen in the waters where the eukaryotes lived fluctuated. Those swings in oxygen levels pulled phosphate from underlying sediment into the water, where it was available for mineral making. These favorable conditions plus the need for protection from predation (SN: 11/28/15, p. 13) probably drove the first evolution of biomineralization, Cohen said. Eventually the environment changed, and these shell-making species died out.
When small lies snowball into blizzards of deception, the brain becomes numb to dishonesty. As people tell more and bigger lies, certain brain areas respond less to the whoppers, scientists report online October 24 in Nature Neuroscience. The results might help explain how small transgressions can ultimately set pants aflame.
The findings “have big implications for how lying can develop,” says developmental psychologist Victoria Talwar of McGill University in Montreal, who studies how dishonest behavior develops in children. “It starts to give us some idea about how lying escalates from small lies to bigger ones.”
During the experiment, researchers from University College London and Duke University showed 80 participants a crisp, big picture of a glass jar of pennies. They were told that they needed to send an estimate of how much money was in the jar to an unseen partner who saw a smaller picture of the same jar. Each participant was serving as a “well-informed financial adviser tasked with advising a client who is less informed about what investments to make,” study coauthor Neil Garrett of University College London said October 20 during a news briefing. Researchers gave people varying incentives to lie. In some cases, for instance, intentionally overestimating the jar’s contents was rewarded with a bigger cut of the money.
As the experiment wore on, the fibs started flying. People lied the most when the lie would benefit both themselves and their unseen partner. But these “financial advisers” still told self-serving lies even when it would hurt their partner.
Twenty-five participants underwent fMRI scans while lying. When a person had lied previously, activity lessened in certain brain areas, most notably the amygdala. A pair of almond-shaped structures nestled deep in the brain, the amygdalae are tightly linked to emotions. This diminished amygdala activity could even predict whether a person would lie on the next trial, a result that suggests the reduced brain activity actually influences the decision to lie.
The study design gets around a problem that confounds other lying experiments, says neuroscientist Bernd Weber of the University of Bonn in Germany. Many experiments are based on lies that people have been instructed to say, a situation that “hardly resembles real-world behavior,” he says. Here, the participants were self-motivated liars.
Without any negative consequences from their lies, participants weren’t afraid of being caught. That impunity might affect activity in the amygdala, Weber says. Further experiments are needed to reveal the effects of such fear.
From Ponzi schemes to current politics, case studies abound of small lies spiraling into much bigger deceits, study coauthor Tali Sharot of University College London said in the news briefing. “There are many reasons why this might happen, societal reasons, but we suspected that there might be a basic biological principle of how our brain works that contributes to this phenomenon,” she said.
The principle she had in mind is called emotional adaptation — the same phenomenon that explains why the scent of strong perfume becomes less noticeable over time. The first time you cheat on your taxes, you’d probably feel quite bad about it, Sharot said. That bad feeling is good, because it curbs your dishonesty. “The next time you cheat, you have already adapted,” she said. “There’s less negative reaction to hold you back so you might be lying more.”
Narwhals use highly targeted beams of sound to scan their environment for threats and food. In fact, the so-called unicorns of the sea, nicknamed for the long tusks that jut from their heads, may produce the most refined sonar of any living animal.
A team of researchers set up 16 underwater microphones to eavesdrop on narwhal click vocalizations at 11 ice pack sites in Greenland’s Baffin Bay in 2013. The recordings show that narwhal clicks are extremely intense and highly directional, and that the animals can widen and narrow the beam of sound to find prey over long and short distances. It’s the most directional sonar signal measured in a living species, the researchers report November 9 in PLOS ONE.
The sound beams are also asymmetrically narrow on top. That minimizes clutter from echoes bouncing off the sea surface or ice pack. Finally, narwhals scan vertically as they dive, which could help them find patches of open water where they can surface and breathe amid sea ice cover. All this means that narwhals employ pretty sophisticated sonar.
The audio data could help researchers tell the difference between narwhal vocalizations and those of neighboring beluga whales. It also provides a baseline for assessing the potential impact of noise pollution from increases in shipping traffic made possible by sea ice loss.
No paper or digital trails document ancient humans’ journey out of Africa to points around the globe. Fortunately, those intrepid travelers left a DNA trail. Genetic studies released in 2016 put a new molecular spin on humans’ long-ago migrations. These investigations also underscore the long trek ahead for scientists trying to reconstruct Stone Age road trips.
“I’m beginning to suspect that the ancient out-of-Africa process was complex, involving several migrations and subsequent extinctions,” says evolutionary geneticist Carles Lalueza-Fox of the Institute of Evolutionary Biology in Barcelona.
Untangling those comings, goings and dead ends increasingly looks like a collaborative job for related lines of evolutionary research — comparisons of DNA differences across populations of present-day people, DNA samples retrieved from the bones of ancient hominids, archaeological evidence, fossil finds and studies of ancient climates. It’s still hard to say when the clouds will part and a clear picture of humankind’s journey out of Africa will appear. Consider four papers published in October that featured intriguing and sometimes contradictory results.
Three new studies expanded the list of present-day populations whose DNA has been analyzed. The results suggest that most non-Africans have inherited genes from people who left Africa in a single pulse between about 75,000 and 50,000 years ago (SN: 10/15/16, p. 6). One team, studying DNA from 142 distinct human populations, proposed that African migrants interbred with Neandertals in the Middle East before splitting into groups that headed into Europe or Asia. Other scientists whose dataset included 148 populations concluded that a big move out of Africa during that time period erased most genetic traces of a smaller exodus around 120,000 years ago. A third paper found that aboriginal Australians and New Guinea’s native Papuans descend from a distinctive mix of Eurasian populations that, like ancestors of other living non-Africans, trace back to Africans who left their homeland around 72,000 years ago.
The timing of those migrations may be off, however. A fourth study, based on climate and sea level data, identified the period from 72,000 to 60,000 years ago as a time when deserts largely blocked travel out of Africa. Computer models suggested several favorable periods for intercontinental travel, including one starting around 59,000 years ago. But archaeological finds suggest that humans had already spread across Asia by that time.
Clashing estimates of when ancient people left Africa should come as no surprise. To gauge the timing of these migrations, scientists have to choose a rate at which changes in DNA accumulate over time. Evolutionary geneticist Swapan Mallick of Harvard Medical School and the other authors of one of the new genetics papers say that the actual mutation rate could be 30 percent higher or lower than the mutation rate they used.
Undetermined levels of interbreeding with now-extinct hominid species other than Neandertals may also complicate efforts to retrace humankind’s genetic history (SN: 10/15/16, p. 22), as would mating between Africans and populations that made return trips. “This can be clarified, to some extent, with genetic data from ancient people involved in out-of-Africa migrations,” says Lalueza-Fox. So far, though, no such data exist.
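To see why the choice of mutation rate matters so much, consider a rough, illustrative calculation (a deliberate simplification; the published analyses model mutation accumulation in far more detail). In the simplest molecular-clock picture, the inferred time since two populations separated is

$$ T \approx \frac{D}{2\mu}, $$

where $D$ is the genetic divergence observed between the populations and $\mu$ is the assumed mutation rate. Because $T$ scales inversely with $\mu$, a true rate 30 percent lower than the one used stretches every inferred date by a factor of $1/0.7 \approx 1.43$, turning an estimated exodus 60,000 years ago into one about 86,000 years ago; a rate 30 percent higher compresses the same estimate to roughly 46,000 years ago.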
The uncertainty highlights the need for more archaeological evidence. Though sites exist in Africa and Europe dating from more than 100,000 years ago to 10,000 years ago, little is known about human excursions into the Arabian Peninsula and the rest of Asia. Uncovering more bones, tools and cultural objects will help fill in the picture of how humans traveled, and what key evolutionary transitions occurred along the way.
Mallick’s team has suggested, for example, that symbolic and ritual behavior mushroomed around 50,000 years ago, in the later part of the Stone Age, due to cultural changes rather than genetic changes. Some archaeologists have proposed that genetic changes must have enabled the flourishing of personal ornaments and artifacts that might have been used in rituals. But comparisons of present-day human DNA to that of Neandertals and extinct Asian hominids called Denisovans don’t support that idea. Instead, another camp argues, humans may have been capable of these behaviors some 200,000 years ago.
Nicholas Conard, an archaeologist at the University of Tübingen in Germany, approaches the findings cautiously. “I do not assume that interpretations of the genetic data are right,” he says. Such reconstructions have been revised and corrected many times over the last couple of decades, which is how “a healthy scientific field moves forward,” Conard adds. Collaborations connecting DNA findings to archaeological discoveries are most likely to produce unexpected insights into where we come from and who we are.
Legend has it that hundreds of years ago, a rich, powerful city stood in the jungle of what is now eastern Honduras. Then, suddenly, all of the residents vanished, and the abandoned city became a cursed place — anyone who entered risked death.
In a captivating real-life adventure tale, journalist and novelist Douglas Preston argues that the legend is not complete fiction. The Lost City of the Monkey God is his firsthand account of the expedition that uncovered the sites of at least two large cities, along with other settlements, that may form the basis of the “White City” myth. Even the curse might be rooted in reality.
Stories of the White City, so named because it was supposedly built of white stone, trace back to the Spanish conquistadores of the 16th century, Preston explains. These stories enthralled filmmaker Steve Elkins, who set out in the mid-1990s to uncover the truth. Finding the ruins of an ancient culture in one of the most remote parts of Central America would require a combination of high-tech remote sensing, old-fashioned excavation and persistence.
Elkins enlisted the help of experts who used satellite images and lidar to find potential targets to explore. Lidar involves shooting laser pulses from above to sketch out the contours of a surface, even a thickly vegetated one. The resulting maps revealed outlines of human-made structures in several locations. Preston deftly explains the science behind this work and makes it exciting (being crammed into a small, rickety plane for hours on end requires its own kind of bravery).
By 2015, archaeologists, accompanied by a film crew and Preston, hit the ground to investigate. They weren’t disappointed. The team uncovered an earthen pyramid, other large mounds, a plaza, terraces, canals, hundreds of ornate sculptures and vessels, and more. These discoveries are providing clues to the identity of the people who lived there and what happened to them. What’s clear is that they belonged to a culture distinct from their Maya neighbors. This culture probably prospered for several hundred years, perhaps longer, before vanishing around 1500.
Drawing on historical evidence, Preston argues that disease brought by Europeans was the culture’s downfall. A series of epidemics, perhaps smallpox, may have prompted people to desert the area, inspiring the myth’s curse. The expedition did not escape this curse. Preston and others brought back a parasitic infection known as leishmaniasis. Preston devotes the last quarter of the book to detailing his and others’ struggle to deal with this potentially fatal disease.
The Lost City of the Monkey God is at its best when Preston recounts his time in the field. He presents an unglorified look at doing fieldwork in a rainforest, contending with venomous snakes, hordes of biting pests and relentless rain and mud. He also offers a window into the politics of science, with a frank appraisal of the criticism and skepticism this unconventional expedition (paid for by a filmmaker) faced.
Much of the book is a thrill to read, but by the end, it takes a more somber tone. The “White City” faces threats of looting and logging. And researchers who go there risk contracting disease. Some readers may wonder whether the discovery was worth it. Perhaps some mysteries are better left unsolved.
Life on Earth may have made its mark on the moon billions of years before Neil Armstrong’s famous first step.
Observations by Japan’s moon-orbiting Kaguya spacecraft suggest that oxygen atoms from Earth’s upper atmosphere bombard the moon’s surface for a few days each month. This oxygen onslaught began in earnest around 2.4 billion years ago when photosynthetic microbes first flourished (SN Online: 9/8/15), planetary scientist Kentaro Terada of Osaka University in Japan and colleagues propose January 30 in Nature Astronomy.
The oxygen atoms begin their incredible journey in the upper atmosphere, where they are ionized by ultraviolet radiation, the researchers suggest. Electric fields or plasma waves accelerate the oxygen ions into the magnetic cocoon that envelops Earth. One side of that magnetosphere stretches away from the sun like a flag in the wind. For five days each lunar cycle, the moon passes through the magnetosphere and is barraged by earthly ions, including oxygen.
Based on Kaguya’s measurements of this space-traveling oxygen in 2008, Terada and colleagues estimate that at least 26,000 oxygen ions per second hit each square centimeter of the lunar surface during the five-day period. The uppermost lunar soil may, therefore, preserve bits of Earth’s ancient atmosphere, the researchers write, though determining which atoms came from Earth and which from the sun would be difficult.
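For a sense of scale, a back-of-the-envelope tally using only the figures above (an illustrative estimate, not one from the study): the moon sits in the magnetotail for about 5 of every 29.5 days, so over the roughly 2.4 billion years since oxygen-making life took hold,

$$ 2.6\times10^{4}\ \tfrac{\text{ions}}{\text{cm}^{2}\,\text{s}} \times \tfrac{5}{29.5} \times \left(2.4\times10^{9}\ \text{yr} \times 3.15\times10^{7}\ \tfrac{\text{s}}{\text{yr}}\right) \approx 3\times10^{20}\ \tfrac{\text{ions}}{\text{cm}^{2}}. $$

That works out to something on the order of 10 milligrams of oxygen per square centimeter of lunar surface, a minute amount in absolute terms, but one that suggests why a measurable earthly signature could linger in the uppermost soil.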
The first sign that something was wrong was that the female hamsters were really active in their cages. These were European hamsters, a species that is endangered in France and thought to be on the decline in the rest of their Eurasian range. But in a lab at the University of Strasbourg in France, the hamsters were oddly aggressive, and they didn’t give birth in their nests.
Mathilde Tissier, a conservation biologist at the University of Strasbourg, remembers seeing the newly born pups alone, spread around in the cages, while their mothers ran about. Then, the mother hamsters would take their pups and put them in the piles of corn they had stored in the cage, Tissier says, and eat their babies alive.
“I had some really bad moments,” she says. “I thought I had done something wrong.”
Tissier and her colleagues had been looking into the effect of wheat- and corn-based diets in European hamsters because the rodent’s population in France was quickly disappearing. It now numbers only about 1,000 animals, most of which live in farm fields. The hamsters, being burrowers, are important for the local ecosystem and can promote soil health. But more than that, they’re an umbrella species, Tissier notes. Protect them, and their habitat, and there will be benefits for the many other farmland species that are declining.
A typical corn field is some seven times larger than the home range for a female hamster, so the animals that live in these agricultural areas eat mostly corn — or whatever other crop is growing in that field. But not all crops provide the same level of nutrition, and Tissier and her colleagues were curious about how that might affect the hamsters. Perhaps there would be differences in litter size or pup growth, they surmised. So they began an experiment, feeding hamsters wheat or corn in the lab, with either clover or earthworms to better reflect the animals’ normal, omnivorous diets.
“We thought [the diets] would create some [nutritional] deficiencies,” Tissier says. But instead, Tissier and her colleagues saw something very different. All the female hamsters were able to successfully reproduce, but those fed corn showed abnormal behaviors before giving birth. They then gave birth outside their nests and most ate their young on the first day after birth. Only one female weaned her pups, though that didn’t have a happy ending either — the two brothers ate their female siblings, Tissier and her colleagues report January 18 in the Proceedings of the Royal Society B.
Tissier spent a year trying to figure out what was going on. Hamsters and other rodents will eat their young, but usually only when a baby has died and the mother hamster wants to keep her nest clean. They don’t normally eat healthy babies alive. The researchers reared more hamsters in the lab, this time supplementing the corn and earthworm diet with a solution of niacin. With the niacin added, the hamsters raised their young normally, and not as a snack.
Unlike wheat, corn lacks a number of micronutrients, including niacin. In people who subsist on a diet of mostly corn, that niacin deficiency can result in a disease called pellagra. The disease emerged in the 1700s in Europe after corn became a dietary staple. People with pellagra experienced horrible rashes, diarrhea and dementia. Until the disease’s cause was identified in the mid-20th century, millions of people suffered and thousands died. (The Mesoamericans who domesticated corn largely did not have this problem because they processed corn with a technique called nixtamalization, which frees bound niacin in corn and makes it available as a nutrient. The Europeans who brought corn back to their home countries didn’t bring back this process.)
The European hamsters fed corn-based diets exhibited symptoms similar to pellagra, and this is probably happening in the wild, Tissier says. She notes that officials with the French National Office for Hunting and Wildlife have seen hamsters in the wild subsisting on mostly corn and eating their pups.
Tissier and her colleagues are now working to find ways to improve diversity in agricultural systems, so that hamsters — and other creatures — can eat a more well-balanced diet. “The idea is not only to protect the hamster,” she says, “but to protect the entire biodiversity and to restore good ecosystems, even in farmland.”