This ancient marsupial lion had an early version of ‘bolt-cutter’ teeth

A skull and other fossils from northeastern Australia belong to a new species in the extinct family of marsupial lions.

This newly named species, Wakaleo schouteni, was a predator about the size of a border collie, says vertebrate paleontologist Anna Gillespie of the University of New South Wales in Sydney. At least 18 million years ago (and perhaps as early as 23 million years ago), it roamed what were then hot, humid forests. Its sturdy forelimbs suggest it could chase possums, lizards and other small prey up into trees. Gillespie expects that W. schouteni — the 10th species named in its family — carried its young in a pouch, as kangaroos, koalas and other marsupials do.
Actual lions evolved on a different fork in the mammal genealogical tree, but Australia’s marsupial lions got their feline nickname from the size and slicing teeth of the first species named, in 1859. Thylacoleo carnifex was about as big as a lion. And its formidable teeth could cut flesh. But unlike other pointy-toothed predators, marsupial lions evolved a horizontal cutting edge. A bottom tooth stretched back along the jawline on each side, its slicer edge as long as four regular teeth. An upper tooth extended too, giving this marsupial lion a bite like a “bolt cutter,” Gillespie says.

The newly identified species lived some 17 million years before its big bolt-cutter relative. Though the new species’ tooth count matched that of typical early marsupials, W. schouteni already had a somewhat elongated tooth just in front of the molars, Gillespie and colleagues report December 7 in the Journal of Systematic Palaeontology. W. schouteni is “pushing the history of marsupial lions deeper into time,” she says.

Protein helps old blood age the brains of young mice

Old blood can prematurely age the brains of young mice, and scientists may now be closer to understanding how. A protein located in the cells that form a barrier between the brain and blood could be partly to blame, experiments on mice suggest.

If something similar happens in humans, scientists say, methods for countering the protein may hold promise for treating age-related brain decline.

The preliminary study, published online January 3 at bioRxiv.org, focused on a form of the protein known as VCAM1, which interacts with immune cells in response to inflammation. As mice and humans age, levels of that protein circulating in the blood rise, Alzheimer’s researcher Tony Wyss-Coray at Stanford University and colleagues found.
After injecting young mice behind an eye with plasma from old mice, the team discovered that VCAM1 levels also rose in certain parts of the blood-brain barrier, a mesh of tightly woven cells that protect the brain from harmful factors in the blood. The young mice showed signs of brain deterioration as well, including inflammation and decreased birthrates of new nerve cells. Plasma from young mice had no such effects.

Interfering with VCAM1 may help prevent the premature aging of brains. Plasma from old mice didn’t have a strong effect when injected into young mice genetically engineered to lack VCAM1 in certain blood-brain barrier cells. Nor did it affect mice treated with antibodies that blocked the activity of VCAM1. Those antibodies also seemed to help the brains of older mice that had aged naturally, the team found.

The results suggest that anti-aging treatments targeting specific aspects of the blood-brain barrier may hold promise.

Wikipedia has become a science reference source even though scientists don’t cite it

Wikipedia: The settler of dinnertime disputes and the savior of those who cheat on trivia night. Quick, what country has the Nile’s headwaters? What year did Gershwin write “Rhapsody in Blue”? Wikipedia has the answer to all your burning trivia questions — including ones about science.

With hundreds of thousands of scientific entries, Wikipedia offers a quick way to look up the molecular formula of Zoloft, who invented the 3-D printer and the fact that the theory of plate tectonics is only about 100 years old. The website is a gold mine for science fans, science bloggers and scientists alike. But even though scientists use Wikipedia, they don’t tend to admit it. The site rarely ends up in a paper’s citations as the source of, say, the history of the gut-brain axis or the chemical formula for polyvinyl chloride.
But scientists are browsing Wikipedia just like everyone else. A recent analysis found that Wikipedia stays up-to-date on the latest research — and vocabulary from those Wikipedia articles finds its way into scientific papers. The results don’t just reveal the Wiki-habits of the ivory tower. They also show that the free, widely available information source is playing a role in research progress, especially in poorer countries.

Teachers in middle school, high school and college drill it into their students: Wikipedia is not a citable source. Anyone can edit Wikipedia, and articles can change from day to day — sometimes by as little as a comma, other times being completely rewritten overnight. “[Wikipedia] has a reputation for being untrustworthy,” says Thomas Shafee, a biochemist at La Trobe University in Melbourne, Australia.

But those same teachers — even the college professors — who warn students away from Wikipedia are using the site themselves. “Academics use Wikipedia all the time because we’re human. It’s something everyone is doing,” says Doug Hanley, a macroeconomist at the University of Pittsburgh.

And the site’s unreliable reputation may be unwarranted. Wikipedia is no less accurate than Encyclopedia Britannica, a 2005 Nature study showed (a conclusion that the encyclopedia itself vehemently objected to). Citing it as a source, however, is still a bridge too far. “It’s not respected like academic resources,” Shafee notes.
Academic science may not respect Wikipedia, but Wikipedia certainly loves science. Of the site’s roughly 5.5 million articles, half a million to a million touch on scientific topics. And constant additions from hundreds of thousands of editors mean that entries can be very up to date on the latest scientific literature.

How recently published findings affect Wikipedia is easy to track. They’re cited on Wikipedia, after all. But does the relationship go the other way? Do scientific posts on Wikipedia worm their way into the academic literature, even though they are never cited? Hanley and his colleague Neil Thompson, an innovation scholar at MIT, decided to approach the question on two fronts.

First, they determined the 1.1 million most common scientific words in published articles from the scientific publishing giant Elsevier. Then, Hanley and Thompson examined how often those same words were added to or deleted from Wikipedia over time, and cited in the research literature. The researchers focused on two fields, chemistry and econometrics — a new area that develops statistical tests for economics.

There was a clear connection between the language in scientific papers and the language on Wikipedia. “Some new topic comes up and it gets exciting, it will generate a new Wikipedia page,” Thompson notes. The language on that new page was then connected to later scientific work: after a new entry appeared, Hanley and Thompson showed, later scientific papers contained more language similar to the Wikipedia article than to papers in the field published before the new entry.

But was Wikipedia itself the source of that language? This part of the study can’t answer that question. It only shows that the same words rose in both places over time; it can’t prove that scientists were reading Wikipedia and drawing on it in their work.

So the researchers created new Wikipedia articles from scratch to find out if the language in them affected the scientific literature in return. Hanley and Thompson had graduate students in chemistry and in econometrics write up new Wikipedia articles on topics that weren’t yet on the site. The students wrote 43 chemistry articles and 45 econometrics articles. Then, half of the articles in each set got published to Wikipedia in January 2015, and the other half were held back as controls. The researchers gave the articles three months to percolate through the internet. Then they examined the next six months’ worth of published scientific papers in those fields for specific language used in the published Wikipedia entries, and compared it to the language in the entries that never got published.

In chemistry, at least, the new topics proved popular. Both the published and the control entries had been drawn from graduate-level chemistry topics that weren’t yet covered on Wikipedia, such as the synthesis of hydrastine (the precursor to a drug that stops bleeding). People were interested enough to view the new articles an average of 4,400 times per month.

The articles’ words trickled into the scientific literature. In the six months after publishing, the entries influenced about 1 in 300 words in the newly published papers in that chemical discipline. And scientific papers on a topic covered in Wikipedia became slightly more like the Wikipedia article over time. For example, if chemists wrote about the synthesis of hydrastine — one of the new Wikipedia articles — published scientific papers more often used phrases like “Passerini reaction,” a term used in the Wikipedia entry. But if an article never went up on Wikipedia, the scientific papers published on the topic didn’t become any more similar to the never-published article (which could have happened if the topics were merely getting more popular). Hanley and Thompson published a preprint of their work to the Social Science Research Network on September 26.
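The story doesn’t spell out the preprint’s exact similarity metric, but the general idea of scoring vocabulary overlap can be illustrated with a minimal, hypothetical sketch: count each document’s words and compare the resulting word-count vectors, for instance with cosine similarity. The text snippets below are invented, and this is not the authors’ code.

```python
from collections import Counter
import math
import re

def word_counts(text):
    """Lowercased word frequencies, ignoring punctuation and digits."""
    return Counter(re.findall(r"[a-z]+", text.lower()))

def cosine_similarity(a, b):
    """Cosine similarity between two word-count vectors (0 means no shared words)."""
    shared = set(a) & set(b)
    dot = sum(a[w] * b[w] for w in shared)
    norm = math.sqrt(sum(v * v for v in a.values())) * math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

# Toy comparison: does a later paper's wording look more like the Wikipedia
# entry than an earlier paper's wording does?
wiki_entry = "hydrastine synthesis proceeds via a Passerini reaction"
early_paper = "we report a route to hydrastine using classical methods"
later_paper = "a Passerini reaction was used in the synthesis of hydrastine"

wiki = word_counts(wiki_entry)
print(cosine_similarity(word_counts(early_paper), wiki))   # lower overlap
print(cosine_similarity(word_counts(later_paper), wiki))   # higher overlap
```

In the study’s framing, the telling pattern is a rise in this kind of overlap for entries that were published to Wikipedia but not for the control entries that were held back.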

Unfortunately, there was no number of Wikipedia articles that could make econometrics happen. “We wanted something on the edge of a discipline,” Thompson says. But it was a little too edgy. The new Wikipedia entries in that field got one-thirtieth of the views that the chemistry articles did, and Thompson and Hanley couldn’t gather enough data to draw any conclusions at all. Better luck next time, econometrics.

The relationship between Wikipedia entries and the scientific literature wasn’t the same in all regions. When Hanley and Thompson broke the published scientific papers down by the gross domestic product of their countries of origin, they found that Wikipedia articles had a stronger effect on the vocabulary in scientific papers published by scientists in countries with weaker economies. “If you think about it, if you’re a relatively rich country, you have access at your institution to a whole list of journals and the underlying scientific literature,” Hanley notes. Institutions in poorer countries, however, may not be able to afford expensive journal subscriptions, so scientists in those countries may rely more heavily on publicly available sources like Wikipedia.

The Wikipedia study is “excellent research design and very solid analysis,” says Heather Ford, who studies digital politics at the University of Leeds in England. “As far as I know, this is the first paper that attributes a strong link between what is on Wikipedia and the development of science.” But, she says, this is only within chemistry. The influence may be different in different fields.

“It’s addressing a question long in people’s minds but difficult to pin down and prove,” says Shafee. It’s a link, but tracking language, he explains, isn’t the same as finding out how ideas and concepts were moving from Wikipedia into the ivory tower. “It’s a real cliché to say more research is needed, but I think in this case it’s probably true.”

Hanley and Thompson would be the first to agree. “I think about this as a first step,” Hanley says. “It’s showing that Wikipedia is not just a passive resource, it also has an effect on the frontiers of knowledge.”

It’s a good reason for scientists to get in and edit entries within their expertise, Thompson notes. “This is a big resource for science and I think we need to recognize that,” Thompson says. “There’s value in making sure the science on Wikipedia is as good and complete as possible.” Good scientific entries might not just settle arguments. They might also help science advance. After all, scientists are watching, even if they won’t admit it.

The wiring for walking developed long before fish left the sea

These fins were made for walking, and that’s just what these fish do — thanks to wiring that evolved long before vertebrates set foot on land.

Little skates use two footlike fins on their undersides to move along the ocean floor. With an alternating left-right stride powered by muscles flexing and extending, the movement of these fish looks a lot like that of many land-based animals.

Now, genetic tests show why: Little skates and land vertebrates share the same genetic blueprint for development of the nerve cells needed for limb movement, researchers report online February 8 in Cell. This work is the first to look at the origins of the neural circuitry needed for walking, the authors say.
“This is fantastically interesting natural history,” says Ted Daeschler, a vertebrate paleontologist at the Academy of Natural Sciences in Philadelphia.

“Neurons essential for us to walk originated in ancient fish species,” says Jeremy Dasen, a neuroscientist at New York University. Based on fossil records, Dasen’s team estimates that the common ancestor of all land vertebrates and skates lived around 420 million years ago — perhaps tens of millions of years before vertebrates moved onto land (SN: 1/14/12, p. 12).
Little skates (Leucoraja erinacea) belong to an evolutionarily primitive group. Skates haven’t changed much since their ancestors split from the fish that evolved into land-rovers, so finding the same neural circuitry in skates and land vertebrates was surprising.

The path to discovery started when Dasen and coauthor Heekyung Jung, now at Stanford University, saw YouTube videos of the little skates walking.

“I was completely flabbergasted,” Dasen says. “I knew some species of fish could walk, but I didn’t know about these.”

Most fish swim by undulating their bodies and tails, but little skates have a spine that remains relatively straight. Instead, little skates move by flapping pancake-shaped pectoral fins and walking on “feet,” two fins tucked along the pelvis.

Measurements of the little skates’ movements found that they were “strikingly similar” to bipedal walking, says Jung, who did the work while at NYU. To investigate how that similarity arose, the researchers looked to motor nerve cells, which are responsible for controlling muscles. Each kind of movement requires different kinds of motor nerve cells, Dasen says.

The building of that neural circuitry is controlled in part by Hox genes, which help set the body plan, specifying where limbs, muscles and nerves should go. For instance, snakes and other animals that have lost some Hox genes have bodies that move in the slinky, slithery undulations that many fish use to swim.

By comparing Hox genes in L. erinacea and mice, researchers discovered that both have Hox6/7 and Hox10 genes and that these genes have similar roles in both. Hox6/7 is important for the development of the neural circuitry used to move the skates’ pectoral fins and the mice’s front legs; Hox10 plays the same role for the footlike fins in little skates and hind limbs in mice. Other genes and neural circuitry for motor control were also conserved, or unchanged, between little skates and mice. The findings suggest that both skates and mice share a common ancestor with similar genetics for locomotion.

The takeaway is that “vertebrates are all very similar to each other,” says Daeschler. “Evolution works by tinkering. We’re all using what we inherited — a tinkered version of circuitry that began 400-plus million years ago.”

In Borneo, hunting emerges as a key threat to endangered orangutans

Orangutan numbers on the Southeast Asian island of Borneo plummeted from 1999 to 2015, more as a result of human hunting than habitat loss, an international research team finds.

Over those 16 years, Borneo’s orangutan population declined by about 148,500 individuals. A majority of those losses occurred in the intact or selectively logged forests where most orangutans live, primatologist Maria Voigt of the Max Planck Institute for Evolutionary Anthropology in Leipzig, Germany, and colleagues report February 15 in Current Biology.
“Orangutan killing is likely the number one threat to orangutans,” says study coauthor Serge Wich, a biologist and ecologist at Liverpool John Moores University in England. Humans hunt the forest-dwelling apes for food, or to prevent them from raiding crops, the investigators say. People also kill adult orangutans to steal their babies for the international pet trade.

Between 70,000 and roughly 100,000 orangutans currently live on Borneo, Wich says. That’s substantially higher than previous population estimates. The new figures are based on the most extensive survey to date, using ground and air monitoring of orangutans’ tree nests. Orangutans live only on Borneo and the island of Sumatra and are endangered in both places.

Still, smaller orangutan populations in deforested areas of Borneo — due to logging or conversion to farmland — experienced the most severe rates of decline, up to a 75 percent drop in one region.

Satellite data indicate that Borneo’s forest area declined by about 30 percent between 1973 and 2010. Voigt’s team calculates that over the next 35 years, further habitat destruction alone will lead to the loss of around 45,000 more of these apes. “Add hunting to that and it’s a lethal mix,” Wich says. But small groups of Bornean orangutans living in protected zones and selectively logged areas will likely avoid extinction, the researchers say.

50 years ago, early organ transplants brought triumph and tragedy

While the drama of human heart transplants has grasped the public interest, kidney transplants are ahead in the field…. Although only three little girls are now surviving liver transplants, the liver is a promising field for replacement…. The donor, of course, must be dead; no one can live without his liver. — Science News, March 2, 1968

Update
Kidney patients, who could receive organs from family members, had up to a 75 percent one-year survival rate in 1968. Liver recipients were less lucky, having to rely on unrelated, postmortem donations. Liver patients’ immune systems often attacked the new organ and one-year survival was a low 30 percent. Cyclosporine, an immune-suppressing drug available since 1983, has made a big difference. Now, about 75 percent of adults are alive three years after surgery, and children’s odds are even better. The liver is still a must-have organ, and the need for donor livers has climbed. Today, the options have expanded, with split-liver transplants and partial transplants from living donors.

Extreme cold is no match for a new battery

A new type of battery can stand being left out in the cold.

This rechargeable battery churns out charge even at –70° Celsius, a temperature where the typical lithium-ion batteries that power many of today’s cell phones, electric cars and other devices don’t work. Batteries that withstand such frigid conditions could help build electronics that function in some of the coldest places on Earth or on space rovers that cruise around other planets.

Inside lithium-ion batteries, ions flow between positive and negative electrodes, where the ions are embedded and then released to travel back through a substance called an electrolyte to the other end. As the temperature drops, the ions move sluggishly through the electrolyte. The cold also makes it harder for ions to shed the electrolyte material that gloms onto them as they cross the battery; they must slough off that coating to fit into the electrode material, explains study coauthor Xiaoli Dong, a battery researcher at Fudan University in Shanghai.
Such cold conditions make conventional lithium-ion batteries less effective. At –40° C, these batteries deliver about 12 percent of the charge they do at room temperature; at –70° C, they don’t work at all.

The new battery, described online February 28 in Joule, contains a special kind of electrolyte that allows ions to flow easily between electrodes even in the bitter cold. The researchers also fitted their battery with electrodes made of organic compounds, rather than the typical transition-metal-rich materials. Ions can lodge themselves in this organic material without having to strip off the electrolyte material stuck to them. So these organic electrodes catch and release ions more easily than electrodes in normal batteries, even at low temps, Dong says.

Because the ions flow better and connect more readily with the electrodes at lower temperatures, the battery retains about 70 percent of its room-temperature charging capacity even at –70° C.
Still, battery cells in the new design pack less energy per gram than standard lithium-ion batteries, says Shirley Meng, a materials scientist at the University of California, San Diego, not involved in the work. She would like to see whether a more energy-dense version of the battery can be built.

Superconductors may shed light on the black hole information paradox

LOS ANGELES — Insights into a black hole paradox may come from a down-to-Earth source.

Superconductors, materials through which electrons can move freely without resistance, may share some of the physics of black holes, physicist Sreenath Kizhakkumpurath Manikandan of the University of Rochester in New York reported March 7 at a meeting of the American Physical Society. The analogy between the two objects could help scientists understand what happens to information that gets swallowed up in a black hole’s abyss.
When a black hole gobbles up particles, information about the particles’ properties is seemingly trapped inside. According to quantum mechanics, such information cannot be destroyed. Physicist Stephen Hawking determined in 1974 that black holes slowly evaporate over time, emitting what’s known as Hawking radiation before eventually disappearing. That fact implies a conundrum known as the black hole information paradox (SN: 5/31/14, p. 16): When the black hole evaporates, where does the information go?

One possible solution, proposed in 2007 by physicists Patrick Hayden of Stanford University and John Preskill of Caltech, is that the black hole could act like a mirror, with information about infalling particles being reflected outward, imprinted in the Hawking radiation. Now, Manikandan and physicist Andrew Jordan, also of the University of Rochester, report that a process that occurs at the interface between a metal and a superconductor is analogous to the proposed black hole mirror.

The effect, known as Andreev reflection, occurs when electrons traveling through a metal meet a superconductor. The incoming electron carries a quantum property known as spin, similar to the spinning of a top. The direction of that spin is a kind of quantum information. When the incoming electron meets the superconductor, it pairs up with another electron in the material to form a duo known as a Cooper pair. Those pairings allow electrons to glide easily through the material, facilitating its superconductivity. As the original electron picks up its partner, it also leaves behind a sort of electron alter ego reflecting its information back into the metal. That reflected entity is referred to as a “hole,” a disturbance in a material that occurs when an electron is missing. That hole moves through the metal as if it were a particle, carrying the information contained in the original particle’s spin.

Likewise, if black holes act like information mirrors, as Hayden and Preskill suggested, a particle falling into a black hole would be followed by an antiparticle coming out — a partner with the opposite electric charge — which would carry the information contained in the spin of the original particle. Manikandan and Jordan showed that the two processes were mathematically equivalent.
It’s still not clear whether the black hole mirror is the correct solution to the paradox, but the analogy suggests experiments with superconductors could clarify what happens to the information, Jordan says. “That’s something you can’t ever do with black holes: You can’t do those detailed experiments because they’re off in the middle of some galaxy somewhere.”

The theory is “intriguing,” says physicist Justin Dressel of Chapman University in Orange, Calif. Such comparisons are useful in allowing scientists to take insights from one area and apply them elsewhere. But additional work is necessary to determine how strong an analogy this is, Dressel says. “You may find with further inspection the details are different.”

First pedestrian death from a self-driving car fuels safety debate

The first known pedestrian fatality involving a fully autonomous self-driving car will most likely raise questions about the vehicles’ safety.

But “until we know what happened, we can’t really know what this incident means” for the future of self-driving vehicles, says Philip Koopman, a robotics safety expert at Carnegie Mellon University in Pittsburgh. Only when we know more about the crash, including details on the actions of the pedestrian as well as data logs from the car, can we make judgments, he says.
The incident took place late Sunday night, when a self-driving car operated by Uber hit and, ultimately, killed a woman crossing the street in Tempe, Ariz. Early reports indicate that a human safety driver was at the wheel and that the car was in autonomous mode. In response, Uber has suspended testing of its fleet of self-driving cars in Tempe and other cities across the nation. The National Transportation Safety Board is investigating, the New York Times reports.

The NTSB has previously conducted an investigation into the 2016 death of a man who was driving a partly autonomous Tesla, concluding that the driver ignored multiple safety warnings.

Self-driving cars already face high levels of mistrust from other motorists and potential passengers. In a AAA survey in 2017, 85 percent of baby boomers and 73 percent of millennials reported being afraid to ride in self-driving cars (SN Online: 11/21/17).

Experts such as Koopman widely expect that autonomous cars will eventually be safer drivers than the average person because, among other things, the vehicles don’t get distracted. But proving that safety may be time-consuming. A 2016 study by Nidhi Kalra, an information scientist at the RAND Corporation in San Francisco, found that self-driving cars might have to drive on roads for decades to statistically prove their superior safety.
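A rough sense of why the numbers get so large: fatal crashes are rare, so ruling out even a modest difference in rates takes an enormous number of miles. The back-of-the-envelope Poisson sketch below is only an illustration with assumed figures (an approximate human fatality rate and a made-up test-fleet mileage), not the RAND study’s actual model.

```python
import math

# Assumed human benchmark: ~1.1 traffic deaths per 100 million miles driven.
HUMAN_RATE = 1.1e-8            # fatalities per mile (assumption)
CONFIDENCE = 0.95
FLEET_MILES_PER_YEAR = 20e6    # assumed annual mileage for a test fleet

# Treat fatalities as a Poisson process. If the autonomous fleet really matched
# the human rate, the chance of logging M miles with zero deaths would be
# exp(-HUMAN_RATE * M). Require that chance to fall below 1 - CONFIDENCE before
# claiming the fleet's rate is lower.
miles_needed = math.log(1.0 / (1.0 - CONFIDENCE)) / HUMAN_RATE

print(f"fatality-free miles needed: {miles_needed / 1e6:.0f} million")
print(f"years at the assumed fleet mileage: {miles_needed / FLEET_MILES_PER_YEAR:.0f}")
# Roughly 270 million miles, or more than a decade for this fleet. Stricter
# comparisons (showing the rate is, say, 20 percent lower than humans', rather
# than just not worse) push the requirement into billions of miles.
```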
When — or if — self-driving cars are proven safer than human drivers, the vehicles will still have to contend with other questions, such as whether to take steps to protect passengers or pedestrians in a collision (SN: 12/24/16, p. 34).

This spinning moon shows where debris from giant impacts fell

THE WOODLANDS, Texas — A new map of flat, light-colored streaks and splotches on the moon links the features to a few large impacts that spread debris all over the surface. The finding suggests that some of the moon’s history might need rethinking.

Planetary scientist Heather Meyer, now at the Lunar and Planetary Institute in Houston, used data from NASA’s Lunar Reconnaissance Orbiter to make the map, the most detailed global look at these light plains yet. Previous maps had been patched together from different sets of observations, which made it hard to be sure that features that looked like plains actually were plains.
Astronomers originally assumed that the light plains were ancient lava flows from volcanoes. But rocks brought back from one of these plains by Apollo 16 astronauts in 1972 did not have volcanic compositions. That finding led some scientists to suspect the plains, which cover about 9.5 percent of the lunar surface, came from giant impacts.

Meyer’s map supports the impact idea. Most of the plains, which are visible across the whole moon, seem to originate from debris spewed from the Orientale basin, a 930-kilometer-wide bowl in the moon’s southern hemisphere that formed about 3.8 billion years ago.
“It looks like there’s just a giant splat mark,” Meyer says. About 70 percent of the lunar plains come from either Orientale or one other similar basin, she reported March 22 at the Lunar and Planetary Science Conference. “What this is telling us,” she says, “is these large basins modified the entire lunar surface at some point.”
The map also shows that some small impact craters up to 2,000 kilometers from Orientale have been filled in with plains material. That’s potentially problematic, because planetary scientists use the number of small impact craters to estimate the age of the lunar surface. If small craters have been erased by an impact half a moon away, that could mean some of the surface is older than it looks, potentially changing scientists’ interpretations of the moon’s history (SN: 6/11/16, p. 10).