For some poison dart frogs, gaining resistance to one of their own toxins came with a price.
The genetic change that gives one group of frogs immunity to a particularly lethal toxin also disrupts a key chemical messenger in the brain. But the frogs have managed to sidestep the potentially damaging side effect through other genetic tweaks, researchers report in the Sept. 22 Science.
While other studies have identified genetic changes that give frogs resistance to particular toxins, this study “lets you look under the hood” to see the full effects of those changes and how the frogs are compensating, says Butch Brodie, an evolutionary biologist at the University of Virginia in Charlottesville who wasn’t involved in the research. Many poison dart frogs carry cocktails of toxic alkaloid molecules in their skin as a defense against predators (SN Online: 3/24/14). These toxins, picked up through the frogs’ diets, vary by species. Here, researchers studied frogs that carry epibatidine, a substance so poisonous that just a few millionths of a gram can kill a mouse.
Previous studies have shown that poisonous frogs have become resistant to the toxins the amphibians carry by messing with the proteins that these toxins bind to in the body. Switching out certain protein building blocks, or amino acids, changes the shape of the protein, which can prevent toxins from latching on. But making that change could have unintended side effects, too, says study coauthor Rebecca Tarvin, an evolutionary biologist at the University of Texas at Austin.
For example, the toxin epibatidine binds to proteins that are usually targeted by acetylcholine, a chemical messenger that’s necessary for normal brain function. So Tarvin and her colleagues looked at how this acetylcholine receptor protein differed between poison frog species that are resistant to epibatidine and some of their close relatives that aren’t. Identifying differences in the receptor protein’s amino acids between the frogs allowed researchers to systematically test the effects of each change. To do so, the scientists put the genetic instructions for the human version of the protein (humans aren’t resistant to epibatidine) into frog eggs. The researchers then replaced select amino acids in the human code with different poison frog substitutions to find an amino acid “switch” that would make the resulting receptor protein resistant to epibatidine.
But epibatidine resistance wasn’t a straightforward deal, it turned out. “We noticed that replacing one of those amino acids in the human [protein] made it resistant to epibatidine, but also affected its interaction with acetylcholine,” says study coauthor Cecilia Borghese, a neuropharmacologist also at the University of Texas at Austin. “Both are binding in the exact same region of the protein. It’s a very delicate situation.” That is, the amino acid change that made the receptor protein resistant to epibatidine also made it harder for acetylcholine to attach, potentially impeding the chemical messenger’s ability to do its job.
But the frogs themselves don’t seem impaired. That’s because other amino acid replacements elsewhere in the receptor protein appear to have compensated, Borghese and Tarvin found, creating a protein that won’t let the toxin latch on, but that still responds normally to acetylcholine.
The resistance-giving amino acid change appears to have evolved three separate times in poison frogs, Tarvin says. Three different lineages of the frogs have resistance to the poison, and all of them got that immunity by flipping the same switch. But the amino acid changes that bring back a normal acetylcholine response aren’t the same across those three groups.
“It’s a cool convergence that these other switches weren’t identical, but they all seem to recover that function,” Brodie says.
Six years after the Fukushima nuclear reactor disaster in Japan, radioactive material is leaching into the Pacific Ocean from an unexpected place. Some of the highest levels of radioactive cesium-137, a major by-product of nuclear power generation, are now found in the somewhat salty groundwater beneath sand beaches tens of kilometers away, a new study shows.
Scientists tested for radioactivity at eight different beaches within 100 kilometers of the plant, which experienced three reactor meltdowns when an earthquake and tsunami on March 11, 2011, knocked out its power. Oceans, rivers and fresh groundwater sources are typically monitored for radioactivity following a nuclear accident, but several years following the disaster, those weren’t the most contaminated water sources. Instead, brackish groundwater underneath the beaches has accumulated the second highest levels of the radioactive element (surpassed only by the groundwater directly beneath the reactor), researchers report October 2 in the Proceedings of the National Academy of Sciences.
In the wake of the 2011 accident, seawater tainted with high levels of cesium-137 probably traveled along the coast and lapped against these beaches, proposes study coauthor Virginie Sanial, who did the work while at Woods Hole Oceanographic Institution in Massachusetts. Some cesium stuck to the sand and, over time, percolated down to the brackish groundwater beneath. Now, the radioactive material is steadily making its way back into the ocean. The groundwater is releasing the cesium into the coastal ocean at a rate that’s on par with the leakage of cesium into the ocean from the reactor site itself, Sanial’s team estimates.
Since this water isn’t a source of drinking water and is underground, the contamination isn’t an immediate public health threat, says Sanial, now a geochemist at the University of Southern Mississippi in Hattiesburg. But with about half of the world’s nuclear power plants located on coastlines, such areas are potentially important contamination reservoirs and release sites to monitor after future accidents.
Vaping e-cigarettes with high amounts of nicotine appears to impact how often and how heavily teens smoke and vape in the future, a new study finds.
In 2016, an estimated 11 percent of U.S. high school students used e-cigarettes. Past research has found that teen vaping can lead to smoking (SN: 9/19/15, p. 14). The new study, published online October 23 in JAMA Pediatrics, is the first look at whether vaping higher amounts of nicotine is associated with more frequent and more intense vaping and cigarette use in the future. Researchers at the University of Southern California surveyed 181 10th-graders from 10 high schools in the Los Angeles area who had reported vaping in the previous 30 days, then followed up six months later, when the students were 11th-graders. The teens answered questions about how much and how often they had smoked and vaped in the past 30 days and about the amount of nicotine in their vaping liquid. The researchers categorized the amount of nicotine as none, low (up to 5 milligrams per milliliter), medium (6 to 17 mg/mL) or high (18 mg/mL or more).
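To put those cutoffs in concrete terms, here is a minimal sketch (in Python) of that binning scheme; the function name and structure are illustrative, not taken from the study’s materials.

```python
def nicotine_category(mg_per_ml: float) -> str:
    """Bin a vaping liquid's nicotine concentration (mg/mL) into the
    study's four reported categories: none, low, medium or high."""
    if mg_per_ml <= 0:
        return "none"
    elif mg_per_ml <= 5:    # "low": up to 5 mg/mL
        return "low"
    elif mg_per_ml <= 17:   # "medium": 6 to 17 mg/mL
        return "medium"
    else:                   # "high": 18 mg/mL or more
        return "high"

print(nicotine_category(12))  # -> medium
```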
With each step up in nicotine concentration, teens were about twice as likely to report frequent smoking versus no smoking at the six-month follow-up. Teens who vaped a high-nicotine liquid smoked seven times as many cigarettes per day as those who vaped without nicotine.
Also with each nicotine level increase, teens were about 1½ times as likely to report frequent vaping as no vaping at all. Vaping high-nicotine liquid led to almost 2½ times as many episodes of vaping per day compared with no-nicotine vaping, and kids took more puffs each time they vaped.
“This study is important because it begins to chip away at the ‘black box’ that links e-cigarette use with later use of regular cigarettes,” says sociologist Richard Miech of the University of Michigan in Ann Arbor. “Ideally, studies like this will encourage government agencies to develop policies that will make it very difficult for youth to obtain e-liquids with nicotine.” In 2016, then-U.S. Surgeon General Vivek Murthy released a report on e-cigarettes, concluding that using nicotine-containing products in any form is not safe for youth. Studies find an association between nicotine use in teens and problems with learning, attention and impulse control, as well as addiction (SN: 7/11/15, p. 18).
A soft heart keeps Enceladus warm from the inside. Friction within its porous core could help Saturn’s icy moon maintain a liquid ocean for billions of years and explain why it sprays plumes from its south pole, astronomers report November 6 in Nature Astronomy.
Observations in 2015 showed that Enceladus’ icy surface is a shell that’s completely detached from its rocky core, meaning the ocean spans the entire globe (SN: 10/17/15, p. 8). Those measurements also showed that the ice is not thick enough to keep the ocean liquid. Other icy moons, like Jupiter’s Europa, keep subsurface oceans warm through the energy generated by gravitational flexing of the ice itself. But if that were Enceladus’ only heat source, its ocean would have frozen within 30 million years, a fraction of the age of the solar system, which formed roughly 4.6 billion years ago.
Planetary scientist Gaël Choblet of the University of Nantes in France and his colleagues tested whether friction in the sand and gravel thought to make up Enceladus’ core could heat things up.
The team made computer simulations of water circulating through the spongy core using data from the Cassini spacecraft and geoengineering experiments with sand and gravel on Earth. They found that, depending on the core’s makeup, the ocean should get enough heat to stay liquid for tens of millions to billions of years.
The simulations also showed that certain hot spots in the core, including at the poles, correspond to regions where the ice shell is thinner. “That was quite cool,” Choblet says. “It explains the internal structure and the way things are organized and the dynamics interior to Enceladus.”
And that could explain why the moon spews plumes of water from its south pole: More heat from the core at that spot could melt the ice and let water out. It doesn’t explain why the north pole is plume-free, though.
Recent reports of African and North American animal fossils bearing stone-tool marks from being butchered a remarkably long time ago may be a crock. Make that a croc.
Crocodile bites damage animal bones in virtually the same ways that stone tools do, say paleoanthropologist Yonatan Sahle of the University of Tübingen in Germany and his colleagues. Animal bones allegedly cut up for meat around 3.4 million years ago in East Africa (SN: 9/11/10, p. 8) and around 130,000 years ago in what’s now California (SN: 5/27/17, p. 7) come from lakeside and coastal areas. Those are places where crocodiles could have wreaked damage now mistaken for butchery, the scientists report online the week of November 6 in the Proceedings of the National Academy of Sciences. Larger samples of animal fossils, including complete bones from various parts of the body, are needed to begin to tease apart the types of damage caused by stone tools, crocodile bites and trampling of bones by living animals, Sahle’s team concludes. “More experimental work on bone damage caused by big, hungry crocs is also critical,” says coauthor Tim White, a paleoanthropologist at the University of California, Berkeley.
In a field where researchers reap big rewards for publishing media-grabbing results in high-profile journals, such evidence could rein in temptations to over-interpret results, says archaeologist David Braun of George Washington University in Washington, D.C., who did not participate in the new study or the two earlier ones. “There’s a push to publish extraordinary findings, but evolutionary researchers always have to weigh what’s interesting versus what’s correct.”
Authors of the ancient butchery papers agree that bone marks made by crocodiles deserve closer study and careful comparison with proposed stone-tool marks. But the researchers stand their ground on their original conclusions.
Microscopic investigations in the 1980s led some researchers to conclude that carnivores such as hyenas leave U-shaped marks on bones. In contrast, they argued, stone tools leave V-shaped incisions with internal ridges. And hammering stones create signature pits and striations. Sahle’s group expanded on research previously conducted by paleoanthropologist Jackson Njau of Indiana University Bloomington. In his 2006 doctoral dissertation, Njau reported that bone damage produced by feeding crocodiles looks much like stone-tool incisions and pits, with a few distinctive twists such as deep scratches. Njau retrieved and studied cow and goat bones from carcasses that had been eaten by crocodiles housed at two animal farms in Tanzania.
In the new study, the scientists used Njau’s findings to reassess marks on fossils previously excavated in Ethiopia and dating to around 4.2 million, 3.4 million and 2.5 million years ago. Damage to these fossils has generally been attributed to butchery with stone tools.
Incisions and pits on arm bones from an ancient hominid, Australopithecus anamensis, and similar marks on a horse’s leg bone likely resulted from crocodile bites and not stone-tool use, as initially suspected, the investigators say. If stone tools had indeed damaged the A. anamensis remains, that would raise the possibility of cannibalism — a difficult behavior to confirm with fossils. Tellingly, Sahle’s team argues, these bones come from what were once waterside areas. Some were found in the same sediment layer as crocodile remains. Marks on these bones include deep scratches consistent with crocodile bites.
The horse fossil comes from a spot along an ancient lakeshore where no stone tools have been found, a further clue in favor of damage from croc bites.
Jagged pits, incisions and other marks scar a leg fragment and lower jaw from an ancient hoofed animal. But microscopic analyses could not definitively attribute the damage to stone tools or crocodile bites.
In light of these findings, the ancient California and 3.4-million-year-old East Africa bones should also be reexamined with the possibility of croc damage in mind, White says. For now, the earliest confirmed stone-tool marks occur on animal bones from two East African sites dating to around 2.5 million years ago (SN: 4/17/04, p. 254), he adds.
The range of crocodile marks described in the new study doesn’t look “especially like” damage to the 130,000-year-old mastodon bones on California’s coast, says paleontologist Daniel Fisher of the University of Michigan in Ann Arbor, a coauthor of the ancient California bones paper. No fossil evidence indicates crocodiles lived there at that time, he adds. Several lines of evidence, including pounding marks and damage near joints, point to stone-tool use at the West Coast site, says archaeologist Richard Fullagar of the University of Wollongong in Australia, also a coauthor of the mastodon paper.
Further studies of the 3.4-million-year-old African bones previously reported as probable examples of animal butchery will statistically compare the probability of various causes for particular marks, including crocodile bites, says Shannon McPherron, the lead author of the earlier study and an archaeologist at the Max Planck Institute for Evolutionary Anthropology in Leipzig, Germany. In that way, researchers can assess whether any one cause stands out as the strongest candidate.
Two days before plunging into Saturn, the Cassini spacecraft took one last look around the planet it had orbited for more than 13 years.
The view of Saturn, released November 21, is actually a mosaic of 42 images that have been stitched together. Six moons — Enceladus, Epimetheus, Janus, Mimas, Pandora and Prometheus — are faintly visible as dots surrounding the gas giant. Cassini was about 1.1 million kilometers away from Saturn when it took the images on September 13. The whole observation took a little over two hours.
On September 11, Cassini set itself on a collision course with Saturn, and on September 15, the probe ended its mission by burning up in Saturn’s atmosphere, taking data all the way down.
A skull and other fossils from northeastern Australia belong to a new species in the extinct family of marsupial lions.
This newly named species, Wakaleo schouteni, was a predator about the size of a border collie, says vertebrate paleontologist Anna Gillespie of the University of New South Wales in Sydney. At least 18 million years ago (and perhaps as early as 23 million years ago), it roamed what were then hot, humid forests. Its sturdy forelimbs suggest it could chase possums, lizards and other small prey up into trees. Gillespie expects W. schouteni — the 10th species named in its family — carried its young in a pouch as kangaroos, koalas and other marsupials do. Actual lions evolved on a different fork in the mammal genealogical tree, but Australia’s marsupial lions got their feline nickname from the size and slicing teeth of the first species named, in 1859. Thylacoleo carnifex was about as big as a lion. And its formidable teeth could cut flesh. But unlike other pointy-toothed predators, marsupial lions evolved a horizontal cutting edge. A bottom tooth stretched back along the jawline on each side, its slicer edge as long as four regular teeth. An upper tooth extended too, giving this marsupial lion a bite like a “bolt cutter,” Gillespie says.
The newly identified species lived some 17 million years before its big bolt-cutter relative. Though the new species’ tooth number matched that of typical early marsupials, W. schouteni already had a somewhat elongated tooth just in front of the molars, Gillespie and colleagues report December 7 in the Journal of Systematic Palaeontology. W. schouteni is “pushing the history of marsupial lions deeper into time,” she says.
Old blood can prematurely age the brains of young mice, and scientists may now be closer to understanding how. A protein located in the cells that form a barrier between the brain and blood could be partly to blame, experiments on mice suggest.
If something similar happens in humans, scientists say, methods for countering the protein may hold promise for treating age-related brain decline.
The preliminary study, published online January 3 at bioRxiv.org, focused on a form of the protein known as VCAM1, which interacts with immune cells in response to inflammation. As mice and humans age, levels of that protein circulating in the blood rise, Alzheimer’s researcher Tony Wyss-Coray at Stanford University and colleagues found. After injecting young mice behind an eye with plasma from old mice, the team discovered that VCAM1 levels also rose in certain parts of the blood-brain barrier, a mesh of tightly woven cells that protect the brain from harmful factors in the blood. The young mice showed signs of brain deterioration as well, including inflammation and decreased birth rates of new nerve cells. Plasma from young mice had no such effects.
Interfering with VCAM1 may help prevent the premature aging of brains. Plasma from old mice didn’t have a strong effect when injected into young mice genetically engineered to lack VCAM1 in certain blood-brain barrier cells. Nor did it affect mice treated with antibodies that blocked the activity of VCAM1. Those antibodies also seemed to help the brains of older mice that had aged naturally, the team found.
The results suggest that anti-aging treatments targeting specific aspects of the blood-brain barrier may hold promise.
Wikipedia: The settler of dinnertime disputes and the savior of those who cheat on trivia night. Quick, what country has the Nile’s headwaters? What year did Gershwin write “Rhapsody in Blue”? Wikipedia has the answer to all your burning trivia questions — including ones about science.
With hundreds of thousands of scientific entries, Wikipedia offers a quick reference for the molecular formula of Zoloft, who the inventor of the 3-D printer is and the fact that the theory of plate tectonics is only about 100 years old. The website is a gold mine for science fans, science bloggers and scientists alike. But even though scientists use Wikipedia, they don’t tend to admit it. The site rarely ends up in a paper’s citations as the source of, say, the history of the gut-brain axis or the chemical formula for polyvinyl chloride. But scientists are browsing Wikipedia just like everyone else. A recent analysis found that Wikipedia stays up-to-date on the latest research — and vocabulary from those Wikipedia articles finds its way into scientific papers. The results don’t just reveal the Wiki-habits of the ivory tower. They also show that the free, widely available information source is playing a role in research progress, especially in poorer countries.
Teachers in middle school, high school and college drill it into their students: Wikipedia is not a citable source. Anyone can edit Wikipedia, and articles can change from day to day — sometimes by as little as a comma, other times being completely rewritten overnight. “[Wikipedia] has a reputation for being untrustworthy,” says Thomas Shafee, a biochemist at La Trobe University in Melbourne, Australia.
But those same teachers — even the college professors — who warn students away from Wikipedia are using the site themselves. “Academics use Wikipedia all the time because we’re human. It’s something everyone is doing,” says Doug Hanley, a macroeconomist at the University of Pittsburgh.
And the site’s unreliable reputation may be unwarranted. Wikipedia is not any less consistent than Encyclopedia Britannica, a 2005 Nature study showed (a conclusion that the encyclopedia itself vehemently objected to). Citing it as a source, however, is still a bridge too far. “It’s not respected like academic resources,” Shafee notes. Academic science may not respect Wikipedia, but Wikipedia certainly loves science. Of the roughly 5.5 million articles, half a million to a million of them touch on scientific topics. And constant additions from hundreds of thousands of editors mean that entries can be very up to date on the latest scientific literature.
How recently published findings affect Wikipedia is easy to track. They’re cited on Wikipedia, after all. But does the relationship go the other way? Do scientific posts on Wikipedia worm their way into the academic literature, even though they are never cited? Hanley and his colleague Neil Thompson, an innovation scholar at MIT, decided to approach the question on two fronts.
First, they determined the 1.1 million most common scientific words in published articles from the scientific publishing giant Elsevier. Then, Hanley and Thompson examined how often those same words were added to or deleted from Wikipedia over time, and cited in the research literature. The researchers focused on two fields, chemistry and econometrics — a new area that develops statistical tests for economics.
There was a clear connection between the language in scientific papers and the language on Wikipedia. “Some new topic comes up and it gets exciting, it will generate a new Wikipedia page,” Thompson notes. The language on that new page was then connected to later scientific work. After a new entry was published, Hanley and Thompson showed, later scientific papers contained more language similar to the Wikipedia article than to papers in the field published before the new Wikipedia entry. There was a definite association between the language in the Wikipedia article and future scientific papers.
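As a rough illustration of what “similar language” means in practice, the sketch below compares the word-frequency profile of a Wikipedia entry with papers published before and after it. The tokenizer, the cosine similarity measure and the toy texts are illustrative assumptions, not the authors’ actual pipeline.

```python
import re
from collections import Counter
from math import sqrt

def word_counts(text: str) -> Counter:
    """Lowercase the text and count word occurrences."""
    return Counter(re.findall(r"[a-z]+", text.lower()))

def cosine_similarity(a: Counter, b: Counter) -> float:
    """Cosine similarity between two word-count vectors."""
    dot = sum(a[w] * b[w] for w in set(a) & set(b))
    norm_a = sqrt(sum(v * v for v in a.values()))
    norm_b = sqrt(sum(v * v for v in b.values()))
    return dot / (norm_a * norm_b) if norm_a and norm_b else 0.0

# Toy example: does a paper published after the Wikipedia entry appeared
# share more vocabulary with it than a paper published before?
wiki_entry   = "hydrastine synthesis proceeds via a Passerini reaction step"
paper_before = "alkaloid synthesis was achieved using standard coupling chemistry"
paper_after  = "we optimized the Passerini reaction in our hydrastine synthesis"

before_sim = cosine_similarity(word_counts(wiki_entry), word_counts(paper_before))
after_sim  = cosine_similarity(word_counts(wiki_entry), word_counts(paper_after))
print(f"similarity before entry: {before_sim:.2f}, after entry: {after_sim:.2f}")
```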
But was Wikipedia itself the source of that language? This part of the study can’t answer that. It only observes words increasing together in two different spaces. It can’t prove that scientists were reading Wikipedia and using it in their work.
So the researchers created new Wikipedia articles from scratch to find out if the language in them affected the scientific literature in return. Hanley and Thompson had graduate students in chemistry and in econometrics write up new Wikipedia articles on topics that weren’t yet on the site. The students wrote 43 chemistry articles and 45 econometrics articles. Then, half of the articles in each set got published to Wikipedia in January 2015, and the other half were held back as controls. The researchers gave the articles three months to percolate through the internet. Then they examined the next six months’ worth of published scientific papers in those fields for specific language used in the published Wikipedia entries, and compared it to the language in the entries that never got published.
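In spirit, that comparison boils down to a difference between two groups of similarity scores, roughly as in the sketch below. The numbers are invented placeholders, and the real analysis involved much richer text models and controls.

```python
from statistics import mean

# Hypothetical similarity scores between each drafted Wikipedia entry and the
# scientific papers on its topic published over the following six months.
# "Treated" entries were posted to Wikipedia; "control" entries were held back.
treated_scores = [0.31, 0.27, 0.35, 0.29]   # illustrative numbers only
control_scores = [0.22, 0.24, 0.21, 0.25]

effect = mean(treated_scores) - mean(control_scores)
print(f"extra similarity attributable to publication: {effect:.3f}")
```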
In chemistry, at least, the new topics proved popular. Both the published and control Wikipedia page entries had been selected from graduate level topics in chemistry that weren’t yet covered on Wikipedia. They included entries such as the synthesis of hydrastine (the precursor to a drug that stops bleeding). People were interested enough to view the new articles on average 4,400 times per month.
The articles’ words trickled into the scientific literature. In the six months after publishing, the entries influenced about 1 in 300 words in the newly published papers in that chemical discipline. And scientific papers on a topic covered in Wikipedia became slightly more like the Wikipedia article over time. For example, if chemists wrote about the synthesis of hydrastine — one of the new Wikipedia articles — published scientific papers more often used phrases like “Passerini reaction,” a term used in the Wikipedia entry. But if an article never went on to Wikipedia, the scientific papers published on the topic didn’t become any more similar to the never-published article (which could have happened if the topics were merely getting more popular). Hanley and Thompson published a preprint of their work to the Social Science Research Network on September 26.
Unfortunately, there was no number of Wikipedia articles that could make econometrics happen. “We wanted something on the edge of a discipline,” Thompson says. But it was a little too edgy. The new Wikipedia entries in that field got one-thirtieth of the views that chemistry articles did. Thompson and Hanley couldn’t get enough data from the articles to make any conclusions at all. Better luck next time, econometrics.
The relationship between Wikipedia entries and the scientific literature wasn’t the same in all regions. When Hanley and Thompson broke the published scientific papers down by the gross domestic product of their countries of origin, they found that Wikipedia articles had a stronger effect on the vocabulary in scientific papers published by scientists in countries with weaker economies. “If you think about it, if you’re a relatively rich country, you have access at your institution to a whole list of journals and the underlying scientific literature,” Hanley notes. Institutions in poorer countries, however, may not be able to afford expensive journal subscriptions, so scientists in those countries may rely more heavily on publicly available sources like Wikipedia.
The Wikipedia study is “excellent research design and very solid analysis,” says Heather Ford, who studies digital politics at the University of Leeds in England. “As far as I know, this is the first paper that attributes a strong link between what is on Wikipedia and the development of science.” But, she says, this is only within chemistry. The influence may be different in different fields.
“It’s addressing a question long in people’s minds but difficult to pin down and prove,” says Shafee. It’s a link, but tracking language, he explains, isn’t the same as finding out how ideas and concepts were moving from Wikipedia into the ivory tower. “It’s a real cliché to say more research is needed, but I think in this case it’s probably true.”
Hanley and Thompson would be the first to agree. “I think about this as a first step,” Hanley says. “It’s showing that Wikipedia is not just a passive resource, it also has an effect on the frontiers of knowledge.”
It’s a good reason for scientists to get in and edit entries within their expertise, Thompson notes. “This is a big resource for science and I think we need to recognize that,” Thompson says. “There’s value in making sure the science on Wikipedia is as good and complete as possible.” Good scientific entries might not just settle arguments. They might also help science advance. After all, scientists are watching, even if they won’t admit it.
These fins were made for walking, and that’s just what these fish do — thanks to wiring that evolved long before vertebrates set foot on land.
Little skates use two footlike fins on their undersides to move along the ocean floor. With an alternating left-right stride powered by muscles flexing and extending, the movement of these fish looks a lot like that of many land-based animals.
Now, genetic tests show why: Little skates and land vertebrates share the same genetic blueprint for development of the nerve cells needed for limb movement, researchers report online February 8 in Cell. This work is the first to look at the origins of the neural circuitry needed for walking, the authors say. “This is fantastically interesting natural history,” says Ted Daeschler, a vertebrate paleontologist at the Academy of Natural Sciences in Philadelphia.
“Neurons essential for us to walk originated in ancient fish species,” says Jeremy Dasen, a neuroscientist at New York University. Based on fossil records, Dasen’s team estimates that the common ancestor of all land vertebrates and skates lived around 420 million years ago — perhaps tens of millions of years before vertebrates moved onto land (SN: 1/14/12, p. 12). Little skates (Leucoraja erinacea) belong to an evolutionarily primitive group. Skates haven’t changed much since their ancestors split from the fish that evolved into land-rovers, so finding the same neural circuitry in skates and land vertebrates was surprising.
The path to discovery started when Dasen and coauthor Heekyung Jung, now at Stanford University, saw YouTube videos of the little skates walking.
“I was completely flabbergasted,” Dasen says. “I knew some species of fish could walk, but I didn’t know about these.”
Most fish swim by undulating their bodies and tails, but little skates have a spine that remains relatively straight. Instead, little skates move by flapping pancake-shaped pectoral fins and walking on “feet,” two fins tucked along the pelvis.
Measurements of the little skates’ movements found that they were “strikingly similar” to bipedal walking, says Jung, who did the work while at NYU. To investigate how that similarity arose, the researchers looked to motor nerve cells, which are responsible for controlling muscles. Each kind of movement requires different kinds of motor nerve cells, Dasen says.
The building of that neural circuitry is controlled in part by Hox genes, which help set the body plan, where limbs and muscles and nerves should go. For instance, snakes and other animals that have lost some Hox genes have bodies that move in the slinky, slithery undulations that many fish use to swim underwater.
By comparing Hox genes in L. erinacea and mice, researchers discovered that both have Hox6/7 and Hox10 genes and that these genes have similar roles in both. Hox6/7 is important for the development of the neural circuitry used to move the skates’ pectoral fins and the mice’s front legs; Hox10 plays the same role for the footlike fins in little skates and hind limbs in mice. Other genes and neural circuitry for motor control were also conserved, or unchanged, between little skates and mice. The findings suggest that both skates and mice share a common ancestor with similar genetics for locomotion.
The takeaway is that “vertebrates are all very similar to each other,” says Daeschler. “Evolution works by tinkering. We’re all using what we inherited — a tinkered version of circuitry that began 400-plus million years ago.”