A new Martian climate model suggests a mostly cold, harsh environment

The Curiosity rover was sent up Mount Sharp, the largest sediment stack on Mars. On the way, it collected samples indicating that a portion of the carbon dioxide in the Martian atmosphere may have been sequestered in sedimentary rocks, much as carbon is locked into limestone on Earth. This would have drawn carbon dioxide out of the atmosphere, weakening the greenhouse effect that warmed the planet.

Based on these findings, a team of scientists led by Benjamin Tutolo, a researcher at the University of Calgary, concluded that Mars had a carbon cycle that could explain the presence of liquid water on its surface. Building on that earlier work, a team led by Edwin Kite, a professor of planetary science at the University of Chicago (and a member of the Curiosity science team), has now built the first Martian climate model that takes these new results into account. The model also incorporates Martian topography, the luminosity of the Sun, the latest orbital data, and many other factors to predict how Martian conditions and landscapes evolved over 3.5 billion years.
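
The basic feedback the article describes, where drawing CO2 out of the atmosphere into carbonate rock weakens the greenhouse effect and cools the surface, can be caricatured with a zero-dimensional energy balance. All the constants below (albedo, the faint-young-Sun factor, the per-doubling warming) are rough illustrative assumptions, not values from the Kite team's model.

```python
# Toy energy-balance sketch of CO2 drawdown on early Mars.
# Illustrative constants only; not the actual climate model from the study.
import math

SIGMA = 5.670e-8          # Stefan-Boltzmann constant, W m^-2 K^-4
S_MARS_TODAY = 590.0      # mean solar flux at Mars today, W m^-2
FAINT_SUN = 0.75          # young Sun ~75% as bright ~3.5 Gya (assumption)
ALBEDO = 0.25             # assumed Martian bond albedo

def equilibrium_temp(co2_bars, dt_per_doubling=5.0, co2_ref=0.006):
    """Equilibrium surface temperature (K) with a crude logarithmic
    greenhouse term: each doubling of CO2 pressure above the reference
    adds dt_per_doubling kelvins (illustrative numbers only)."""
    s = S_MARS_TODAY * FAINT_SUN
    t_bare = (s * (1 - ALBEDO) / (4 * SIGMA)) ** 0.25
    greenhouse = dt_per_doubling * math.log2(max(co2_bars, co2_ref) / co2_ref)
    return t_bare + greenhouse

# A thick early atmosphere vs. one drawn down into carbonates:
print(equilibrium_temp(1.0))    # thick CO2 atmosphere -> warmer
print(equilibrium_temp(0.006))  # drawn-down, modern-like pressure -> colder
```

Even the warm case stays below freezing under these toy assumptions, which is consistent in spirit with the model's "mostly cold" conclusion, though the real model resolves topography, orbital forcing, and much more.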

Their results suggest that any Martian life would have had a rough time of it.

© NASA/JPL-Caltech/Cornell University

  •  

Figuring out why a nap might help people see things in new ways

Dmitri Mendeleev famously saw the complete arrangement of the periodic table after falling asleep at his desk. He claimed that in his dream he saw a table where all the elements fell into place, and he wrote it all down when he woke up. By having a eureka moment right after a nap, he joined a club of rather talented people: Mary Shelley, Thomas Edison, and Salvador Dalí.

To figure out if there’s a grain of truth to all these anecdotes, a team of German scientists at the University of Hamburg, led by cognitive science researcher Anika T. Löwe, conducted an experiment designed to trigger such nap-following strokes of genius and catch them in the act with EEG brain-monitoring gear. And they kind of succeeded.

Catching Edison’s cup

“Thomas Edison had this technique where he held a cup or something like that when he was napping in his chair,” says Nicolas Schuck, a professor of cognitive science at the University of Hamburg and senior author of the study. “When he fell asleep too deeply, the cup falling from his hand would wake him up—he was convinced that was the way to trigger these eureka moments.” While dozing off in a chair with a book or a cup doesn’t seem particularly radical, a number of cognitive scientists got serious about re-creating Edison’s approach to insight and testing it in their experiments.

© XAVIER GALIANA

  •  

A neural brain implant provides near-instantaneous speech

Stephen Hawking, the British physicist and arguably the most famous person with amyotrophic lateral sclerosis (ALS), communicated with the world using a sensor installed in his glasses. That sensor used tiny movements of a single muscle in his cheek to select characters on a screen. He typed at a rate of roughly one word per minute; once he completed a sentence, the text was synthesized into speech by a DECtalk TC01 synthesizer, which gave him his iconic robotic voice.

But a lot has changed since Hawking died in 2018. Recent brain-computer interface (BCI) devices have made it possible to translate neural activity directly into text and even speech. Unfortunately, these systems suffered from significant latency, often limited users to a predefined vocabulary, and did not handle nuances of spoken language like pitch or prosody. Now, a team of scientists at the University of California, Davis has built a neural prosthesis that can instantly translate brain signals into sounds—phonemes and words. It may be the first real step we have taken toward a fully digital vocal tract.

Text messaging

“Our main goal is creating a flexible speech neuroprosthesis that enables a patient with paralysis to speak as fluently as possible, managing their own cadence, and be more expressive by letting them modulate their intonation,” says Maitreyee Wairagkar, a neuroprosthetics researcher at UC Davis who led the study. Developing a prosthesis ticking all these boxes was an enormous challenge because it meant Wairagkar’s team had to solve nearly all the problems BCI-based communication solutions have faced in the past. And there were quite a lot of problems.
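
The low-latency pipeline the article describes, decoding neural activity into phonemes window by window as it arrives rather than waiting for a full sentence, can be caricatured as follows. The "neural features" and the nearest-centroid classifier here are invented stand-ins; the actual prosthesis uses far richer signals and models.

```python
# Toy sketch of streaming phoneme decoding, loosely inspired by the
# brain-to-speech pipeline described above. Features and centroids are
# hypothetical placeholders, not data from the study.
import math

# Hypothetical per-phoneme feature centroids (e.g., averaged firing rates)
CENTROIDS = {
    "AH": (0.9, 0.1),
    "S":  (0.1, 0.9),
    "T":  (0.5, 0.5),
}

def decode_window(features):
    """Map one short window of neural features to the nearest phoneme."""
    return min(CENTROIDS, key=lambda p: math.dist(features, CENTROIDS[p]))

def stream_decode(feature_stream):
    """Decode windows as they arrive, yielding each phoneme immediately
    instead of buffering a whole sentence first."""
    for window in feature_stream:
        yield decode_window(window)

phonemes = list(stream_decode([(0.85, 0.2), (0.15, 0.95), (0.45, 0.55)]))
print(phonemes)  # one phoneme per incoming window
```

The design point this illustrates is the generator-style loop: each window is emitted as soon as it is decoded, which is what makes near-instantaneous speech output possible in principle.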

© UC Regents

  •  

Changing one gene can restore some tissue regeneration to mice

Regeneration is a trick many animals, including lizards, starfish, and octopuses, have mastered. Axolotls, a salamander species originating in Mexico, can regrow pretty much everything: severed limbs, eyes, parts of the brain, even the spinal cord. Mammals, though, have mostly lost this ability somewhere along their evolutionary path. Regeneration persists in a limited number of tissues in just a few mammalian species, such as rabbits and goats.

“We were trying to learn how certain animals lost their regeneration capacity during evolution and then put back the responsible gene or pathway to reactivate the regeneration program,” says Wei Wang, a researcher at the National Institute of Biological Sciences in Beijing. Wang’s team has found one of those inactive regeneration genes, activated it, and brought back a limited regeneration ability to mice that did not have it before.

Of mice and bunnies

Wang and his colleagues’ idea was to run a comparative study of how wound healing works in regenerating and non-regenerating mammalian species. They chose rabbits as their regenerating mammal and mice as the non-regenerating species. As the reference organ, the team picked the ear pinna. “We wanted a relatively simple structure that was easy to observe and yet composed of many different cell types,” Wang says. The test involved punching holes in the ear pinnae of rabbits and mice and tracking the wound-repair process.

© Corinne von Nordmann

  •  

We have the first video of a plant cell wall being built

Plant cells are surrounded by an intricately structured protective coat called the cell wall. It’s built of cellulose microfibrils intertwined with polysaccharides like hemicellulose or pectin. We knew what plant cells look like without their walls, and we know what they look like when the walls are fully assembled, but we had never seen the wall-building process in action. “We knew the starting point and the finishing point, but had no idea what happens in between,” says Eric Lam, a plant biologist at Rutgers University. He’s a co-author of the study that caught wall-building plant cells in action for the first time. And once we saw how cell wall building worked, it looked nothing like the diagrams in biology textbooks.

Camera-shy builders

Plant cells without walls, known as protoplasts, are very fragile, and it has been difficult to keep them alive under a microscope for the several hours needed for them to build walls. Plant cells are also very light-sensitive, and most microscopy techniques require pointing a strong light source at them to get good imagery.

Then there was the issue of tracking their progress. “Cellulose is not fluorescent, so you can’t see it with traditional microscopy,” says Shishir Chundawat, a biologist at Rutgers. “That was one of the biggest issues in the past.” The only way to see it is to attach a fluorescent marker to it. Unfortunately, the markers typically used to label cellulose either bound to other compounds or were toxic to the plant cells. Given their fragility and light sensitivity, the cells simply couldn’t survive very long with toxic markers on top of everything else.

© Hyun Huh et al.

  •  

Bonobos’ calls may be the closest thing to animal language we’ve seen

Bonobos, great apes related to us and chimpanzees that live in the Democratic Republic of the Congo, communicate with vocal calls including peeps, hoots, yelps, grunts, and whistles. Now, a team of Swiss scientists led by Melissa Berthet, an evolutionary anthropologist at the University of Zurich, has discovered that bonobos can combine these basic sounds into larger semantic structures. In these combinations, the meaning is more than just the sum of the individual calls—a trait known as non-trivial compositionality, which was once thought to be uniquely human.

To demonstrate this, Berthet and her colleagues built a database of 700 bonobo calls and deciphered them using methods drawn from distributional semantics, the approach we’ve relied on to reconstruct long-lost languages like Etruscan or Rongorongo. For the first time, we have a glimpse into what bonobos mean when they call to each other in the wild.

Context is everything

The key idea behind distributional semantics is that when words appear in similar contexts, they tend to have similar meanings. To decipher an unknown language, you need to collect a large corpus of words and turn those words into vectors—mathematical representations that let you place them in a multidimensional semantic space. The second thing you need is context data, which tells you the circumstances in which these words were used (that gets vectorized, too). When you map your word vectors onto context vectors in this multidimensional space, what usually happens is that words with similar meaning end up close to each other. Berthet and her colleagues wanted to apply the same trick to bonobos’ calls. That seemed straightforward at first glance, but proved painfully hard to execute.
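
The core trick described above, representing each call by the contexts it occurs in and then comparing those vectors, can be sketched in a few lines. The calls, contexts, and counts below are invented placeholders, not data from the Berthet study, and real distributional semantics uses far larger corpora and denser embeddings.

```python
# Toy distributional-semantics sketch: calls used in similar contexts
# end up with similar context-count vectors. All data is hypothetical.
from collections import Counter
import math

# Each observation pairs a call with the context it was recorded in.
observations = [
    ("peep", "feeding"), ("peep", "feeding"), ("peep", "travel"),
    ("yelp", "feeding"), ("yelp", "feeding"),
    ("hoot", "alarm"), ("hoot", "alarm"), ("hoot", "travel"),
]

contexts = sorted({c for _, c in observations})

def vectorize(call):
    """Turn a call into a vector of context co-occurrence counts."""
    counts = Counter(c for w, c in observations if w == call)
    return [counts[c] for c in contexts]

def cosine(u, v):
    """Cosine similarity: 1.0 means identical usage profiles."""
    dot = sum(a * b for a, b in zip(u, v))
    norm = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
    return dot / norm

# Calls that appear in similar contexts land close together:
print(cosine(vectorize("peep"), vectorize("yelp")))  # high similarity
print(cosine(vectorize("peep"), vectorize("hoot")))  # low similarity
```

Mapping call vectors against context vectors in a shared space, as the researchers did, is the multidimensional generalization of this pairwise comparison.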

© USO
