https://techxplore.com/news/2022-01-machine-intelligence-soft-machines.html


JANUARY 27, 2022

Machine intelligence builds soft machines

by University of Maryland

Fig. 1: Three-stage framework for construction of a machine learning-enabled prediction model capable of automatic strain sensor design for soft machines. Credit: DOI: 10.1038/s42256-021-00434-8

Soft machines—a subcategory of robotics that uses deformable materials instead of rigid links—are an emerging technology commonly used in wearable robotics and biomimetics (e.g., prosthetic limbs). Soft robots offer remarkable flexibility, outstanding adaptability, and evenly distributed force, providing safer human-machine interactions than conventional hard and stiff robots.

An essential component of soft machines is the high-precision strain sensor, which monitors the strain changes of each soft body unit to achieve a high-precision control loop. Several new challenges arise here, however. First, the complex movements of soft machines require strain sensors that can monitor a wide strain range, from below 5% to above 200%, which exceeds the capabilities of conventional strain sensors. Second, monitoring the coordinated motions of a soft machine requires multiple strain sensors that satisfy different sensing tasks for separate robotic units, which demands tedious trial-and-error testing.

To circumvent this problem, a University of Maryland (UMD) research team led by Po-Yen Chen—a professor of chemical and biomolecular engineering at UMD with a dual appointment in the Maryland Robotics Center—has created a machine learning (ML) framework to facilitate the construction of a prediction model that can carry out two-way design tasks: (1) predict sensor performance from a fabrication recipe and (2) recommend feasible fabrication recipes for a desired strain sensor. In a nutshell, the group has designed a machine intelligence that accelerates the design of soft machines.
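
The two design directions can be sketched with a toy model. The recipe parameters (`filler_pct`, `thickness_um`) and the performance formula below are invented for illustration; they are not the authors' model, which is learned from experimental data rather than hand-written.

```python
# Toy sketch of the two-way design tasks (NOT the paper's trained model):
# a forward predictor maps a fabrication recipe to sensor performance, and
# inverse design searches candidate recipes for ones meeting a target.

def predict_strain_range(recipe):
    """Forward task: recipe -> maximum measurable strain in % (toy formula)."""
    filler, thickness = recipe["filler_pct"], recipe["thickness_um"]
    return 250 - 2.0 * filler - 0.5 * thickness

def recommend_recipes(target_strain, candidates):
    """Inverse task: return candidate recipes predicted to meet the target."""
    return [r for r in candidates if predict_strain_range(r) >= target_strain]

candidates = [
    {"filler_pct": f, "thickness_um": t}
    for f in (10, 20, 30) for t in (50, 100, 150)
]
feasible = recommend_recipes(200, candidates)
print(len(feasible), "of", len(candidates), "candidate recipes qualify")
```

In the actual framework, the forward predictor is a model trained (with active learning and data augmentation) on fabricated sensors, and the inverse search runs over that learned model instead of a closed-form expression.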

“What we’ve essentially created is a high-accuracy prediction software—based on a machine learning framework—capable of designing a wide range of strain sensors that can be integrated into diverse soft machines,” said Chen. “To use a food analogy, we gave a list of ingredients to a ‘chef,’ and that chef is able to design the perfect meal based on individual tastes of the customer.”

This technology can be used in the fields of advanced manufacturing, underwater robot design, prosthesis design, and beyond.

This study was published in Nature Machine Intelligence on January 26, 2022. To learn more about Dr. Chen’s work, please visit the group website.




More information: Haitao Yang et al, Automatic strain sensor design via active learning and data augmentation for soft machines, Nature Machine Intelligence (2022). DOI: 10.1038/s42256-021-00434-8

Journal information: Nature Machine Intelligence

Provided by University of Maryland

https://www.businesslive.co.za/bt/business-and-economy/2022-01-30-musk-bets-on-beating-human-drivers-this-year-and-on-humanoid-factory-workers/

Musk bets on beating human drivers this year — and on humanoid factory workers

Audacious promises face major challenges, from technology to regulation

BL PREMIUM | 30 JANUARY 2022 – 07:16 | AGENCY STAFF

Tesla’s most important products this year and next will not be cars, CEO Elon Musk said this week, but software that drives them autonomously and a humanoid robot the company expects will help out in the factory.

The audacious promises by the best-known billionaire in the electric car industry face major challenges, from technology to regulation. Tesla and other auto tech companies have missed their targets for self-driving software for years.

https://scitechdaily.com/twist-mits-new-programming-language-for-quantum-computing/


Twist: MIT’s New Programming Language for Quantum Computing

TOPICS: Computer Science, CSAIL, MIT, Quantum Computing

By RACHEL GORDON, MIT CSAIL JANUARY 26, 2022


Time crystals. Microwaves. Diamonds. What do these three disparate things have in common?

Quantum computing. Unlike traditional computers that use bits, quantum computers use qubits to encode information as zeros or ones, or both at the same time. Coupled with a cocktail of forces from quantum physics, these refrigerator-sized machines can process a whole lot of information — but they’re far from flawless. Just as with our regular computers, we need the right programming languages to properly compute on quantum computers.

Programming quantum computers requires awareness of something called “entanglement,” a computational multiplier for qubits of sorts, which translates to a lot of power. When two qubits are entangled, actions on one qubit can change the value of the other, even when they are physically separated, giving rise to Einstein’s characterization of “spooky action at a distance.” But that potency is equal parts a source of weakness. When programming, discarding one qubit without being mindful of its entanglement with another qubit can destroy the data stored in the other, jeopardizing the correctness of the program.

Scientists from MIT’s Computer Science and Artificial Intelligence (CSAIL) aimed to do some unraveling by creating their own programming language for quantum computing called Twist. Twist can describe and verify which pieces of data are entangled in a quantum program, through a language a classical programmer can understand. The language uses a concept called purity, which enforces the absence of entanglement and results in more intuitive programs, with ideally fewer bugs. For example, a programmer can use Twist to say that the temporary data generated as garbage by a program is not entangled with the program’s answer, making it safe to throw away.
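
Twist’s notion of purity maps onto a standard quantum-information quantity: the purity of a qubit’s reduced state after its partner is discarded. The sketch below is a generic statevector calculation, not Twist code (the article shows none): tracing out one half of an entangled Bell pair leaves a mixed state (purity 0.5), so discarding it would destroy information, while an unentangled product state stays pure (purity 1.0) and is safe to throw away.

```python
import numpy as np

def reduced_state(psi):
    """Density matrix of qubit A after tracing out qubit B of a 2-qubit state."""
    m = psi.reshape(2, 2)   # amplitudes indexed by (qubit A, qubit B)
    return m @ m.conj().T   # rho_A[i, j] = sum_k psi[i, k] * conj(psi[j, k])

def purity(rho):
    """Tr(rho^2): 1.0 for a pure state, 0.5 for a maximally mixed qubit."""
    return float(np.trace(rho @ rho).real)

bell = np.array([1, 0, 0, 1]) / np.sqrt(2)   # entangled: (|00> + |11>)/sqrt(2)
product = np.array([1, 1, 1, 1]) / 2         # unentangled: |+> tensor |+>

print(purity(reduced_state(bell)))     # 0.5 -> discarding B loses information
print(purity(reduced_state(product)))  # 1.0 -> B can be discarded safely
```

This is exactly the hazard the language guards against: a program that drops the “garbage” qubit of the Bell pair silently corrupts its partner, while a qubit proven unentangled can be discarded without affecting the answer.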

While the nascent field of quantum computing can feel flashy and futuristic, quantum computers have the potential for computational breakthroughs in classically unsolvable tasks, like cryptographic and communication protocols, search, and computational physics and chemistry. Credit: Graham Carlow/IBM

While the nascent field can feel a little flashy and futuristic, with images of mammoth wiry gold machines coming to mind, quantum computers have potential for computational breakthroughs in classically unsolvable tasks, like cryptographic and communication protocols, search, and computational physics and chemistry. One of the key challenges in computational sciences is dealing with the complexity of the problem and the amount of computation needed. Whereas a classical digital computer would need an exponentially large number of bits to run such a simulation, a quantum computer could do it, potentially, using a very small number of qubits — if the right programs are there.
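
The “exponentially large number of bits” is concrete: an n-qubit state has 2^n complex amplitudes, so merely storing it classically (at 16 bytes per amplitude) blows up fast.

```python
# Classical memory needed just to store an n-qubit statevector:
# 2**n complex amplitudes at 16 bytes each (complex128).
def statevector_bytes(n_qubits):
    return 16 * 2 ** n_qubits

for n in (10, 30, 50):
    print(f"{n} qubits: {statevector_bytes(n) / 1e9:.3g} GB")
```

Thirty qubits already need about 17 GB; by fifty qubits the storage alone runs to millions of gigabytes, beyond any existing machine, which is why hardware that holds the state natively could in principle do what classical simulation cannot.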

“Our language Twist allows a developer to write safer quantum programs by explicitly stating when a qubit must not be entangled with another,” says Charles Yuan, an MIT PhD student in electrical engineering and computer science and the lead author on a new paper about Twist. “Because understanding quantum programs requires understanding entanglement, we hope that Twist paves the way to languages that make the unique challenges of quantum computing more accessible to programmers.”

Yuan wrote the paper alongside Chris McNally, a PhD student in electrical engineering and computer science who is affiliated with the MIT Research Laboratory of Electronics, as well as MIT Assistant Professor Michael Carbin. They presented the research at last week’s 2022 Symposium on Principles of Programming Languages (POPL) conference in Philadelphia.

Untangling quantum entanglement

Imagine a wooden box that has a thousand cables protruding out from one side. You can pull any cable all the way out of the box, or push it all the way in.

After you do this for a while, the cables form a pattern of bits — zeros and ones — depending on whether they’re in or out. This box represents the memory of a classical computer. A program for this computer is a sequence of instructions for when and how to pull on the cables.

Now imagine a second, identical-looking box. This time, you tug on a cable, and see that as it emerges, a couple of other cables are pulled back inside. Clearly, inside the box, these cables are somehow entangled with each other.

The second box is an analogy for a quantum computer, and understanding the meaning of a quantum program requires understanding the entanglement present in its data. But detecting entanglement is not straightforward. You can’t see into the wooden box, so the best you can do is try pulling on cables and carefully reason about which are entangled. In the same way, quantum programmers today have to reason about entanglement by hand. This is where the design of Twist helps massage some of those interlaced pieces.

The scientists designed Twist to be expressive enough to write out programs for well-known quantum algorithms and identify bugs in their implementations. To evaluate Twist’s design, they modified the programs to introduce some kind of bug that would be relatively subtle for a human programmer to detect, and showed that Twist could automatically identify the bugs and reject the programs.

They also measured how well the programs performed in practice in terms of runtime, which had less than 4 percent overhead over existing quantum programming techniques.

For those wary of quantum’s “seedy” reputation in its potential to break encryption systems, Yuan says it’s still not very well known to what extent quantum computers will actually be able to reach their performance promises in practice. “There’s a lot of research that’s going on in post-quantum cryptography, which exists because even quantum computing is not all-powerful. So far, there’s a very specific set of applications in which people have developed algorithms and techniques where a quantum computer can outperform classical computers.”

An important next step is using Twist to create higher-level quantum programming languages. Most quantum programming languages today still resemble assembly language, stringing together low-level operations, without mindfulness towards things like data types and functions, and what’s typical in classical software engineering.

“Quantum computers are error-prone and difficult to program. By introducing and reasoning about the ‘purity’ of program code, Twist takes a big step towards making quantum programming easier by guaranteeing that the quantum bits in a pure piece of code cannot be altered by bits not in that code,” says Fred Chong, the Seymour Goodman Professor of Computer Science at the University of Chicago and chief scientist at Super.tech.

Reference: “Twist: Sound Reasoning for Purity and Entanglement in Quantum Programs” by Charles Yuan, Christopher McNally and Michael Carbin, POPL 2022.

The work was supported, in part, by the MIT-IBM Watson AI Lab, the National Science Foundation, and the Office of Naval Research.


https://www.reuters.com/business/energy/researchers-achieve-milestone-path-toward-nuclear-fusion-energy-2022-01-26/


Researchers achieve milestone on path toward nuclear fusion energy

By Will Dunham

4 minute read


WASHINGTON, Jan 26 (Reuters) – U.S. government scientists said on Wednesday they have taken an important step in the long trek toward making nuclear fusion – the very process that powers stars – a viable energy source for humankind.

Using the world’s largest laser, the researchers coaxed fusion fuel for the first time to heat itself beyond the heat they zapped into it, achieving a phenomenon called a burning plasma that marked a stride toward self-sustaining fusion energy.

The energy produced was modest – about the equivalent of nine nine-volt batteries of the kind that power smoke detectors and other small devices. But the experiments at a Lawrence Livermore National Laboratory facility in California represented a milestone in the decades-long quest to harness fusion energy, even as the researchers cautioned that years of more work are needed.

The experiments produced the self-heating of matter in a plasma state through nuclear fusion, which is the combining of atomic nuclei to release energy. Plasma is one of the various states of matter, alongside solid, liquid and gas.

“If you want to make a campfire, you want to get the fire hot enough that the wood can keep itself burning,” said Alex Zylstra, an experimental physicist at Lawrence Livermore National Laboratory – part of the U.S. Energy Department – and lead author of the research published in the journal Nature.

“This is a good analogy for a burning plasma, where the fusion is now starting to become self-sustaining,” Zylstra said.

The scientists directed 192 laser beams toward a small target containing a capsule less than a tenth of an inch (about 2 mm) in diameter filled with fusion fuel consisting of a plasma of deuterium and tritium – two isotopes, or forms, of hydrogen.

At very high temperatures, the nucleus of the deuterium and the nucleus of the tritium fuse, a neutron and a positively charged particle called an “alpha particle” – consisting of two protons and two neutrons – emerge, and energy is released.

The Target Bay of the National Ignition Facility at the Lawrence Livermore National Laboratory in Livermore, California, U.S., is seen in an undated handout image. NIF's 192 laser beams converge at the center of this giant sphere to make a tiny hydrogen fuel pellet implode.  Damien Jemison/Handout via REUTERS
The Target Chamber of the National Ignition Facility is seen at the Lawrence Livermore National Laboratory in Livermore, California, U.S., in an undated handout image. A service system lift allows technicians to access the Target Chamber's interior for inspection and maintenance.  Lawrence Livermore National Laboratory/Handout via REUTERS
The National Ignition Facility at the Lawrence Livermore National Laboratory is seen from the rooftop of a building across the street, in Livermore, California, U.S., in an undated handout image. Nuclear fusion research is conducted at this facility. Lawrence Livermore National Laboratory/Handout via REUTERS


“Fusion requires that we get the fuel incredibly hot in order for it to burn – like a regular fire, but for fusion we need about a hundred million degrees (Fahrenheit). For decades we’ve been able to cause fusion reactions to occur in experiments by putting a lot of heating into the fuel, but this isn’t good enough to produce net energy from fusion,” Zylstra said.

“Now, for the first time, fusion reactions occurring in the fuel provided most of the heating – so fusion is starting to dominate over the heating we did. This is a new regime called a burning plasma,” Zylstra said.

Unlike burning fossil fuels or the fission process of existing nuclear power plants, fusion offers the prospect of abundant energy without pollution, radioactive waste or greenhouse gases. Nuclear fission energy comes from splitting atoms. Fusion energy comes from fusing atoms together, just like inside stars, including our sun.

Private-sector ventures – dozens of companies and institutions – also are pursuing a fusion energy future, with some oil companies even investing.

“Fusion energy is the holy grail of clean limitless energy,” said Annie Kritcher of Lawrence Livermore National Laboratory, lead designer for the experiments conducted in 2020 and 2021 at the National Ignition Facility and first author of a companion paper published in the journal Nature Physics.

In these experiments, fusion produced about 10 times as much energy as went into heating the fuel, but less than 10% of the total amount of laser energy because the process remains inefficient, Zylstra said. The laser was used for only about 10 billionths of a second in each experiment, with fusion production lasting 100 trillionths of a second, Kritcher added.
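
Those two ratios can be sanity-checked with rough round numbers. The figures below are assumed for illustration (NIF’s laser delivers on the order of 1.9 MJ per shot, and only a small fraction of that couples into the fuel); they are not the paper’s exact values.

```python
# Rough, illustrative energy budget for a burning-plasma shot.
# All three numbers are assumed round figures, not the paper's exact values.
laser_energy_mj = 1.9     # total laser energy per NIF shot (approx.)
fuel_heating_mj = 0.015   # fraction that actually heats the fuel (assumed)
fusion_yield_mj = 0.17    # fusion energy released (order of magnitude, assumed)

print(f"yield / fuel heating: {fusion_yield_mj / fuel_heating_mj:.1f}x")
print(f"yield / laser energy: {100 * fusion_yield_mj / laser_energy_mj:.0f}%")
```

With these assumed inputs the yield comes out roughly ten times the heating that reached the fuel, yet under 10% of the total laser energy, matching the two ratios quoted above.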

Zylstra said he is encouraged by the progress.

“Making fusion a reality is an enormously complex technological challenge, and it will require serious investment and innovation to make it practical and economical,” Zylstra said. “I view fusion as a decadal-scale challenge for it to be a viable source of energy.”

https://www.salon.com/2022/01/30/elon-musk-doesnt-need-to-get-inside-your-brain-big-tech-is-already-there/

Elon Musk doesn’t need to get inside your brain. Big Tech is already there

The hype around Neuralink doesn’t tell the whole story of brain implants, nor of the tech industry

By MARY ELIZABETH WILLIAMS

PUBLISHED JANUARY 30, 2022 10:00AM (EST)

Is Elon Musk actually trying to hijack your mind?

It might sound weird, but it’s an understandable fear. This week, the billionaire entrepreneur and amateur sketch comedian made big news when his Neuralink Corporation posted a listing for a clinical trial director. Among the preferred qualifications for candidates for the high-level position? Experience with “Class III implantable neuromodulation devices.” And once a clinical trial director is in place, it can be assumed that clinical trials will soon follow.

Neuralink launched in 2017 with the promise, as the Wall Street Journal reported then, “to connect brains with computers.” With Elon Musk as one of the company’s founders, the reporting was from the beginning buzzy — and the subtext not-so-vaguely ominous.

Musk, after all, has been public in his concerns about the “existential” threat of AI, and his fears that “If you assume any rate of advancement in [artificial intelligence], we will be left behind by a lot.” It would stand to reason, then, that the entrepreneur would be curious about outsmarting our own creations. “Elon Musk’s Neuralink wants to plug into your brain,” USA Today announced at the time. “Time to screen ‘The Matrix,’ people.”

Now, with Neuralink apparently ramping up its endeavors, the concerns and the hyperbole are back. “The dystopian television show ‘Black Mirror’ has begun to feel less like fantasy as Elon Musk’s brain-implant startup, Neuralink, gears up for human trials,” wrote The Daily Beast this week. As a TechRadar headline put it: “Elon Musk’s Neuralink is one step closer to putting an implant in our brains,” implying that implants are all but inevitable for everyone. “In ten years’ time,” the author writes, “we could all be walking around with a computer in our heads.”


Far be it from me to undersell the unpleasantness of Elon Musk, or the appropriate unease that any Musk-related news might inspire. Some context here, however, is helpful. When TechRadar asks, “Do you want to let Elon Musk put an implant in your brain?” it creates a mental picture of the Tesla titan himself physically opening up all of our skulls, one by one, as we move along the factory conveyor belt on the way to being turned into Soylent Green. I just don’t think Musk has that kind of time.

Even more significantly, we need to understand that the potential of brain-computer interfaces (or brain machine interfaces) is still mostly just potential. Writing for MIT’s Technology Review in 2020, Antonio Regalado called Neuralink “neuroscience theater,” saying that while the company touts the dream of being able to “see radar with superhuman vision, discover the nature of consciousness…. None of these advances are close at hand, and some are unlikely to ever come about.”

While Musk can enthuse about creating a “Fitbit in your skull,” the immediate and practical applications of BCI are for conditions that inhibit movement and make communication difficult, like Parkinson’s and ALS. There already is exciting clinical work being done right now by scientists and medical researchers out of Massachusetts General Hospital, Stanford and other reputable institutions. Even Neuralink aims first “to help people with paralysis to regain independence through the control of computers and mobile devices,” so calm down.

That’s not to say that there’s no need to be concerned about where all of this is heading, and who’s interested in the technology. As Emily Willingham writes in her fantastic and timely “The Tailored Brain,” the US Department of Defense has been spearheading neurotechnology research for several years. The applications for improved physical movement and mental health are intriguing.

“Some of it is extremely useful and legitimate and necessary,” Willingham told Salon earlier this year. And some of it, because this is the Defense Advanced Research Projects Agency we’re talking about, “starts to get into creepy territory.” If you’re worried about Grimes’ ex-boyfriend — whose corporate culture sounds a lot less shimmeringly brilliant than his hype may indicate — you’re really not going to love what Uncle Sam could do with this stuff.

The good news here is our brains are not the only game in town. We experience our memories and our consciousness throughout our bodies, and we are only beginning to scratch the surface of our understanding of the implications there. More than that, though, we are not simply computers, despite all the reductive Silicon Valley-speak in the world. “Our brain is not a slot machine,” says Emily Willingham. “You can’t pop a quarter in, pull a lever, and then just hope for the best. There are so many pathways in our brains that act together. If you affect one, you’re going to be influencing the other. There’s not some direct target where you just hit the bullseye and that’s it.” Our brains are not as easily hacked as scary headlines might initially imply, at least not yet.

But Willingham also observes in “The Tailored Brain” that we might want to consider how much of our movements and habits and desires already belong to the hive. We just happen to carry a big collection of our thoughts and ideas around on the outside. Our behavior is easy to track and our opinions are not at all difficult to tweak. Earlier this week, I found my way to a bar because the small device I held in my hand detected my location from outer space and told me, in the lilting Irish accent I had chosen for it, exactly how to get there and how long the journey would take. Then I came home and watched a video from a fashion designer on that same device, and now I can’t stop seeing ads for sweaters. Or Houlihan’s. That’s just a few moments of a very typical day for a whole lot of us. You don’t want your mind being manipulated? The horses left that barn a long time ago, when we traded our privacy for GPS and Google. The Elon Musks of the world don’t have to plant a chip in your head. They’re already in there.

MARY ELIZABETH WILLIAMS

Mary Elizabeth Williams is a senior writer for Salon and author of “A Series of Catastrophes & Miracles.”

https://toronto.citynews.ca/2022/01/30/housework-or-sleep-study-says-it-depends-when-you-were-born-2/

Housework or sleep? Study says it depends when you were born

Group of Gen Xers are shown on a staircase. UNSPLASH/Maria Teneva

By Mike Schneider, The Associated Press

Posted Jan 30, 2022, 10:57AM EST.

When Gen Xer Amy Rottier went shopping for her young children two decades ago, she drove to a mall and browsed for what she needed. Her millennial daughter, Helen, who is studying for a doctorate and doesn’t have children, buys anything she needs with a click on her iPad.

The women, ages 50 and 25, respectively, illustrate the pace of change from one generation to the next in what people do in an average day. The changes were revealed in a study released last week by the U.S. Bureau of Labor Statistics.

Generation X women were more likely to do housework, care for children, read for pleasure and do lawn work, the study found. Millennial women were more inclined to exercise, spend leisure time on computers, take care of their pets and sleep.

The report uses American Time Use Survey data to capture how people lived at a point in time between the ages of 23 and 38. For Amy Rottier’s generation, that was in 2003. For her daughter Helen, it was in 2019 – a year before the global coronavirus pandemic dramatically altered patterns of living. The report reflects changes for men as well as women.

Both generations spent the same amount of time working, and men worked longer hours than women because women were more likely to work part time. The two generations spent about the same time on leisure and sports activities, but Gen Xers were more likely than millennials to have children and own homes.


Even though viewing television was the top leisure activity for both generations, millennial men spent 18 fewer minutes a day watching TV than their Gen X counterparts. They appear to have shifted that time into playing games. On an average day, more millennials were participating in sports, recreation and exercise than their Gen X peers.

Changes in technology weighed heavily in people’s choices, according to the report. Social media was in its infancy in 2003, smartphones weren’t widespread and Cyber Monday hadn’t yet been invented by retail marketing gurus.

“Millennials have an advantage in that they were able to do a lot of things from the comfort of their home, without getting in their car and going to a store or bank. It saves on time. For Generation X, that wasn’t available when they were their age,” said Michelle Freeman, the senior economist at the Bureau of Labor Statistics who wrote the report. “You can’t ignore the technological improvements from 2003 to 2019, and that is definitely a factor.”

Decisions about having children figured in, too.

“Taking care of kids, that was what I was doing the majority of my free time,” said Amy Rottier, who has five children with her husband, Eric, in Madison, Wisconsin. “For me, at that point, leisure time was my husband telling me to take a bath and he would wrangle the kids and put them to bed.”

As someone in her mid-20s now, Helen Rottier, who lives in Chicago, said the idea of having children is a distant proposition.

“I’m still working on my degree, and then I want to get settled into my career,” she said. “With my friends, we are now at the same age our parents were when we were born, and we aren’t thinking of having kids yet.”

Millennials were more likely to delay having families compared with members of Generation X, who were born between 1965 and 1980. Millennials, born between 1981 and 1996, were more likely to have advanced degrees and less likely to be married than Gen Xers.

Gen Xers spent more time shopping for goods, which is likely because the act of physically going to a brick-and-mortar store took more time than shopping online. Millennial women spent less time per day reading for pleasure compared with Generation X women. Freeman said reading has declined for all age groups in the past two decades, going from 22 minutes a day on average in 2003 to 16 minutes a day in 2019.

Millennials also slept 22 minutes per day longer than their Gen X counterparts, which Freeman said may reflect shifting attitudes about the importance of sleep.

“My parents are baby boomers and they worked a lot,” she said. “Sleeping a lot was considered lazy. We now respect the fact that more sleep is good for our health.”

Less likely to have children than their Gen X peers, millennials spent nearly twice as much time on animal and pet care activities on a given day as Gen Xers did in 2003, according to the report. Then there’s the difference in time spent gardening or keeping up a yard, which millennials spent about half an hour less a day doing, primarily because they were less likely to own a home.

“I don’t know if I will ever have a house with a lawn,” Helen Rottier said. “It may be different in the future, but right now, I don’t see any appeal in a lawn. Why would I need to take care of a lawn?”

https://neurosciencenews.com/breathing-sleep-19984/

Breathing: The Master Clock of the Sleeping Brain

Featured · Neuroscience · Open Neuroscience Articles · January 29, 2022

Summary: During sleep, breathing entrains and coordinates neural activity across the limbic system, and enhances memory consolidation.

Source: LMU

While we sleep, the brain is not switched off, but is busy with “saving” the important memories of the day. To achieve that, brain regions are synchronized to coordinate the transmission of information between them. Yet, the mechanisms that enable this synchronization across multiple remote brain regions are not well understood.

Traditionally, these mechanisms were sought in correlated activity patterns within the brain. However, LMU neuroscientists Prof. Anton Sirota and Dr. Nikolas Karalis have now been able to show that breathing acts as a pacemaker that entrains the various brain regions and synchronizes them with each other.

Breathing is the most persistent and essential bodily rhythm and exerts a strong physiological effect on the autonomous nervous system. It is also known to modulate a wide range of cognitive functions such as perception, attention, and thought structure. However, the mechanisms of its impact on cognitive function and the brain are largely unknown.

The scientists performed large-scale in vivo electrophysiological recordings in mice, from thousands of neurons across the limbic system. They showed that respiration entrains and coordinates neuronal activity in all investigated brain regions – including the hippocampus, medial prefrontal and visual cortex, thalamus, amygdala, and nucleus accumbens – by modulating the excitability of these circuits in an olfaction-independent way.
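
A standard way to quantify such entrainment is the phase-locking of spikes to the respiratory rhythm. The toy simulation below is purely illustrative, with invented rates and modulation depth (it is not the study’s data or analysis pipeline): a unit whose firing probability is modulated by breathing phase yields a clearly nonzero mean resultant vector length.

```python
import math
import random

random.seed(0)
fs = 100.0        # sampling rate in Hz (assumed)
breath_hz = 3.0   # rough mouse respiration rate during sleep (assumed)
n_samples = 20000

# Toy neuron: firing probability is modulated by breathing phase,
# i.e. excitability peaks once per breath, as in phase entrainment.
spike_phases = []
for t in range(n_samples):
    phase = (2 * math.pi * breath_hz * t / fs) % (2 * math.pi)
    p_spike = 0.02 * (1 + 0.8 * math.cos(phase))
    if random.random() < p_spike:
        spike_phases.append(phase)

# Phase-locking strength = length of the mean resultant vector
# (0 = spikes ignore breathing, 1 = perfectly locked).
mx = sum(math.cos(p) for p in spike_phases) / len(spike_phases)
my = sum(math.sin(p) for p in spike_phases) / len(spike_phases)
print(round(math.hypot(mx, my), 2))
```

An unmodulated unit run through the same analysis would give a resultant length near zero, which is how entrained and non-entrained regions can be told apart.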


“Thus, we were able to prove the existence of a novel non-olfactory, intracerebral mechanism that accounts for the entrainment of distributed circuits by breathing, which we termed ‘respiratory corollary discharge,’” says Karalis, who is currently a research fellow at the Friedrich Miescher Institute for Biomedical Research in Basel.

“Our findings identify the existence of a previously unknown link between respiratory and limbic circuits and are a departure from the standard belief that breathing modulates brain activity via the nose-olfactory route,” emphasizes Sirota.

This mechanism mediates the coordination of sleep-related activity in these brain regions, which is essential for memory consolidation, and provides the means for co-modulating the synchronous dynamics of cortico-hippocampal circuits. According to the authors, these results represent a major step forward and provide the foundation for new mechanistic theories that incorporate the respiratory rhythm as a fundamental mechanism underlying the communication of distributed systems during memory consolidation.

About this sleep research news

Author: Constanze Drewlo
Source: LMU
Contact: Constanze Drewlo – LMU
Image: The image is in the public domain

Original Research: Open access.
“Breathing coordinates cortico-hippocampal dynamics in mice during offline states” by Anton Sirota et al. Nature Communications


Abstract


Breathing coordinates cortico-hippocampal dynamics in mice during offline states

Network dynamics have been proposed as a mechanistic substrate for the information transfer across cortical and hippocampal circuits. However, little is known about the mechanisms that synchronize and coordinate these processes across widespread brain regions during offline states.

Here we address the hypothesis that breathing acts as an oscillatory pacemaker, persistently coupling distributed brain circuit dynamics. Using large-scale recordings from a number of cortical and subcortical brain regions in behaving mice, we uncover the presence of an intracerebral respiratory corollary discharge that modulates neural activity across these circuits.

During offline states, the respiratory modulation underlies the coupling of hippocampal sharp-wave ripples and cortical DOWN/UP state transitions, which mediates systems memory consolidation.

These results highlight breathing, a perennial brain rhythm, as an oscillatory scaffold for the functional coordination of the limbic circuit that supports the segregation and integration of information flow across neuronal networks during offline states.

https://fortune.com/2022/01/29/neuralink-elon-musk-brain-implant-startup-high-pressure-workplace/

Neuralink former employees say Elon Musk applies relentless pressure and instills a culture of blame

BY JEREMY KAHN AND JONATHAN VANIAN

January 29, 2022 7:00 AM PST

Elon Musk has always said that Neuralink, the company he created in 2016 to build brain-computer interfaces, would do amazing things: Eventually, he says, it aims to allow humans to interact seamlessly with advanced artificial intelligence through thought alone. Along the way, it would help to cure people with spinal cord injuries and brain disorders ranging from Parkinson’s to schizophrenia.

Now the company is approaching a key test: a human clinical trial of its brain-computer interface (BCI). In December, Musk told a conference audience that “we hope to have this in our first humans” in 2022. In January, the company posted a job listing for a clinical trial director, an indication that it may be on track to meet Musk’s suggested timeline.

But even as it approaches this milestone, the company has been plagued by internal turmoil, including the loss of key members of the company’s founding team, according to a half-dozen former employees interviewed by Fortune—in no small part because of the pressure-cooker culture Musk has created.

Most of these former employees requested anonymity, concerned about violating nondisclosure agreements and the possibility of drawing Musk’s ire. Musk and Neuralink did not respond to multiple requests for comment for this story.

Musk has put the startup under unrelenting pressure to meet unrealistic timelines, these former employees say. “There was this top-down dissatisfaction with the pace of progress even though we were moving at unprecedented speeds,” says one former member of Neuralink’s technical staff, who worked at the company in 2019. “Still Elon was not satisfied.” Multiple staffers say company policy, dictated by Musk, forbade employees from faulting outside suppliers or vendors for a delay; the person who managed that relationship had to take responsibility for missed deadlines, even those outside their control.

Employees were constantly anxious about angering Musk by not meeting his ambitious schedules, former employees said. “Everyone in that whole empire is just driven by fear,” another former employee says, referring to Musk’s businesses, including Neuralink. This culture of blame and fear, former employees said, contributed to a high rate of turnover. Of the eight scientists Musk brought in to help establish Neuralink with him, only two, Dongjin Seo and Paul Merolla, remain at the company.

Read the Fortune feature: “Inside Neuralink, Elon Musk’s secretive startup: A culture of blame, impossible deadlines, and a missing CEO.”

The pressure could be particularly problematic because of the multiple tough scientific and engineering challenges Neuralink was tackling. Tim Hanson, a scientist at the Janelia Research Campus, part of the Howard Hughes Medical Institute in Ashburn, Va., was part of Neuralink’s founding team, working on its surgical robot as well as animal studies using brain-computer interfaces. “There’s this mismatch,” he says, between the speed at which engineering obstacles can be solved and the more deliberative pace of fundamental science. “Basic science is basically slow,” says Hanson, who left the company in 2018.

Engineers sometimes had to make decisions about issues such as electrode design before relevant data was available from scientific teams working on animal research. Animal research can take months and years; the engineers were under pressure to act in days and weeks. There were also delays caused by Neuralink’s need to fabricate custom-designed computer chips, one former employee said. Musk, meanwhile, wanted to move into human implantation as fast as possible.

Tensions of this sort aren’t atypical for startups working at the cutting edge of a new technology, and similar pressure from Musk has been chronicled at his other companies, including SpaceX and Tesla. But the issues certainly add to doubts about whether Neuralink will be able to live up to the hype Musk has created with his breezy pronouncements about what its technology will be able to do. For more on Neuralink, and what it has and hasn’t managed to achieve, read Fortune’s full story here.

https://neurosciencenews.com/visual-stability-19985/

Everything We See Is a Mash-up of the Brain’s Last 15 Seconds of Visual Information

Featured · Neuroscience · Visual Neuroscience · January 29, 2022

Summary: Researchers reveal how the brain creates an illusion of visual stability.

Source: The Conversation

Our eyes are continuously bombarded by an enormous amount of visual information – millions of shapes, colours and ever-changing motion all around us. For the brain, this is no easy feat.

On the one hand, the visual world alters continuously because of changes in light, viewpoint and other factors. On the other, our visual input constantly changes due to blinking and the fact that our eyes, head and body are frequently in motion.

To get an idea of the “noisiness” of this visual input, place a phone in front of your eyes and record a live video while you are walking around and looking at different things. The jittery, messy result is exactly what your brain deals with in every moment of your visual experience.

This can also be seen in the video below. The white circle on the right shows potential eye movements, and the blurry blob on the left reveals the jumpy visual input in every moment. (Video: https://www.youtube.com/embed/Lub3lsJdko0 · Credit: Sebastiaan Mathôt)

Yet, seeing never feels like work for us. Rather than perceiving the fluctuations and visual noise that a video might record, we perceive a consistently stable environment. So how does our brain create this illusion of stability? This process has fascinated scientists for centuries and it is one of the fundamental questions in vision science.

The time machine brain

In our latest research, we discovered a new mechanism that, among others, can explain this illusory stability. The brain automatically smoothes our visual input over time. Instead of analysing every single visual snapshot, we perceive in a given moment an average of what we saw in the past 15 seconds. By smoothing our input so that objects appear more similar to each other over time, our brain tricks us into perceiving a stable environment. Living “in the past” can explain why we do not notice subtle changes that occur over time.

In other words, the brain is like a time machine which keeps sending us back in time. It’s like an app that consolidates our visual input every 15 seconds into one impression so that we can handle everyday life. If our brains were always updating in real time, the world would feel like a chaotic place with constant fluctuations in light, shadow and movement. We would feel like we were hallucinating all the time.

We created an illusion to illustrate how this stabilisation mechanism works. Looking at the video below, the face on the left side slowly ages for 30 seconds, and yet, it is very difficult to notice the full extent of the change in age. In fact, observers perceive the face as ageing more slowly than it actually is.

To test this illusion we recruited hundreds of participants and asked them to view close-ups of faces morphing chronologically in age in 30-second timelapse videos. When asked to tell the age of the face at the very end of the video, the participants almost consistently reported the age of the face that was presented 15 seconds before. (Video: https://www.youtube.com/embed/cLqVwvdOzuk · Credit: Mauro Manassi)

As we watch the video, we are continuously biased towards the past, and so the brain constantly sends us back to the previous 10 to 15 seconds (where the face was younger). Instead of seeing the latest image in real time, humans actually see earlier versions because our brain’s refresh time is about 15 seconds. So this illusion demonstrates that visual smoothing over time can help stabilise perception.
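As a back-of-the-envelope sketch (a toy model of my own, not the researchers’ analysis), a trailing moving average over the last 15 seconds reproduces this kind of lag: applied to a face ageing steadily for 30 seconds, the averaged percept ends up well behind the true age at the end of the clip.

```python
import numpy as np

# Toy "continuity field": perception modelled as a trailing average of
# the last 15 seconds of input. The window length comes from the
# article; the sampling rate is an arbitrary assumption.
WINDOW_S = 15   # integration window (seconds)
FPS = 10        # hypothetical rate of "visual snapshots" per second

def perceived(signal, window_s=WINDOW_S, fps=FPS):
    """Trailing moving average over the past `window_s` seconds."""
    w = window_s * fps
    out = np.empty(len(signal))
    for i in range(len(signal)):
        out[i] = signal[max(0, i - w + 1) : i + 1].mean()
    return out

# A face ageing linearly from 20 to 30 "years" over a 30-second clip:
t = np.arange(0, 30, 1 / FPS)
true_age = 20 + t / 3
seen_age = perceived(true_age)

# By the end, the trailing average lags the true value by roughly half
# the window in time (about 7.5 s, i.e. ~2.5 "years" of ageing here).
print(true_age[-1] - seen_age[-1])
```

The lag of roughly half the window is consistent with participants reporting an age from several seconds in the past rather than the face currently on screen.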

What the brain is essentially doing is procrastinating. It’s too much work to constantly deal with every single snapshot it receives, so the brain sticks to the past because the past is a good predictor of the present. Basically we recycle information from the past because it’s more efficient, faster and less work.

This idea – which is also supported by other results – of mechanisms within the brain that continuously bias our visual perception towards our past visual experience is known as continuity fields. Our visual system sometimes sacrifices accuracy for the sake of a smooth visual experience of the world around us. This can explain why, for example, when watching a film we don’t notice subtle changes that occur over time, such as the difference between actors and their stunt doubles.

Repercussions

There are positive and negative implications to our brain operating with this slight lag when processing our visual world. The delay is great for preventing us from feeling bombarded by visual input every day, but it can also risk life-or-death consequences when absolute precision is needed.


For example, radiologists examine hundreds of images in batches, seeing several related images one after the other. When looking at an X-ray, clinicians are typically asked to identify any abnormalities and then classify them. During this visual search and recognition task, researchers have found that radiologists’ decisions were based not only on the present image, but also on images they had previously seen, which could have grave consequences for patients.


Our visual system’s sluggishness to update can make us blind to immediate changes because it grabs on to our first impression and pulls us toward the past. Ultimately, though, continuity fields promote our experience of a stable world. At the same time, it’s important to remember that the judgements we make every day are not totally based on the present, but strongly depend on what we have seen in the past.

Funding:

Mauro Manassi receives funding from the Swiss National Science Foundation fellowship and Carnegie Trust for the Universities of Scotland.

David Whitney receives funding from the National Institutes of Health (US).

About this visual neuroscience research news

Author: Mauro Manassi and David Whitney
Source: The Conversation
Contact: Mauro Manassi and David Whitney – The Conversation

https://www.thestar.com/news/canada/2022/01/27/squirrel-guts-might-hold-the-secret-to-making-space-travel-more-practical.html


Squirrel guts might hold the secret to making space travel more practical

Astronauts doing long stints in space must do a lot of exercising to keep their muscle mass from dwindling away. But hibernating animals can get the chemicals they need from their own urine, a Canadian researcher has found.

By Steve McKinley, Halifax Bureau · Thu., Jan. 27, 2022

A squirrel’s ability to repurpose its own pee to help it build muscle during hibernation might have important implications for the future of space travel, theorizes one Montreal researcher.

In a research paper published Thursday in Science, Matthew Regan, associate professor of animal physiology at the Université de Montréal, confirmed a theory that’s been bouncing around since the ’80s, one that hypothesizes that some animals are able to break down their urea — usually excreted as urine — and use the nitrogen extracted to build other proteins, and from there to build muscle tissue.

That “urea nitrogen salvage” theory was first advanced as an answer to the tricky question of how hibernating species — such as bears and squirrels — manage to last the winter on the fat reserves they’ve stored up in their bodies without significantly reducing the mass of their muscles.

“All hibernating mammals seem to have this amazing ability to preserve their muscle tissue structure and function over time,” says Regan. “For some species, that is nine months of inactivity and fasting. For us, even a week or two of bed rest or a week or two in the microgravity environment of space would be enough to see significant muscle atrophy or muscle shrinking.”

It’s why astronauts aboard the International Space Station spend so much of their time exercising — trying to stave off the inevitable muscle atrophy that occurs when spending any significant time in space.

But hibernating animals seem to be able to wake up and shake off a hibernation and go about their business as usual. The 13-lined ground squirrel — the subject of Regan’s research — emerges from hibernation to jump right into its mating season.

How they do that has been a mystery to researchers for years.

The urea nitrogen salvage theory hypothesizes that the hibernating animal has microbes in its gut that let it break down the urea into its component parts, which include nitrogen and carbon. It can then use that nitrogen to build more tissue protein, and thus more muscle.
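For scale, a quick calculation of my own (not from the paper) shows why urea is worth salvaging: nearly half its mass is nitrogen.

```python
# Back-of-the-envelope chemistry behind "urea nitrogen salvage":
# urea, CO(NH2)2, is unusually nitrogen-rich for a waste product.
ATOMIC_MASS = {"C": 12.011, "O": 15.999, "N": 14.007, "H": 1.008}
UREA = {"C": 1, "O": 1, "N": 2, "H": 4}  # atoms per urea molecule

molar_mass = sum(ATOMIC_MASS[el] * n for el, n in UREA.items())
nitrogen_fraction = ATOMIC_MASS["N"] * UREA["N"] / molar_mass

print(f"Urea molar mass: {molar_mass:.2f} g/mol")      # ~60 g/mol
print(f"Nitrogen mass fraction: {nitrogen_fraction:.1%}")  # ~46.6%
```

So for an animal cut off from dietary protein, every gram of urea a gut microbe cleaves returns almost half a gram of nitrogen for rebuilding tissue.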

This is the theory that Regan’s research with ground squirrels — “a little more tractable to work with than bears” — confirmed.

And it’s possible, he says, that those kinds of gut microbes might be adaptable to humans.

“The applications this has to space flight or to humans in bed-rest conditions is the fact that under both of those conditions, there is the chance for significant muscle shrinking or muscle atrophy,” he says.

“This is similar to what hibernators should theoretically experience during their months of inactivity, especially because they have deprived themselves of a dietary source of nitrogen, which is a critical building block in protein.”

But they don’t, says Regan, so they must be doing something that we’re not doing.

To find out what that was, Regan first looked at ruminants — animals like cows and deer — which are known to have a similar process to recycle nitrogen. From that, his team hypothesized a similar pathway in their squirrels and then designed experiments to look for it.

To do that, the squirrels were injected with urea that was “labelled” using isotopes of carbon and nitrogen that are relatively rare in nature, and which the experimenters could detect.

Because of the labelled urea, researchers were able to track the squirrels’ gut microbes breaking it down. They found high levels of the labelled carbon and nitrogen isotopes in the muscles and livers of their subjects, indicating that the urea was indeed being broken down and recycled to make other tissues.

And if squirrels can do it, there’s the possibility that humans might be able to as well.

However, gut microbiomes — the collection of microbes in the gut — are highly complex environments, so while Regan’s team has an idea of which bacteria group is responsible for the squirrels’ breakdown of urea, transferring it to humans is not as simple as merely guzzling a probiotic smoothie.

“The first thing is to understand exactly how they’re doing it at all those different steps. And then at that point, then we can have a better idea of how this process might be adapted to humans,” he says.

He adds that research conducted in the ’90s has shown that, under certain conditions, humans are capable of recycling the nitrogen in our urea in what appears to be the same way — though to a lesser extent than his ground squirrels.

So, it may be possible that the necessary mechanism for recycling urine for its nitrogen content already exists in the human gut, and it merely needs to be boosted.

And that has widespread implications for humans. While Regan is a self-acknowledged huge fan of space travel, he suspects that the future possible benefits of his research will first be seen on the ground.

“There are literally hundreds of millions of people in the world who are undernourished, and a hallmark of undernourishment is protein deprivation,” he says. “As we age all humans to some extent experience sarcopenia, which is a gradual reduction in muscle mass after the age of 40.

“So, if this mechanism is in fact safely translatable to humans, in theory, there are many, many examples of potential benefits to human health. And I suspect that if this is ever translated to humans, it will first be translated for earthbound reasons.”