MIT Dream Research Interacts Directly With an Individual’s Dreaming Brain and Manipulates the Content



“Dormio takes dream research to a new level, interacting directly with an individual’s dreaming brain and manipulating the actual content of their dreams,” says Robert Stickgold, director of the Center for Sleep and Cognition at Beth Israel Deaconess Medical Center. Credit: Helen Gao

Device not only helps record dream reports, but also guides dreams toward particular themes.

The study of dreams has entered the modern era in exciting ways, and researchers from MIT and other institutions have created a community dedicated to advancing the field, lending it legitimacy and expanding further research opportunities.    

In a new paper, researchers from the Media Lab’s Fluid Interfaces group introduce a novel method called “Targeted Dream Incubation” (TDI). This protocol, implemented through an app in conjunction with a wearable sleep-tracking sensor device, not only helps record dream reports, but also guides dreams toward particular themes by repeating targeted information at sleep onset, thereby enabling incorporation of this information into dream content. The TDI method and accompanying technology serve as tools for controlled experimentation in dream study, widening avenues for research into how dreams impact emotion, creativity, memory, and beyond.

The paper, “Dormio: A Targeted Dream Incubation Device,” is co-authored by lead researcher Adam Haar Horowitz and professor of media arts and sciences Pattie Maes, who is also head of the Fluid Interfaces group. Additional authors on the paper are Tony J. Cunningham, postdoc at Beth Israel Deaconess Medical Center and Harvard Medical School, and Robert Stickgold, director of the Center for Sleep and Cognition at Beth Israel Deaconess Medical Center and professor of psychiatry at Harvard Medical School.

Previous neuroscience studies, including work by sleep and cognition researcher Stickgold, show that hypnagogia (the earliest stage of sleep) is similar to the REM stage in terms of brainwaves and experience; unlike in REM, however, individuals can still hear and process audio during hypnagogia while they dream.

“This state of mind is trippy, loose, flexible, and divergent,” explains Haar Horowitz. “It’s like turning the notch up high on mind-wandering and making it immersive — being pushed and pulled with new sensations like your body floating and falling, with your thoughts quickly snapping in and out of control.”

To facilitate the TDI protocol, an interdisciplinary team at the Media Lab designed and developed Dormio, a wearable sleep-tracking device that detects hypnagogia and then delivers audio cues, timed to incoming physiological data at precise points in the sleep cycle, to make dream direction possible. Upon awakening, a person’s guided dream content can be used to complete tasks such as creative story writing, and compared experimentally with waking thought content.
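The loop described above — classify sleep stage from physiological signals, then time audio cues to hypnagogia — can be sketched in miniature. Everything in this example (signal names, thresholds, the toy classifier, the "think of a tree" prompt) is invented for illustration; it is not Dormio's actual implementation.

```python
# Illustrative sketch of a targeted-dream-incubation loop. All names,
# signals, and thresholds here are invented for the example; they are
# not Dormio's real classifier or cueing logic.

def classify_stage(heart_rate, muscle_tone):
    """Toy classifier: hypnagogia is marked here by dropping muscle tone
    and a slowing heart rate, with deep sleep below both thresholds."""
    if muscle_tone < 0.3 and heart_rate < 60:
        return "deep_sleep"
    if muscle_tone < 0.6 and heart_rate < 70:
        return "hypnagogia"
    return "awake"

def incubation_step(sample, theme_prompt, log):
    """One pass of the loop: cue at hypnagogia; rouse and collect a
    dream report if the sleeper drifts too deep."""
    stage = classify_stage(**sample)
    if stage == "hypnagogia":
        log.append(f"play audio: '{theme_prompt}'")  # steer dream content
    elif stage == "deep_sleep":
        log.append("gentle wake + record dream report")
    return stage

log = []
samples = [
    {"heart_rate": 75, "muscle_tone": 0.9},  # still awake
    {"heart_rate": 66, "muscle_tone": 0.5},  # drifting into hypnagogia
    {"heart_rate": 55, "muscle_tone": 0.2},  # slipping into deep sleep
]
stages = [incubation_step(s, "think of a tree", log) for s in samples]
print(stages)  # ['awake', 'hypnagogia', 'deep_sleep']
```

The key design point is the middle window: the cue must land after wakefulness ends but before hearing shuts off, which is why the protocol targets hypnagogia specifically.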

“Dormio takes dream research to a new level, interacting directly with an individual’s dreaming brain and manipulating the actual content of their dreams,” says Stickgold. “The potential value of Dormio for enhancing learning and creativity are literally mind-blowing.”

The Media Lab team’s first pilot study using Dormio demonstrated dream incubation and creativity augmentation in six people, and was presented at alt.CHI in 2018. Multiple scientists reached out to the team expressing interest in replicating the dream-control research. These requests led to the first Dream Engineering workshop, held at the Media Lab in January 2019 and organized by Maes, Haar Horowitz, and Judith Amores from the Fluid Interfaces group, along with Michelle Carr, visiting researcher from the University of Rochester Sleep and Neurophysiology Laboratory. The workshop brought together many of the world’s leading dream researchers, including pioneers such as Deirdre Barrett, Bjorn Rasch, Ken Paller, and Stephen LaBerge, to brainstorm new technologies for studying, recording, and influencing dreams.

The talks and technologies presented at the workshop further led to a Special Issue on Dream Engineering for the journal Consciousness and Cognition, with Maes, Haar Horowitz, Amores, and Carr serving as guest editors.

“Most sleep and dream studies have so far been limited to university sleep labs and have been very expensive, as well as cumbersome, for both researchers and participants,” says Maes. “Our research group is excited to be pioneering new, compact, and cheap technologies for studying sleep and interfacing with dreams, thereby opening up opportunities for more studies to happen and for these experiments to take place in natural settings. Apart from benefiting scientists, this work has the potential to lead to new commercial technologies that go beyond sleep tracking to issue interventions that affect sleep onset, sleep quality, sleep-based memory consolidation, and learning.”

The research itself is central to Haar Horowitz’s thesis work in the Program in Media Arts and Sciences. This past year, he ran a larger dream study with 50 subjects, which replicated and extended the results of the previous study.

“We showed that dream incubation is tied to performance benefits on three tests of creativity, by both objective and subjective metrics,” Haar Horowitz states. “Dreaming about a specific theme seems to offer benefits post-sleep, such as on creativity tasks related to this theme. This is unsurprising in light of historical figures like Mary Shelley or Salvador Dalí, who were inspired creatively by their dreams. The difference here is that we induce these creatively beneficial dreams on purpose, in a targeted manner.”

An enhanced Dormio device has now also been built, along with an analysis platform, a streaming platform, an iOS app for audio capture and streaming, and a web app for audio capture, storage, and streaming. These mobile and online platforms allow the TDI method to be shared through a variety of open source technologies.

A number of other universities have likewise begun related Dormio studies; these include Duke University, Boston College, Harvard University, the University of Rochester, and the University of Chicago.

The Media Lab research team is also leading collaborations with artists, using dreams to create new artwork and augment artistic creativity. This work, which mixes sleep science and media art, has been shown at the Beijing Biennale and Ars Electronica festival, and a new collaboration with installation artist Carsten Holler looks to create an overnight experimental art piece.

Reference: “Dormio: A targeted dream incubation device” by Adam Haar Horowitz, Tony J. Cunningham, Pattie Maes and Robert Stickgold, 30 May 2020, Consciousness and Cognition.
DOI: 10.1016/j.concog.2020.102938

The Dormio development team includes researchers Haar Horowitz, Tomás Vega, Ishaan Grover, Pedro Reynolds-Cuéllar, Oscar Rosello, Abhinandan Jain, and Eyal Perry, along with students in the MIT Undergraduate Research Opportunities Program Matthew Ha, Christina Chen, and Kathleen Esfahany.

Scientists Uncover Brain Mechanism That May Explain Why Sleep Helps You Learn


By Society for Neuroscience, July 27, 2020

A perineuronal net (red) in the mouse brain, surrounding a neuron expressing Arc (green), a protein involved in memory processing. The holes in the perineuronal net may represent sites of memory storage that are regulated during sleep. Credit: Pantazopoulos et al., eNeuro 2020

Changes in Brain Cartilage May Explain Why Sleep Helps You Learn

Scientists uncover a potential mechanism behind sleep-induced memory changes.

The morphing structure of the brain’s “cartilage cells” may regulate how memories change while you snooze, according to new research in eNeuro.

Sleep lets the body rest, but not the brain. During sleep, the brain accounts for a day of learning by making strong memories stronger and weak memories weaker, a process known as memory consolidation. But changing memories requires changing synapses, the connections between neurons. Sleep-induced changes need to overcome perineuronal nets, cartilage-like sheaths that not only surround and protect neurons, but also prevent changes in synapses.

Pantazopoulos et al. investigated how perineuronal nets varied during sleep in mice. By documenting whether or not they could tag the nets with a protein that binds to a specific sugar chain, they could observe the changes in synapses. A decrease in the number of tagged nets would indicate an increase in the number of neurons allowing synaptic changes.

Tagging increased during wakefulness and decreased during sleep. Sleep deprivation prevented this change. The levels of a net-altering enzyme expressed by brain immune cells cycled in opposition, hinting that it may be responsible for the change. The research team also compared levels of tagged nets in human brain tissue with the donor’s time of death. Human brains displayed similar sleep-centric rhythms in net structure. Altering the structure of perineuronal nets may be one of the mechanisms behind sleep-induced memory changes.

Reference: “Circadian Rhythms of Perineuronal Net Composition” by Pantazopoulos et al., 27 July 2020, eNeuro.
DOI: 10.1523/ENEURO.0034-19.2020

How Do Dogs Find Their Way Home? They Might Sense Earth’s Magnetic Field

Our canine companions aren’t the only animals that may be capable of magnetoreception

GPS Terrier
A terrier fitted with GPS remote tracking device and camera (Kateřina Benediktová / Czech University of Life Sciences)

July 27, 2020, 10:57 a.m.

Last week, Cleo the four-year-old yellow Labrador retriever showed up on the doorstep of the home her family moved away from two years ago, reports Caitlin O’Kane for CBS News. As it turns out, Cleo traveled nearly 60 miles from her current home in Kansas to her old one in Missouri. Cleo is just one of many dogs who have made headlines for their homing instincts; in 1924, for example, a collie known as “Bobbie the Wonder Dog” traveled 2,800 miles in the dead of winter to be reunited with his people.

Now, scientists suggest these incredible feats of navigation are possible in part due to Earth’s geomagnetic field, according to a new study published in the journal eLife.

Researchers led by biologists Kateřina Benediktová and Hynek Burda of the Czech University of Life Sciences Department of Game Management and Wildlife Biology outfitted 27 hunting dogs representing 10 different breeds with GPS collars and action cameras, and tracked them in more than 600 excursions over the course of three years, Michael Thomsen reports for Daily Mail. The dogs were driven to a location, led on-leash into a forested area, and then released to run where they pleased. The team only focused on dogs that ventured at least 200 meters away from their owners.

But the researchers were more curious about the dogs’ return journeys than their destinations. When called back to their owners, the dogs used two different methods for finding their way back from an average of 1.1 kilometers (about 0.7 miles) away. About 60 percent of the dogs used their noses to follow their outbound route in reverse, a strategy known as “tracking,” while about 30 percent opted for a new route, found through a process called “scouting.”

According to the study authors, both tactics have merits and drawbacks, and that’s why dogs probably alternate between the two depending on the situation.

“While tracking may be safe, it is lengthy,” the authors write in the study. “Scouting enables taking shortcuts and might be faster but requires navigation capability and, because of possible errors, is risky.”

Data from the scouting dogs revealed that their navigation capability is related to a magnetic connection (Kateřina Benediktová / Czech University of Life Sciences)

Data from the scouting dogs revealed that their navigation capability is related to a magnetic connection. All of the dogs who did not follow their outbound path began their return with a short “compass run,” a quick scan of about 20 meters along the Earth’s north-south geomagnetic axis, reports the Miami Herald’s Mitchell Willetts. Because they don’t have any familiar visual landmarks to use, and dense vegetation at the study sites made “visual piloting unreliable,” the compass run helps the dogs recalibrate their own position to better estimate their “homing” direction.
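Detecting a "compass run" in GPS data reduces to a simple geometric check: compute the initial bearing of each short run between fixes, and test whether it lies near the north-south axis. A minimal sketch, using the standard great-circle bearing formula; the coordinates and the 15-degree tolerance below are chosen for illustration, not taken from the study's actual criteria:

```python
import math

def bearing_deg(lat1, lon1, lat2, lon2):
    """Initial great-circle bearing from point 1 to point 2, in degrees
    clockwise from geographic north."""
    phi1, phi2 = math.radians(lat1), math.radians(lat2)
    dlon = math.radians(lon2 - lon1)
    x = math.sin(dlon) * math.cos(phi2)
    y = (math.cos(phi1) * math.sin(phi2)
         - math.sin(phi1) * math.cos(phi2) * math.cos(dlon))
    return math.degrees(math.atan2(x, y)) % 360

def is_compass_run(bearing, tolerance=15):
    """True if a bearing lies within `tolerance` degrees of the
    north-south axis (i.e., near 0/360 or near 180)."""
    off_axis = bearing % 180
    return min(off_axis, 180 - off_axis) <= tolerance

# A short run almost due north from a hypothetical release point:
b = bearing_deg(50.0000, 14.0000, 50.0002, 14.0000)
print(round(b), is_compass_run(b))  # 0 True
```

Folding both directions onto one axis (`bearing % 180`) matters because a run due south counts as a compass run just as much as one due north.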

Whether the dogs are aware that they are tapping into the Earth’s magnetic field is unclear. Many dogs also poop along a north-south axis, and they certainly are not the only animals to use the field as a tool. Chinook salmon have magnetoreceptors in their skin that help guide their epic journeys; foxes use magnetism to home in on underground prey; and sea turtles use it to find their beachside birthplaces.

Catherine Lohmann, a biologist at the University of North Carolina at Chapel Hill who studies magnetoreception and navigation in such turtles, tells Erik Stokstad at Science that the compass run, however, is a first in dogs. This newfound ability means the dogs can likely remember the direction they were pointing when they started, and then use the magnetic compass to find the most efficient way home.

To learn more about how magneto-location works for the dogs, the study authors will begin a new experiment placing magnets on the dogs’ collars to find out if this disrupts their navigational skills.

About Courtney Sexton

Courtney Sexton, a writer and researcher based in Washington, DC, studies human-animal interactions. She is a 2020 AAAS Mass Media Fellow and the co-founder and director of The Inner Loop, a nonprofit organization for writers.

How Understanding Neurology Can Help You Cope With Change

By Dr. Kim Redman, July 26, 2020

Is it time for a change? Here’s how to make it happen.

When it comes to changing your habits, did you know that you can directly affect your neurology and make your brain work for or against you?

Your self-judgment — saying things like, “I’m not smart enough, strong-willed enough, capable enough” — brings with it a corresponding chemical cocktail of neurotransmitters that reinforces your negative emotional state.

That’s bad news for the body. As its immune system becomes weakened, the effects of stress begin to add up.

RELATED: How To Plan For The Future When Things Are Constantly Changing

This is a preventable situation if you understand just a bit about strategies at the neurological level and what constitutes the neurology of change.

Understanding the neurology of change. 

To understand neurology, it helps to know that any skill you possess — from touching your finger to the tip of your nose, to playing tennis — only becomes a skill after you have strong neurology in place.

Strong neurology happens when a task or skill is repeated often. Children do this all the time. They’re not only building the physical coordination to accomplish their goal, they’re building neurology, as well.

Neuron highways, or routes that are used often and mostly unconsciously, can be thought of as habits. Any learned skill has a strategy or neuron map that accompanies it. This is the area that causes you so much trouble.

One-tenth of your brain is conscious, the other nine-tenths is unconscious, including your subconscious. The unconscious is the storehouse of your major learnings, emotions, strategies and values.

So with all good intentions, you begin to install a new program or strategy with your conscious mind for a skill you’d like to possess.

This installation works well until you forget your new program and slip back into acting unconsciously in your old one, which reinforces the fattest neuron path — your old habit.

The connection between strategies and emotions.

Strategies also have emotions attached to them, which can further complicate things.

RELATED: How To Tap Into The Power Of Breathwork To Survive Global Change

Let’s say that consciously you really want a new relationship in your life, but unconsciously relationships equal betrayal and devastation.

No matter how much you say you want a relationship in your life, somehow one never seems to appear. Worse yet, they may appear only to be emotionally unavailable or the “wrong type.”

There’s a pattern in place, and no matter how hard you try, you can’t change what you don’t know. In computer lingo, what needs to happen is like a “defrag,” with a download of a “patch.”

There are many techniques to aid you in this, and it should be mentioned that sometimes the process does occur spontaneously.

When you feel differently, you behave differently.

You’ve been in challenging situations where you finally got that “aha” moment; that moment where you say, “I get it!” viscerally, emotionally, as well as intellectually.

You have a different perspective on the situation. You feel differently, and so you behave differently.

There are many professional modalities that focus on integrating the unconscious learnings to become conscious lessons. The sooner you achieve the “aha” moment and its accompanying neurological shift, the sooner you will experience feeling and behaving differently.

Clinical healing modalities for change.

Traditionally, the most effective and fastest of these are clinical hypnotherapy and Time Line Therapy. Many energetic modalities that work with the body-mind response and re-patterning are quite effective, also.

You have the power to change. 

The important thing to remember is that you have the power to choose to add to your external resources.

Another way of saying this is to ask yourself, “How badly do I want this?” If the answer is that you want it badly, ask yourself, “Do I want it badly enough to change how I do what I do?”

If this answer is yes, congratulate yourself! You are more than halfway to your goal.

Simply reach out, grab a resource, and experience yourself the way you always hoped that you would — empowered and thriving.

RELATED: 4 Ways To Cope With Life Changes — Good & Bad

Dr. Kim is a board designated master trainer in Neuro-Linguistic Programming (NLP), hypnosis, and Time Line Therapy. For more information on how she can help you, contact her today.

Elon Musk says he’s terrified of AI taking over the world, and is most scared of Google’s ‘DeepMind’ AI project

By Ben Gilbert

Elon Musk.
  • Tesla and SpaceX CEO Elon Musk has repeatedly said that he thinks artificial intelligence poses a threat to humanity.
  • Of the companies working on AI technology, Musk is most concerned by the Google-owned DeepMind project, he said in a new interview with the New York Times.
  • “The nature of the AI that they’re building is one that crushes all humans at all games,” he said. “It’s basically the plotline in ‘WarGames.'”
  • In the 1983 film “WarGames,” starring Matthew Broderick, a supercomputer trained to test wartime scenarios is accidentally triggered to start a nuclear war.
  • Visit Business Insider’s homepage for more stories.

Billionaire Elon Musk has been sounding the alarm about the potentially dangerous, species-ending future of artificial intelligence for years now.

In 2016, he warned that human beings could become the equivalent of “house cats” to new AI overlords. He has since repeatedly called for regulation and caution when it comes to new AI technology.

But, of all the various AI projects currently in the works, none has Musk more worried than Google’s DeepMind.

“Just the nature of the AI that they’re building is one that crushes all humans at all games,” Musk told the New York Times in a new interview. “I mean, it’s basically the plotline in ‘WarGames.'”

In “WarGames,” a teenage hacker played by Matthew Broderick connects to an AI-controlled government supercomputer trained to run war simulations. When the hacker tries to play a game titled “Global Thermonuclear War,” the AI convinces government officials that a nuclear attack from the Soviet Union is imminent.

In the end (spoiler for those who haven’t seen the 37-year-old movie), the computer runs enough simulations of the potential end results of global thermonuclear war that it declares no winner to be possible, and that the only way to win is to not play. The 1983 film is a direct reflection of its time and place: fear in the US of nuclear war with the Soviet Union still looming, and fear of increasingly advanced technology.

But Musk wasn’t just talking about old films when he compared DeepMind to “WarGames” – he also said that AI could surpass human intelligence in the next five years, even if we don’t see the impact of it immediately. “That doesn’t mean that everything goes to hell in five years,” he said. “It just means that things get unstable or weird.”

Musk was an early investor in DeepMind, which sold to Google in 2014 for over $500 million, according to reports. Rather than seeking a return on investment, Musk said in a 2017 interview, he did it to keep an eye on burgeoning AI developments.

“It gave me more visibility into the rate at which things were improving, and I think they’re really improving at an accelerating rate, far faster than people realize,” he said in the 2017 interview. “Mostly because in everyday life you don’t see robots walking around. Maybe your Roomba or something. But Roombas aren’t going to take over the world.”

But Musk thinks artificial intelligence should have a different connotation.

“I think generally people underestimate the capability of AI — they sort of think it’s a smart human,” Musk said in an August 2019 talk with Alibaba CEO Jack Ma at the World AI Conference in Shanghai, China. “But it’s going to be much more than that. It will be much smarter than the smartest human.”

It is “hubris,” he said in the Times interview this week, that keeps “very smart people” from realizing the potential dangers of AI.

“My assessment about why AI is overlooked by very smart people is that very smart people do not think a computer can ever be as smart as they are. And this is hubris and obviously false.”

Brains manage neurons like air traffic controllers manage airplane movements

Our brains communicate information in a manner that can be likened to an air traffic controller’s.

By: The Conversation

This article, written by Jérémie Lefebvre of L’Université d’Ottawa/University of Ottawa, originally appeared on The Conversation and has been republished here with permission:

Air traffic controllers monitor the movements of thousands of flights — taking into account the types of aircraft used and the cargo carried — to destinations in real time. As well, in order to properly co-ordinate arrivals and departures, aircraft speeds must be constantly adjusted. Without this constant control adhering to clear navigation rules, chaos would invade the airspace.

My research in neurophysiology and neuroscience has shown me how the brain is a rich and complex biological system. Every day, it faces the same situation as an air traffic controller but on a completely different scale: it has to manage the incessant traffic of signals that pass between billions of neurons and co-ordinate their pace constantly.

How does the brain do this?

Most of the volume of our brain is occupied by wires called axons, which form a complex network called white matter. Like a maze of airways linking cities around the world, white matter manages communication and co-ordination between the various areas where populations of neurons process information. These areas are located in different parts of the brain, sometimes close to each other, sometimes far away: this is the principle of distributed computing.

The faster, the better!

The control of traffic in the brain is crucial — the faster the information travels through the brain, the more efficiently the different areas of the brain co-operate to allow the proper functioning of memory and other aspects of cognition.

To maintain this incessant traffic, specialized cells called oligodendrocytes act as controllers by enveloping the axons with a substance called myelin. This myelin is a lipid (or fat) insulator with a characteristic pale colour, hence the name “white matter.” It allows the electrical signals of neurons to travel long distances without slowing down or losing intensity. However, myelin also offers an advantage to information passing through white matter: it allows signals to arrive on time, neither too early nor too late.

Today we know that because of its plasticity, the geography of the brain is constantly changing. However, research published in recent years has shown that white matter changes not only during development but also adaptively later on, for example, during learning.

The rules of neural traffic

This type of plasticity had been observed mainly in the synapses of grey matter. It has now been shown that the structure of white matter constantly adapts and reorganizes itself. Through this form of plasticity, called adaptive myelination, the structure and properties of white matter are optimized. As a result, communication between neurons is maintained even when the brain’s size, activity and connections change. In fact, oligodendrocytes can adjust the amount of myelin to speed up or slow down the propagation of signals and maintain stable neuronal trafficking.

But how do white matter and its glial cells adapt to stabilize neuronal traffic and accomplish this incredible co-ordination challenge?

This question, like many concerning glial cells, is difficult to answer with traditional neuroimaging methods, but it’s of primary importance to a better understanding of neurodegenerative diseases. One example is multiple sclerosis, which causes myelin thinning and leads to a systemic disorganization of the flow of information in the brain, causing profound cognitive and motor disorders.

Co-ordinated neuronal activity

A recent interdisciplinary study provides a better understanding of the rules governing the control of neuronal traffic in white matter. It is important to note that neuron activity — a series of Morse code-like impulses — is not random. Rather, neurons tend to activate in groups and synchronize, generating waves or oscillations called brain rhythms. Researchers believe that in order to communicate with each other, different areas of the brain must be able to align and co-ordinate these rhythms.

New results obtained from human brain imaging data, combined with mathematical models, show that white matter reorganizes itself to optimize the alignment of these rhythms. To do this, it controls the speed at which these waves propagate through white matter by adjusting the amount of myelin present.
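The delay-tuning idea can be illustrated with a toy model: a conduction delay imposes a phase offset on a rhythm of a given frequency, and incrementally nudging the delay (a stand-in for adding or removing myelin) drives that offset to zero. This is an invented illustration of the principle, not the study's actual mathematical model:

```python
import math

def phase_mismatch(delay_ms, freq_hz):
    """Phase offset (radians, wrapped to (-pi, pi]) that a conduction
    delay imposes on an oscillation of the given frequency."""
    phase = 2 * math.pi * freq_hz * (delay_ms / 1000.0)
    return math.atan2(math.sin(phase), math.cos(phase))

def tune_delay(delay_ms, freq_hz, rate=5.0, steps=100):
    """Iteratively nudge the delay (our stand-in for adjusting myelin)
    until the arriving signal is phase-aligned with the rhythm."""
    for _ in range(steps):
        delay_ms -= rate * phase_mismatch(delay_ms, freq_hz)
    return delay_ms

# A 10 Hz rhythm repeats every 100 ms, so any delay that is a multiple
# of 100 ms arrives in phase. Start misaligned at 37 ms and let the
# "myelination" rule pull the delay into alignment.
tuned = tune_delay(37.0, 10.0)
print(round(phase_mismatch(tuned, 10.0), 6))  # 0.0
```

Note that alignment is periodic: the rule does not need to eliminate the delay, only to move it toward any multiple of the oscillation's cycle length, which mirrors the claim that myelin can either speed up or slow down signals to keep rhythms co-ordinated.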

Oligodendrocytes therefore adapt the conductivity of axons to enable them to respond effectively to changing neuronal traffic demands and orchestrate the alignment between the oscillations present in different parts of the brain. They’re real cellular air traffic controllers!

Even the sick brain manages

Another surprising result is that the plastic properties of white matter also seem to allow the brain to adapt despite the presence of disease or injury. Indeed, it has been shown that white matter can reorganize in the presence of damage to preserve communication and synchronization between neurons, even if connections become either absent or damaged, for example in the presence of cancer.

Some experiments in animals have shown that preventing glial cells from adapting in the presence of injury limits recovery and causes many cognitive and behavioural problems.

The plasticity of white matter appears to be a key element of brain resilience and could therefore represent an interesting option for developing new therapeutic approaches, particularly in stroke victims. These new results highlight the importance of glial cells and white matter plasticity in the functioning and flexibility of cognitive processes.

Jérémie Lefebvre, Professeur agrégé de neurosciences computationnelles et neurophysiologie, L’Université d’Ottawa/University of Ottawa

This article is republished from The Conversation under a Creative Commons license. Read the original article.

Long-term study reveals unique insights into how we change as we age

by Geoff McMaster, University of Alberta

Credit: CC0 Public Domain

In recent years, there has been a growing interest in the differences between generations and the sociological forces defining their worldviews and behavior. Stereotypes abound: the Silent Generation is inflexibly conventional, the boomers narcissistic, Gen X lazy. And millennials just take too long to grow up.

But few of these assumptions are grounded in sound sociological evidence. One of the few long-term studies that does closely examine at least one generation, Gen X, is the University of Alberta’s Edmonton Transitions Study (ETS).

Turning 35 this year, the ETS is the longest of its kind in Canada, shedding light on a number of life transitions from leaving high school to pursuing higher education, finding employment, starting a family, purchasing a home and advancing through a career.

The researchers first surveyed a group of almost 1,000 young Edmontonians in 1985 as they graduated from high school. Over the years, the group of respondents dwindled to about 400 as people drifted away and became harder to find. The cohort was surveyed four times between the ages of 18 and 25 to understand the complex transition from education into the labor force, then again as young adults at age 32, again in 2010 when they were 43, and most recently in 2017, as they entered midlife at age 50, to study employment and family transitions.

The long-term study is a more scientific version of the Up film series, which has checked in on the lives of 14 British children of different classes every seven years since 1964, when they were seven years old. The series began with Seven Up!; the most recent installment, 63 Up, aired on the BBC last fall.

The myth of midlife crisis

The biggest international headlines generated by ETS appeared in 2015, when researchers released findings questioning the popular assumption of a “mid-life crisis”—that happiness declines from the teenage years into middle age before increasing again. Instead, ETS participants, on average, only got happier as they aged. The “glory days” were not in the late teens as many believed.

“When we followed them over time, yes, happiness increased up to age 32 and kind of leveled off by 43. But it didn’t go down after that,” said psychologist Nancy Galambos, who shared authorship on the study with sociologist Harvey Krahn and human ecologist Matthew Johnson.

“I don’t know if it was a surprise so much as nobody had done it,” said Galambos, adding that economists had long assumed mid-life crisis was real, but had no longitudinal studies to back up their claims, relying instead on cross-sectional data of people at different ages studied at one point in time.

From two years to three decades

“A highlight of ETS is the ability to look at stability versus change, and how individuals change or grow over time and whether that matters for life outcomes like relationships, educational attainment, things like that, or if it’s just a question of who you are at a given point in time,” said Johnson.

The project was originally designed by Krahn and colleague Graham Lowe as a far less ambitious inquiry into job prospects facing Edmonton high-school graduates. Krahn had just started teaching at the U of A, and the study “fit in with my social justice concerns and the belief that inequality should be addressed and not ignored,” he said.

Lowe and Krahn surveyed about 1,000 students, returning two years later to see how their lives had changed.

“We thought two years was a long study,” said Krahn. “At the time, youth unemployment was higher than it had been since the Great Depression (peaking at over 19 percent in 1981-82), and a lot of people were wondering about the consequences of that at such a critical age.”

The researchers wanted to know whether unemployment led to reduced self-esteem, depression, perhaps even criminal behavior.

“The biggest thing we learned was that many simply stayed in school because the jobs weren’t out there,” said Krahn.

The two returned to their cohort two years later looking for more answers, and the project’s lifespan kept growing from there, eventually garnering support from the Social Sciences and Humanities Research Council of Canada and the Alberta government.

“When the subjects were 25 years old—seven years out—we were getting some interesting information about who had gone off to school and got degrees, who was in the labor force and so on,” said Krahn.

Depression and self-esteem

In the early 2000s, Galambos joined the team from the University of Victoria. She’d done similar work on developmental psychology, accumulating longitudinal data on self-esteem and depression. That naturally led to questions of happiness and well-being over time.

In addition to her groundbreaking mid-life study, “Up Not Down,” Galambos published an incisive paper called “Depression, Self-Esteem and Anger in Emerging Adulthood,” concluding that common depressive symptoms and expressed anger declined between the ages of 18 and 25, while self-esteem increased.

Important influences on this trajectory were the level of parents’ education—the more educated the parents, the faster depression and anger dissipated—and conflict with parents, which continued to fuel anger.

Women at 18 tended to be more depressed than men, but the gap narrowed by 25.

“Across time, increases in social support and marriage were associated with increased psychological well-being, whereas longer periods of unemployment were connected with higher depression and lower self-esteem,” said Galambos.

The values of Gen X and Gen Y

More recently, in 2012, Krahn led an ETS study comparing the work beliefs and values of generations X and Y, finding that differences were not as pronounced as commonly thought.

“I have a fairly strong opinion about these beliefs that generations are so different,” said Krahn. “Maybe it applies to people who grow up during very upsetting or very different times—the Depression or during the Second World War—but in general we’re not convinced differences are that large” when it comes to core values.

However, differences do emerge when it comes to reaching some of life’s major milestones. In a 2018 paper called “Quick, Uncertain, and Delayed Adults,” Krahn and his colleagues demonstrated that among Generation X, transitioning into adult roles—getting their first full-time job, leaving home, leaving school, getting married, purchasing a house and having their first child—all took longer than they did for previous generations.

“What seems to be an underlying driver is how long you stay in school, which makes a lot of sense,” said Krahn. “You delay marriage and the first job, but you get a better first job because you stay in school longer. The assumption in the popular media is that young people stay at home longer, not dealing with these transitions—suffering in their parents’ basements and stunting their growth. But in reality, the people that are taking the longest are doing the best.”

Biggest surprise

The most surprising results for Krahn came in a 2018 study showing that, over time, those with higher incomes grew less concerned with a range of social issues.

After surveying the initial cohort multiple times over 25 years, from ages 18 to 43, he found a decline in concern for five social problems: racial discrimination, treatment of Indigenous people, job discrimination against women, unemployment and environmental pollution.

“There’s just been a long standing chicken-and-egg question in sociology and political science: do people really get more conservative when they get older, or is it a generation or cohort effect? What was most interesting was whether higher education actually changes the trajectory,” said Krahn. “We found that it didn’t. But what did shape it was people’s income—as your income rises, your concern for social issues declines.”

What’s next

At 35, the ETS shows no signs of slowing down. Other social scientists are asking for the collected data, and Margie Lachman—a psychologist from Brandeis University with expertise in lifespan development focusing on mid- and later life—has recently joined the team.

“I think each of us has at least one or two papers sketched out in our minds that we want to work on collectively,” said Krahn.

Galambos plans to continue following Generation X into retirement, looking to see whether happiness holds up into old age. For Johnson, the big questions concern family relationships, reflected in his recent study called “Stuck in the Middle With You: Predictors of Commitment in Midlife,” and another finding that delaying marriage can make you happier in the long run.

“I want to look at family background, how your parents influence,” said Johnson. “Things like demographics, immigration, parental education, relationships with parents—which of those is most influential on your life trajectory and for how long? Do you feel the influence of your parents all the way to age 50, or does that kind of peter out and your own life choices take over at a given point in time?”

Krahn is now retired, officially passing the project’s reins to others on his team. But that doesn’t mean his passion for research has diminished at all.

“I will continue with the research as long as I can,” he said. “I love the data analysis, and I’m not just being nice to my colleagues, but we’ve got a pretty good team. So that certainly keeps me motivated.”

As for his next study? “There’s a big assumption out there—if I’ve heard it once I’ve heard it a dozen times—that the average person will have seven careers in their lifetime. How do we know that?

“I’m pretty convinced we’ll find that’s a very stretched average.”


More information: Edmonton Transitions Study. Provided by University of Alberta

Ultra-low power brain implants find meaningful signal in grey matter noise

by University of Michigan


By tuning into a subset of brain waves, University of Michigan researchers have dramatically reduced the power requirements of neural interfaces while improving their accuracy—a discovery that could lead to long-lasting brain implants that can both treat neurological diseases and enable mind-controlled prosthetics and machines.

The team, led by Cynthia Chestek, associate professor of biomedical engineering and core faculty at the Robotics Institute, estimated a 90% drop in power consumption of neural interfaces by utilizing their approach.

“Currently, interpreting brain signals into someone’s intentions requires computers as tall as people and lots of electrical power—several car batteries worth,” said Samuel Nason, first author of the study and a Ph.D. candidate in Chestek’s Cortical Neural Prosthetics Laboratory. “Reducing the amount of electrical power by an order of magnitude will eventually allow for at-home brain-machine interfaces.”

Neurons, the cells in our brains that relay information and action around the body, are noisy transmitters. The computers and electrodes used to gather neuron data are like listening to a radio stuck between stations: they must decipher actual content amongst the brain’s buzzing. Complicating this task, the brain is a firehose of this data, which pushes the power and processing requirements beyond the limits of safe implantable devices.

Currently, to predict complex behaviors such as grasping an item in a hand from neuron activity, scientists can use transcutaneous electrodes, or direct wiring through the skin to the brain. This is achievable with 100 electrodes that capture 20,000 signals per second, and enables feats such as restoring movement to a paralyzed arm or allowing someone with a prosthetic hand to feel how hard or soft an object is. But not only is this approach impractical outside of the lab environment, it also carries a risk of infection.

Some wireless implants, created using highly efficient, application-specific integrated circuits, can achieve almost equal performance as the transcutaneous systems. These chips can gather and transmit about 16,000 signals per second. However, they have yet to achieve consistent operation and their custom-built nature is a roadblock in getting approval as safe implants compared to industrial-made chips.

“This is a big leap forward,” Chestek said. “To get the high bandwidth signals we currently need for brain machine interfaces out wirelessly would be completely impossible given the power supplies of existing pacemaker-style devices.”

To reduce power and data needs, researchers compress the brain signals. Counting only the neural activity spikes that cross a certain voltage threshold, a measure called the threshold crossing rate, or TCR, means less data needs to be processed while still being able to predict firing neurons. However, TCR requires listening to the full firehose of neuron activity to determine when the threshold is crossed, and the threshold itself can change not only from one brain to another but within the same brain on different days. Tuning the threshold takes additional hardware, battery power and time.
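The threshold-crossing idea described above can be sketched in a few lines. This is an illustrative example, not code from the study: the sampling rate, bin width and threshold value are arbitrary assumptions, and real implants detect crossings in dedicated hardware rather than software.

```python
import numpy as np

def threshold_crossing_rate(signal, threshold, fs, bin_s=0.05):
    """Count downward crossings of a (negative) spike threshold per time bin."""
    # Indices where the voltage drops from above the threshold to at or below it
    crossings = np.flatnonzero((signal[:-1] > threshold) & (signal[1:] <= threshold))
    samples_per_bin = int(round(bin_s * fs))
    n_bins = len(signal) // samples_per_bin
    # Count crossings falling in each bin, then convert counts to crossings per second
    counts = np.bincount(crossings // samples_per_bin, minlength=n_bins)[:n_bins]
    return counts / bin_s
```

Note that the function still has to inspect every sample to find the crossings, which is exactly the full-firehose cost the article describes, and `threshold` would need periodic retuning for each electrode.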

Compressing the data another way, Chestek’s lab dialed in to a specific feature of neuron data: spiking-band power. SBP integrates activity across a band of frequencies from multiple neurons, between 300 and 1,000 Hz. By listening only to this range of frequencies and ignoring others, like drinking from a straw rather than a hose, the team found a highly accurate predictor of behavior with dramatically lower power needs.
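As a rough illustration of the spiking-band power idea, the sketch below band-limits a signal to 300–1,000 Hz and averages its magnitude in short bins. It is a simplification under assumed parameters (FFT-based filtering, 50 ms bins), not the filter chain used in the actual hardware.

```python
import numpy as np

def spiking_band_power(signal, fs, low=300.0, high=1000.0, bin_s=0.05):
    """Mean magnitude of the 300-1,000 Hz band in each time bin."""
    # FFT-based band-pass: zero every frequency component outside [low, high]
    spectrum = np.fft.rfft(signal)
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    spectrum[(freqs < low) | (freqs > high)] = 0.0
    band = np.fft.irfft(spectrum, n=len(signal))
    # Rectify and average within fixed-width bins (the integrated band power)
    samples_per_bin = int(round(bin_s * fs))
    n_bins = len(band) // samples_per_bin
    mag = np.abs(band[: n_bins * samples_per_bin])
    return mag.reshape(n_bins, samples_per_bin).mean(axis=1)
```

Because only this narrow band is kept, activity outside it drops out without any per-electrode threshold, which is the property the team credits for the lower power and data requirements.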

Compared to transcutaneous systems, the team found the SBP technique to be just as accurate while taking in one-tenth as many signals, 2,000 versus 20,000 signals per second. Compared to other methods such as using a threshold crossing rate, the team’s approach not only requires much less raw data, but is also more accurate at predicting neuron firing, even among noise, and does not require tuning a threshold.

The team’s SBP method solves another problem limiting an implant’s useful life. Over time, an interface’s electrodes fail to read the signals among the noise. But because the technique performs just as well with a signal half the strength required by other techniques like threshold crossings, implants could be left in place and used for longer.

While new brain-machine interfaces can be developed to take advantage of the team’s method, their work also unlocks new capabilities for many existing devices by reducing the technical requirements to translate neurons to intentions.

“It turns out that many devices have been selling themselves short,” Nason said. “These existing circuits, using the same bandwidth and power, are now applicable to the whole realm of brain-machine interfaces.”

The study, “A low-power band of neuronal spiking activity dominated by local single units improves the performance of brain-machine interfaces,” is published in Nature Biomedical Engineering.


More information: “A low-power band of neuronal spiking activity dominated by local single units improves the performance of brain–machine interfaces,” Nature Biomedical Engineering (2020). DOI: 10.1038/s41551-020-0591-0. Provided by University of Michigan

Know sweat: scientists solve mystery behind body odour

University of York researchers trace the source of underarm aromas to a particular enzyme

Ian Sample Science editor @iansample

Mon 27 Jul 2020 10.00 BST. Last modified on Mon 27 Jul 2020 18.30 BST


Scientists have unravelled the mysterious mechanism behind the armpit’s ability to produce the pungent smell of body odour.

Researchers at the University of York traced the source of underarm odour to a particular enzyme in a certain microbe that lives in the human armpit.

To prove the enzyme was the chemical culprit, the scientists transferred it to an innocent member of the underarm microbe community and noted – to their delight – that it too began to emanate bad smells.

The work paves the way for more effective deodorants and antiperspirants, the scientists believe, and suggests that humans may have inherited the mephitic microbes from our ancient primate ancestors.

“We’ve discovered how the odour is produced,” said Prof Gavin Thomas, a senior microbiologist on the team. “What we really want to understand now is why.”

Humans do not produce the most pungent constituents of BO directly. The offending odours, known as thioalcohols, are released as a byproduct when microbes feast on other compounds they encounter on the skin.

The York team previously discovered that most microbes on the skin cannot make thioalcohols. But further tests revealed that one armpit-dwelling species, Staphylococcus hominis, was a major contributor. The bacteria produce the fetid fumes when they consume an odourless compound called Cys-Gly-3M3SH, which is released by sweat glands in the armpit.

Humans come with two types of sweat glands. Eccrine glands cover the body and open directly onto the skin. They are an essential component of the body’s cooling system. Apocrine glands, on the other hand, open into hair follicles, and are crammed into particular places: the armpits, nipples and genitals. Their role is not so clear.

Writing in the journal Scientific Reports, the York scientists describe how they delved inside Staphylococcus hominis to learn how it made thioalcohols. They discovered an enzyme that converts Cys-Gly-3M3SH released by apocrine glands into the pungent thioalcohol, 3M3SH.


Thomas said: “The bacteria take up the molecule and eat some of it, but the rest they spit out, and that is one of the key molecules we recognise as body odour.”

Having discovered the “BO enzyme”, the researchers confirmed its role by transferring it into Staphylococcus aureus, a common relative that normally has no role in body odour. “Just by moving the gene in, we got Staphylococcus aureus that made body odour,” Thomas said.

“Our noses are extremely good at detecting these thioalcohols at extremely low thresholds, which is why they are really important for body odour. They have a very characteristic cheesy, oniony smell that you would recognise. They are incredibly pungent.”

The research, a collaboration with Unilever, raises new possibilities for deodorants that target only the most active BO-producing microbes while leaving the rest of the underarm microbial community untouched. “If you can have a more targeted approach that selectively knocks down Staphylococcus hominis, it could be longer lasting,” said Thomas.

Michelle Rudden and others on the study next looked at the genetic relationships between dozens of Staphylococcus species. The analysis suggests, tentatively, that only a handful inherited the BO enzyme from an ancient microbial ancestor about 60m years ago.

Since apocrine glands only secrete BO-making compounds from puberty onwards, the odours may have played a role in shaping humanity. “All we can say is this is not a new process. BO was definitely around while humans were evolving,” Thomas said. “It’s not impossible to imagine these were important in the evolution of humans. Before we started using deodorants and antiperspirants, in the last 50 to 100 years, everyone definitely smelled.”

Rock Pi 4 Model C: NVMe and eMMC in a Raspberry Pi Layout

By Les Pounder

It looks like a Raspberry Pi but this board has something more to offer.

RockPi 4C Board

(Image credit: Radxa)

Originally announced in October 2019, the Rock Pi 4 Model C from Radxa has been unavailable for purchase until now. CNX Software keenly spotted that the Rock Pi 4 Model C is now available from $59. The Rock Pi 4 Model C is a design hybrid: measuring 3.3 x 2.1 inches (85 x 54 mm), the board shares layout cues with the Raspberry Pi 3 and 4, but it has a little more to it than a Raspberry Pi.

Rock Pi 4C Specifications

  • SoC – Rockchip RK3399 big.LITTLE hexa-core processor
  • CPU: 2x Arm Cortex-A72 @ up to 1.8 GHz, 4x Cortex-A53 @ up to 1.4 GHz
  • GPU: Mali-T864 with support for OpenGL ES 1.1/2.0/3.0/3.1/3.2, Vulkan 1.0, OpenVG 1.1, OpenCL 1.1/1.2, DX11, and AFBC
  • VPU with 4K VP9 and 4K 10-bit H265/H264 decoding
  • 64-bit 4GB LPDDR4 @ 3200 Mbps (single chip)
  • eMMC module socket up to 128GB
  • Micro SD card slot up to 2TB
  • 4-lane M.2 NVMe SSD socket (expansion board required)
  • Micro HDMI 2.0a up to 4K @ 60 Hz
  • Mini DP 1.2 up to 2560 x 1440 @ 60 Hz
  • Audio – Via HDMI and 3.5mm audio jack
  • Camera – MIPI-CSI2 connector for camera up to 8MP
  • Connectivity – Gigabit Ethernet with PoE support (add on required), dual-band 802.11b/g/n/ac WiFi 5, Bluetooth 5.0 with on-board antenna
  • USB – 1x USB 3.0 host port, 1x USB 3.0 OTG port, 2x USB 2.0 host ports
  • Expansion – 40-pin I/O header with 1x UART, 2x SPI bus, 2x I2C bus,  1x PCM/I2S, 1x SPDIF, 1x PWM, 1x ADC, 6x GPIO, and power signals (5V, 3.3V, and GND)
  • Misc – RTC with connector for backup battery
  • Power Supply – Via USB-C port supporting USB PD 2.0 (9V/2A, 12V/2A, 15V/2A, or 20V/2A) and Qualcomm Quick Charge 3.0/2.0 (9V/2A, 12V/1.5A)

The Rockchip RK3399 hexa-core processor features a dual-core Arm Cortex-A72 at 1.8GHz, the same A72 CPU as found in the Raspberry Pi 4, but the Raspberry Pi has a quad-core CPU running at 1.5GHz. With fewer cores but more speed, will there be much difference in performance? We will need to get hold of a unit to test!

There are dual display outputs, micro HDMI and mini DisplayPort, providing 4K @ 60Hz and 2560 x 1440 @ 60Hz respectively. Connectivity comes in the form of four USB ports (two USB 3.0, two USB 2.0) and Gigabit Ethernet, so Raspberry Pi 4 connections in a Raspberry Pi 3 layout. Wireless connectivity is 802.11 b/g/n/ac WiFi 5 and Bluetooth 5.0 via an on-board antenna, but there is also an external antenna connector. A 40-pin GPIO header is present, providing access to UART, SPI, I2C, PCM/I2S, SPDIF, PWM and an analog-to-digital converter, something not found on the Raspberry Pi GPIO.

What is different about the Rock Pi 4 Model C is storage. Sure, we have the usual microSD card slot, but we also have an eMMC socket which can be used with up to 128GB modules, purchased separately. There is also a four-lane M.2 NVMe socket that is used with an expansion board to bring NVMe storage, which is much faster and more reliable than microSD cards.

We can’t wait to get one of these on our bench for testing.
