https://www.gearpatrol.com/home/a36830942/best-home-design-releases-june-29-2021/

A Sleeping Pillow for Picky Sleepers And 6 Other New Home Releases

Soft? Firm? This Pluto pillow does it all.

BY TYLER CHIN | JUN 29, 2021

Welcome to Window Shopping, a weekly exercise in lusting over home products we want in our homes right the hell now. This week: frozen soup dumplings, a tote bag for hauling bottles to the park and more.

Purple TwinCloud Pillow

The adjustable pillow market just got a new type of pillow, and it’s from one of the leading mattress brands, Purple. While adjustable pillows typically fall into one of two categories — loose fill or swappable inserts — Purple’s new TwinCloud pillow is a little more interesting. The down-alternative TwinCloud is essentially an extra-long pillow that folds in half and zips together to create a standard-sized pillow. One side is soft, the other firm, and you get to choose which is more to your liking. The fully washable pillow can be zipped with others to create multiple configurations, like a body pillow or even a mattress topper.

Price: $89

SHOP NOW

Minna Tablecloths

Based out of Hudson, New York, Minna is an ethically made homewares brand that works with artisans from around the world. Its latest release is a line of tablecloths, handwoven by a family-run workshop of flying shuttle loom weavers in Mexico. The neutral tablecloths are “inspired by the play between shadow and light,” as the product description explains. Complementing the tablecloths are reusable cloth napkins, which retail for $20 each.

Price: $145

SHOP NOW

Haus The Picnic Set

One of our favorite low-ABV booze brands, Haus, made a picnic set that’s ready to go for all your, well, picnics and outdoor gatherings. Score a bottle of your favorite Haus aperitif and a fresh heavy-duty Haus tote bag, designed to house two upright bottles. Let people know that you take your low-ABV spirits seriously.

Price: $60

SHOP NOW

Xiao Chi Jie Soup Dumplings

Seattle-based restaurant Xiao Chi Jie is known for its soup dumplings, and because of the pandemic, it started selling its frozen soup dumplings nationwide, as well as sauces and bamboo steamers. Available in packs of 50, in pork, shrimp-and-pork and chicken varieties, the soup dumplings are ready to eat in 10 minutes and come with steamer liners so they don’t stick to the tray. Get them while they’re hot, er, frozen, because soup dumplings are notoriously hard to make without lots of experience.

Price: $40+

SHOP NOW

Airbnb Host Essentials by Muji

People are getting ready to travel again, and Airbnb hosts better be prepared for the influx of travelers. At least Muji has those hosts covered. With the new Airbnb Host Essentials by Muji, Airbnb hosts (or those just looking to stock up on home essentials) can get 23 Muji products guaranteed (not really) to get them five stars. The $400 set includes the essentials to accommodate two guests, with items ranging from bathroom necessities to kitchen and dining goods. We can’t guarantee your Airbnb will look as good as a Muji hotel, but it’ll be damn close with Muji stuff.

Price: $400

SHOP NOW

Momofuku x East Fork

The latest collaboration between Momofuku and East Fork brings two new glazes — orchard and peachy keen — to the latter’s enviable hand-thrown pottery. Surprisingly, the plates, cups and dishes are still in stock, but we’re pretty sure they won’t be for long.

Price: $12

SHOP NOW

Dad Grass July 4th Collection

When you light up on July 4th, do so to more than just fireworks. CBD joint brand Dad Grass dropped a July 4th-inspired collection, which includes a t-shirt, as well as a new stash box to hide your stash. No, those aren’t actually M-80s in there, just some good times rolled up into a joint.

Price: $37

https://www.bnnbloomberg.ca/the-last-and-only-foreign-scientist-in-the-wuhan-lab-speaks-out-1.1622483

The Last–And Only–Foreign Scientist in the Wuhan Lab Speaks Out

Michelle Cortez, Bloomberg News

Danielle Anderson Photographer: James Bugg/Bloomberg

(Bloomberg) — Danielle Anderson was working in what has become the world’s most notorious laboratory just weeks before the first known cases of Covid-19 emerged in central China. Yet, the Australian virologist still wonders what she missed.

An expert in bat-borne viruses, Anderson is the only foreign scientist to have undertaken research at the Wuhan Institute of Virology’s BSL-4 lab, the first in mainland China equipped to handle the planet’s deadliest pathogens. Her most recent stint ended in November 2019, giving Anderson an insider’s perspective on a place that’s become a flashpoint in the search for what caused the worst pandemic in a century.

The emergence of the coronavirus in the same city where institute scientists, clad head-to-toe in protective gear, study that exact family of viruses has stoked speculation that it might have leaked from the lab, possibly via an infected staffer or a contaminated object. China’s lack of transparency since the earliest days of the outbreak fueled those suspicions, which have been seized on by the U.S. That’s turned the quest to uncover the origins of the virus, critical for preventing future pandemics, into a geopolitical minefield.

The work of the lab and the director of its emerging infectious diseases section—Shi Zhengli, a long-time colleague of Anderson’s dubbed ‘Batwoman’ for her work hunting viruses in caves—is now shrouded in controversy. The U.S. has questioned the lab’s safety and alleged its scientists were engaged in contentious gain of function research that manipulated viruses in a manner that could have made them more dangerous.

It’s a stark contrast to the place Anderson described in an interview with Bloomberg News, the first in which she’s shared details about working at the lab.

Half-truths and distorted information have obscured an accurate accounting of the lab’s functions and activities, which were more routine than how they’ve been portrayed in the media, she said.

“It’s not that it was boring, but it was a regular lab that worked in the same way as any other high-containment lab,” Anderson said. “What people are saying is just not how it is.”

Now at Melbourne’s Peter Doherty Institute for Infection and Immunity, Anderson began collaborating with Wuhan researchers in 2016, when she was scientific director of the biosafety lab at Singapore’s Duke-NUS Medical School. Her research—which focuses on why lethal viruses like Ebola and Nipah cause no disease in the bats in which they perpetually circulate—complemented studies underway at the Chinese institute, which offered funding to encourage international collaboration.

A rising star in the virology community, Anderson, 42, says her work on Ebola in Wuhan was the realization of a life-long career goal. Her favorite movie is “Outbreak,” the 1995 film in which disease experts respond to a dangerous new virus—a job Anderson said she wanted to do. For her, that meant working on Ebola in a high-containment laboratory. 

Anderson’s career has taken her all over the world. After obtaining an undergraduate degree from Deakin University in Geelong, Australia, she worked as a lab technician at the Dana-Farber Cancer Institute in Boston, then returned to Australia to complete a PhD under the supervision of eminent virologists John Mackenzie and Linfa Wang. She did post-doctoral work in Montreal, before moving to Singapore and working again with Wang, who described Anderson as “very committed and dedicated,” and similar in personality to Shi. 

“They’re both very blunt with such high moral standards,” Wang said by phone from Singapore, where he’s the director of the emerging infectious diseases program at the Duke-NUS Medical School. “I’m very proud of what Danielle’s been able to do.”

On the Ground

Anderson was on the ground in Wuhan when experts believe the virus, now known as SARS-CoV-2, was beginning to spread. Daily visits for a period in late 2019 put her in close proximity to many others working at the 65-year-old research center. She was part of a group that gathered each morning at the Chinese Academy of Sciences to catch a bus that shuttled them to the institute about 20 miles away.

As the sole foreigner, Anderson stood out, and she said the other researchers there looked out for her.

“We went to dinners together, lunches, we saw each other outside of the lab,” she said.

From her first visit before it formally opened in 2018, Anderson was impressed with the institute’s maximum biocontainment lab. The concrete, bunker-style building has the highest biosafety designation, and requires air, water and waste to be filtered and sterilized before it leaves the facility. There were strict protocols and requirements aimed at containing the pathogens being studied, Anderson said, and researchers underwent 45 hours of training to be certified to work independently in the lab.

The induction process required scientists to demonstrate their knowledge of containment procedures and their competency in wearing air-pressured suits. “It’s very, very extensive,” Anderson said.

Entering and exiting the facility was a carefully choreographed endeavor, she said. Departures were made especially intricate by a requirement to take both a chemical shower and a personal shower—the timings of which were precisely planned.

Special Disinfectants

These rules are mandatory across BSL-4 labs, though Anderson noted differences compared with similar facilities in Europe, Singapore and Australia in which she’s worked. The Wuhan lab uses a bespoke method to make and monitor its disinfectants daily, a system Anderson was inspired to introduce in her own lab. She was connected via a headset to colleagues in the lab’s command center to enable constant communication and safety vigilance—steps designed to ensure nothing went awry.

However, the Trump administration’s focus in 2020 on the idea that the virus escaped from the Wuhan facility suggested that something went seriously wrong at the institute, the only one of the Chinese Academy of Sciences’ roughly 20 biological and biomedical research institutes to specialize in virology, viral pathology and virus technology.

Virologists and infectious disease experts initially dismissed the theory, noting that viruses jump from animals to humans with regularity. There was no clear evidence from within SARS-CoV-2’s genome that it had been artificially manipulated, or that the lab harbored progenitor strains of the pandemic virus. Political observers suggested the allegations had a strategic basis and were designed to put pressure on Beijing.

And yet, China’s actions raised questions. The government refused to allow international scientists into Wuhan in early 2020 when the outbreak was mushrooming, including experts from the U.S. Centers for Disease Control and Prevention, who were already in the region.

Beijing stonewalled on allowing World Health Organization experts into Wuhan for more than a year, and then provided only limited access. The WHO team’s final report, written with and vetted by Chinese researchers, played down the possibility of a lab leak. Instead, it said the virus probably spread via a bat through another animal, and gave some credence to a favored Chinese theory that it could have been transferred via frozen food.

Never Sick

China’s obfuscation led outside researchers to reconsider their stance. Last month, 18 scientists writing in the journal Science called for an investigation into Covid-19’s origins that would give balanced consideration to the possibility of a lab accident. Even the director-general of the WHO, Tedros Adhanom Ghebreyesus, said the lab theory hadn’t been studied extensively enough.

But it’s U.S. President Joe Biden’s consideration of the idea—previously dismissed by many as a Trumpist conspiracy theory—that has given it newfound legitimacy. Biden called on America’s intelligence agencies last month to redouble their efforts in rooting out the genesis of Covid-19 after an earlier report, disclosed by the Wall Street Journal, claimed three researchers from the lab were hospitalized with flu-like symptoms in November 2019.

What the World Wants China to Disclose in Wuhan Lab Leak Probe

Anderson said no one she knew at the Wuhan institute was ill toward the end of 2019. Moreover, there is a procedure for reporting symptoms that correspond with the pathogens handled in high-risk containment labs.

“If people were sick, I assume that I would have been sick—and I wasn’t,” she said. “I was tested for coronavirus in Singapore before I was vaccinated, and had never had it.”

Not only that, many of Anderson’s collaborators in Wuhan came to Singapore at the end of December for a gathering on Nipah virus. There was no word of any illness sweeping the laboratory, she said.

“There was no chatter,” Anderson said. “Scientists are gossipy and excited. There was nothing strange from my point of view going on at that point that would make you think something is going on here.”

The names of the scientists reported to have been hospitalized haven’t been disclosed. The Chinese government and Shi Zhengli, the lab’s now-famous bat-virus researcher, have repeatedly denied that anyone from the facility contracted Covid-19. Anderson’s work at the facility, and her funding, ended after the pandemic emerged and she focused on the novel coronavirus. 

‘I’m Not Naive’

It’s not that it’s impossible the virus spilled from there. Anderson, better than most people, understands how a pathogen can escape from a laboratory. SARS, an earlier coronavirus that emerged in Asia in 2002 and killed more than 700 people, subsequently made its way out of secure facilities a handful of times, she said. 

If presented with evidence that such an accident spawned Covid-19, Anderson “could foresee how things could maybe happen,” she said. “I’m not naive enough to say I absolutely write this off.” 

And yet, she still believes it most likely came from a natural source. Since it took researchers almost a decade to pin down where in nature the SARS pathogen emerged, Anderson says she’s not surprised they haven’t found the “smoking gun” bat responsible for the latest outbreak yet. 

The Wuhan Institute of Virology is large enough that Anderson said she didn’t know what everyone was working on at the end of 2019. She is aware of published research from the lab that involved testing viral components for their propensity to infect human cells. Anderson is convinced no virus was made intentionally to infect people and deliberately released—one of the more disturbing theories to have emerged about the pandemic’s origins.

Gain of Function

Anderson did concede that it would be theoretically possible for a scientist in the lab working on a gain-of-function technique to unknowingly infect themselves and then unintentionally infect others in the community. But there’s no evidence that occurred, and Anderson rated its likelihood as exceedingly slim.

Getting authorization to create a virus in this way typically requires many layers of approval, and there are scientific best practices that put strict limits on this kind of work. For example, a moratorium was placed on research that could be done on the 1918 Spanish Flu virus after scientists isolated it decades later.

Even if such a gain of function effort got clearance, it’s hard to achieve, Anderson said. The technique is called reverse genetics.

“It’s exceedingly difficult to actually make it work when you want it to work,” she said.

Anderson’s lab in Singapore was one of the first to isolate SARS-CoV-2 from a Covid patient outside China and then to grow the virus. It was complicated and challenging, even for a team used to working with coronaviruses and familiar with the virus’s biological characteristics, including which protein receptor it targets. These key facets wouldn’t be known by anyone trying to craft a new virus, she said. Even then, the material that researchers study—the virus’s basic building blocks and genetic fingerprint—isn’t initially infectious, so they would need to culture significant amounts to infect people.

Despite this, Anderson does think an investigation is needed to nail down the virus’s origin once and for all. She’s dumbfounded by the portrayal of the lab by some media outside China, and the toxic attacks on scientists that have ensued.

One of a dozen experts appointed to an international taskforce in November to study the origins of the virus, Anderson hasn’t sought public attention, especially since being targeted by U.S. extremists in early 2020 after she exposed false information about the pandemic posted online. The vitriol that ensued prompted her to file a police report. The threats of violence many coronavirus scientists have experienced over the past 18 months have made them hesitant to speak out because of the risk that their words will be misconstrued.

The elements known to trigger infectious outbreaks—the mixing of humans and animals, especially wildlife—were present in Wuhan, creating an environment conducive for the spillover of a new zoonotic disease. In that respect, the emergence of Covid-19 follows a familiar pattern. What’s shocking to Anderson is the way it unfurled into a global contagion.

“The pandemic is something no one could have imagined on this scale,” she said. Researchers must study Covid’s calamitous path to determine what went wrong and how to stop the spread of future pathogens with pandemic potential.

“The virus was in the right place at the right time and everything lined up to cause this disaster.”

https://medicalxpress.com/news/2021-06-self-reported-declines-cognition-linked-brain.html


Self-reported declines in cognition may be linked to changes in brain connectivity

by Wayne State University

Jessica Damoiseaux, Ph.D., an associate professor with the Institute of Gerontology at Wayne State University, recently published the results of a three-year study of cognitive changes in older adults. The team followed 69 primarily African American females, ages 50 to 85, who complained that their cognitive ability was worsening though clinical assessments showed no impairments. Three magnetic resonance imaging scans (MRIs) at 18-month intervals showed significant changes in functional connectivity in two areas of the brain.

“An older adult’s perceived cognitive decline could be an important precursor to dementia,” Damoiseaux said. “Brain alterations that underlie the experience of decline could reflect the progression of incipient dementia and may emerge before cognitive assessment is sensitive enough to detect a deficit.”

The resulting paper, “Longitudinal change in hippocampal and dorsal anterior insulae functional connectivity in subjective cognitive decline,” appeared in the May 31 issue of Alzheimer’s Research & Therapy. Damoiseaux conducted the study with graduate student Raymond Viviano, Ph.D., who is first author.

Subjective cognitive decline, defined as a perceived worsening of cognitive ability not noted on clinical assessment, may be an early indicator of dementia. Previous cross-sectional research has demonstrated aberrant brain functional connectivity in subjective cognitive decline, but longitudinal evaluation has been limited.

Viviano and Damoiseaux’s three-year study found that persons reporting more subjective cognitive decline showed a larger decrease in connectivity between components of the default mode network and a larger increase in connectivity between salience and default mode network components. The functional connectivity changed in the absence of a change in cognitive performance.

Since these brain changes occurred without concomitant cognitive changes, they could indicate that brain changes underlie the perception of decline. These changes could be a sensitive marker for nascent dementia months or years before assessments detect any cognitive deficit.


More information: Raymond P. Viviano et al, Longitudinal change in hippocampal and dorsal anterior insulae functional connectivity in subjective cognitive decline, Alzheimer’s Research & Therapy (2021). DOI: 10.1186/s13195-021-00847-y. Journal information: Alzheimer’s Research & Therapy

https://spectrum.ieee.org/computing/hardware/the-future-of-deep-learning-is-photonic

The Future of Deep Learning Is Photonic

Computing with light could slash the energy needs of neural networks

By Ryan Hamerly

Think of the many tasks to which computers are being applied that in the not-so-distant past required human intuition. Computers routinely identify objects in images, transcribe speech, translate between languages, diagnose medical conditions, play complex games, and drive cars.

The technique that has empowered these stunning developments is called deep learning, a term that refers to mathematical models known as artificial neural networks. Deep learning is a subfield of machine learning, a branch of computer science based on fitting complex models to data.

While machine learning has been around a long time, deep learning has taken on a life of its own lately. The reason for that has mostly to do with the increasing amounts of computing power that have become widely available—along with the burgeoning quantities of data that can be easily harvested and used to train neural networks.

The amount of computing power at people’s fingertips started growing in leaps and bounds at the turn of the millennium, when graphical processing units (GPUs) began to be harnessed for nongraphical calculations, a trend that has become increasingly pervasive over the past decade. But the computing demands of deep learning have been rising even faster. This dynamic has spurred engineers to develop electronic hardware accelerators specifically targeted to deep learning, Google’s Tensor Processing Unit (TPU) being a prime example.

Here, I will describe a very different approach to this problem—using optical processors to carry out neural-network calculations with photons instead of electrons. To understand how optics can serve here, you need to know a little bit about how computers currently carry out neural-network calculations. So bear with me as I outline what goes on under the hood.

Almost invariably, artificial neurons are constructed using special software running on digital electronic computers of some sort. That software provides a given neuron with multiple inputs and one output. The state of each neuron depends on the weighted sum of its inputs, to which a nonlinear function, called an activation function, is applied. The result, the output of this neuron, then becomes an input for various other neurons.
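The weighted-sum-plus-activation step above can be sketched in a few lines of Python. This is purely illustrative: the `neuron` function and its weights are hypothetical, and ReLU is assumed as the activation function (real networks use a variety of activations).

```python
import numpy as np

def neuron(inputs, weights, bias):
    """One artificial neuron: a weighted sum of its inputs plus a bias,
    passed through a nonlinear activation function (ReLU here)."""
    z = float(np.dot(weights, inputs)) + bias  # weighted sum of inputs
    return max(0.0, z)                         # ReLU: clamp negatives to 0

# Three inputs feeding a single neuron
out = neuron(np.array([1.0, 2.0, -1.0]),
             np.array([0.5, -0.25, 1.0]),
             bias=0.1)
```

In a layered network, `out` would then be fed forward as one of the inputs to each neuron in the next layer.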

For computational efficiency, these neurons are grouped into layers, with neurons connected only to neurons in adjacent layers. The benefit of arranging things that way, as opposed to allowing connections between any two neurons, is that it allows certain mathematical tricks of linear algebra to be used to speed the calculations.

While they are not the whole story, these linear-algebra calculations are the most computationally demanding part of deep learning, particularly as the size of the network grows. This is true for both training (the process of determining what weights to apply to the inputs for each neuron) and for inference (when the neural network is providing the desired results).

What are these mysterious linear-algebra calculations? They aren’t so complicated really. They involve operations on matrices, which are just rectangular arrays of numbers—spreadsheets if you will, minus the descriptive column headers you might find in a typical Excel file.

This is great news because modern computer hardware has been very well optimized for matrix operations, which were the bread and butter of high-performance computing long before deep learning became popular. The relevant matrix calculations for deep learning boil down to a large number of multiply-and-accumulate operations, whereby pairs of numbers are multiplied together and their products are added up.
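To make that decomposition concrete, here is a small sketch (illustrative only, with a hypothetical `matmul_macs` helper) of a matrix product written out as the multiply-and-accumulate operations it reduces to:

```python
import numpy as np

def matmul_macs(A, B):
    """Matrix product spelled out as multiply-and-accumulate operations:
    every output entry is a running sum of pairwise products taken from
    one row of A and one column of B."""
    n, k = A.shape
    k2, m = B.shape
    assert k == k2, "inner dimensions must match"
    C = np.zeros((n, m))
    for i in range(n):
        for j in range(m):
            acc = 0.0
            for p in range(k):
                acc += A[i, p] * B[p, j]  # one multiply-and-accumulate
            C[i, j] = acc
    return C
```

A dense layer with a thousand neurons acting on a thousand inputs already implies a million of these inner accumulation loops, which is why accelerator hardware targets exactly this operation.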

Over the years, deep learning has required an ever-growing number of these multiply-and-accumulate operations. Consider LeNet, a pioneering deep neural network designed to do image classification. In 1998 it was shown to outperform other machine-learning techniques for recognizing handwritten letters and numerals. But by 2012 AlexNet, a neural network that crunched through about 1,600 times as many multiply-and-accumulate operations as LeNet, was able to recognize thousands of different types of objects in images.

Advancing from LeNet’s initial success to AlexNet required almost 11 doublings of computing performance. During the 14 years that advance took, Moore’s law provided much of that increase. The challenge has been to keep this trend going now that Moore’s law is running out of steam. The usual solution is simply to throw more computing resources—along with time, money, and energy—at the problem.

Multiplying With Light

As a result, training today’s large neural networks often has a significant environmental footprint. One 2019 study found, for example, that training a certain deep neural network for natural-language processing produced five times the CO2 emissions typically associated with driving an automobile over its lifetime.

Improvements in digital electronic computers allowed deep learning to blossom, to be sure. But that doesn’t mean that the only way to carry out neural-network calculations is with such machines. Decades ago, when digital computers were still relatively primitive, some engineers tackled difficult calculations using analog computers instead. As digital electronics improved, those analog computers fell by the wayside. But it may be time to pursue that strategy once again, in particular when the analog computations can be done optically.

It has long been known that optical fibers can support much higher data rates than electrical wires. That’s why all long-haul communication lines went optical, starting in the late 1970s. Since then, optical data links have replaced copper wires for shorter and shorter spans, all the way down to rack-to-rack communication in data centers. Optical data communication is faster and uses less power. Optical computing promises the same advantages.

But there is a big difference between communicating data and computing with it. And this is where analog optical approaches hit a roadblock. Conventional computers are based on transistors, which are highly nonlinear circuit elements—meaning that their outputs aren’t just proportional to their inputs, at least when used for computing. Nonlinearity is what lets transistors switch on and off, allowing them to be fashioned into logic gates. This switching is easy to accomplish with electronics, for which nonlinearities are a dime a dozen. But photons follow Maxwell’s equations, which are annoyingly linear, meaning that the output of an optical device is typically proportional to its inputs.

The trick is to use the linearity of optical devices to do the one thing that deep learning relies on most: linear algebra.

To illustrate how that can be done, I’ll describe here a photonic device that, when coupled to some simple analog electronics, can multiply two matrices together. Such multiplication combines the rows of one matrix with the columns of the other. More precisely, it multiplies pairs of numbers from these rows and columns and adds their products together—the multiply-and-accumulate operations I described earlier. My MIT colleagues and I published a paper about how this could be done in 2019. We’re working now to build such an optical matrix multiplier.

The basic computing unit in this device is an optical element called a beam splitter. Although its makeup is in fact more complicated, you can think of it as a half-silvered mirror set at a 45-degree angle. If you send a beam of light into it from the side, the beam splitter will allow half that light to pass straight through it, while the other half is reflected from the angled mirror, causing it to bounce off at 90 degrees from the incoming beam.

Now shine a second beam of light, perpendicular to the first, into this beam splitter so that it impinges on the other side of the angled mirror. Half of this second beam will similarly be transmitted and half reflected at 90 degrees. The two output beams will combine with the two outputs from the first beam. So this beam splitter has two inputs and two outputs.

To use this device for matrix multiplication, you generate two light beams with electric-field amplitudes that are proportional to the two numbers you want to multiply. Let’s call these field amplitudes x and y. Shine those two beams into the beam splitter, which will combine them. This particular beam splitter does that in a way that will produce two outputs whose electric fields have values of (x + y)/√2 and (x − y)/√2.

In addition to the beam splitter, this analog multiplier requires two simple electronic components—photodetectors—to measure the two output beams. They don’t measure the electric-field amplitude of those beams, though. They measure the power of a beam, which is proportional to the square of its electric-field amplitude.

Why is that relation important? To understand that requires some algebra—but nothing beyond what you learned in high school. Recall that when you square (x + y)/√2 you get (x² + 2xy + y²)/2. And when you square (x − y)/√2, you get (x² − 2xy + y²)/2. Subtracting the latter from the former gives 2xy.

Pause now to contemplate the significance of this simple bit of math. It means that if you encode a number as a beam of light of a certain intensity and another number as a beam of another intensity, send them through such a beam splitter, measure the two outputs with photodetectors, and negate one of the resulting electrical signals before summing them together, you will have a signal proportional to the product of your two numbers.
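That signal chain is easy to check numerically. Here is a toy simulation (an idealized sketch that treats fields and detector signals as exact real numbers, ignoring noise and loss):

```python
import math

def optical_multiply(x, y):
    """Simulate the beam-splitter multiplier: the splitter emits fields
    (x + y)/sqrt(2) and (x - y)/sqrt(2); photodetectors report power,
    the square of each field; negating one detector signal and summing
    the two gives 2xy, so halving recovers the product x*y."""
    field1 = (x + y) / math.sqrt(2)
    field2 = (x - y) / math.sqrt(2)
    power1, power2 = field1**2, field2**2  # photodetector readings
    return (power1 - power2) / 2           # (2xy)/2 = x*y
```

Running `optical_multiply` with any pair of inputs reproduces their ordinary product, which is the whole point: the nonlinear squaring happens in the detectors, not in the optics.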

My description has made it sound as though each of these light beams must be held steady. In fact, you can briefly pulse the light in the two input beams and measure the output pulse. Better yet, you can feed the output signal into a capacitor, which will then accumulate charge for as long as the pulse lasts. Then you can pulse the inputs again for the same duration, this time encoding two new numbers to be multiplied together. Their product adds some more charge to the capacitor. You can repeat this process as many times as you like, each time carrying out another multiply-and-accumulate operation.

Using pulsed light in this way allows you to perform many such operations in rapid-fire sequence. The most energy-intensive part of all this is reading the voltage on that capacitor, which requires an analog-to-digital converter. But you don’t have to do that after each pulse—you can wait until the end of a sequence of, say, N pulses. That means that the device can perform N multiply-and-accumulate operations using the same amount of energy to read the answer whether N is small or large. Here, N corresponds to the number of neurons per layer in your neural network, which can easily number in the thousands. So this strategy uses very little energy.
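In numerical form, the pulse sequence behaves like a dot product accumulated into a single stored charge. Again, this is an idealized sketch (the `pulsed_dot` helper is hypothetical, and a real device contends with noise, loss, and detector limits):

```python
import math

def pulsed_dot(xs, ys):
    """Simulate N pulsed multiply-and-accumulate steps: each pulse pair
    is multiplied optically and its product adds charge to a capacitor;
    the energy-costly analog-to-digital readout happens only once, at
    the end of the whole sequence."""
    charge = 0.0
    for x, y in zip(xs, ys):
        field1 = (x + y) / math.sqrt(2)
        field2 = (x - y) / math.sqrt(2)
        charge += (field1**2 - field2**2) / 2  # one MAC worth of charge
    return charge  # the single ADC read yields the full dot product
```

Because the readout cost is paid once per sequence rather than once per pulse, the per-operation energy shrinks as N grows.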

Sometimes you can save energy on the input side of things, too. That’s because the same value is often used as an input to multiple neurons. Rather than that number being converted into light multiple times—consuming energy each time—it can be transformed just once, and the light beam that is created can be split into many channels. In this way, the energy cost of input conversion is amortized over many operations.

Splitting one beam into many channels requires nothing more complicated than a lens, but lenses can be tricky to put onto a chip. So the device we are developing to perform neural-network calculations optically may well end up being a hybrid that combines highly integrated photonic chips with separate optical elements.

I’ve outlined here the strategy my colleagues and I have been pursuing, but there are other ways to skin an optical cat. Another promising scheme is based on something called a Mach-Zehnder interferometer, which combines two beam splitters and two fully reflecting mirrors. It, too, can be used to carry out matrix multiplication optically. Two MIT-based startups, Lightmatter and Lightelligence, are developing optical neural-network accelerators based on this approach. Lightmatter has already built a prototype that uses an optical chip it has fabricated. And the company expects to begin selling an optical accelerator board that uses that chip later this year.

Another startup using optics for computing is Optalysis, which hopes to revive a rather old concept. One of the first uses of optical computing back in the 1960s was for the processing of synthetic-aperture radar data. A key part of the challenge was to apply to the measured data a mathematical operation called the Fourier transform. Digital computers of the time struggled with such things. Even now, applying the Fourier transform to large amounts of data can be computationally intensive. But a Fourier transform can be carried out optically with nothing more complicated than a lens, which for some years was how engineers processed synthetic-aperture data. Optalysis hopes to bring this approach up to date and apply it more widely.

There is also a company called Luminous, spun out of Princeton University, which is working to create spiking neural networks based on something it calls a laser neuron. Spiking neural networks more closely mimic how biological neural networks work and, like our own brains, are able to compute using very little energy. Luminous’s hardware is still in the early phase of development, but the promise of combining two energy-saving approaches—spiking and optics—is quite exciting.

There are, of course, still many technical challenges to be overcome. One is to improve the accuracy and dynamic range of the analog optical calculations, which are nowhere near as good as what can be achieved with digital electronics. That’s because these optical processors suffer from various sources of noise and because the digital-to-analog and analog-to-digital converters used to get the data in and out are of limited accuracy. Indeed, it’s difficult to imagine an optical neural network operating with more than 8 to 10 bits of precision. While 8-bit electronic deep-learning hardware exists (the Google TPU is a good example), this industry demands higher precision, especially for neural-network training.
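To get a feel for what 8 bits of precision allows, here is a minimal uniform quantizer; the function and its parameters are illustrative, not the transfer function of any real converter. A value in the range [−1, 1] snapped to one of 2⁸ levels can be off by up to half a step, about 0.004, which bounds how finely such hardware can represent weights and activations.

```python
def quantize(value, bits=8, full_scale=1.0):
    """Round a value in [-full_scale, full_scale] to the nearest
    of 2**bits evenly spaced levels, as an idealized converter would."""
    levels = 2 ** bits
    step = 2 * full_scale / levels
    # Clamp to the representable range, then snap to the nearest level.
    value = max(-full_scale, min(full_scale, value))
    return round(value / step) * step

x = 0.123456789
print(quantize(x, bits=8))                            # 0.125
print(abs(quantize(x, bits=8) - x) <= 1.0 / 2 ** 8)   # error within half a step
```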

There is also the difficulty of integrating optical components onto a chip. Because those components are tens of micrometers in size, they can’t be packed nearly as tightly as transistors, so the required chip area adds up quickly. A 2017 demonstration of this approach by MIT researchers involved a chip that was 1.5 millimeters on a side. Even the biggest chips are no larger than several square centimeters, which places limits on the sizes of matrices that can be processed in parallel this way.

There are many additional questions on the computer-architecture side that photonics researchers tend to sweep under the rug. What’s clear though is that, at least theoretically, photonics has the potential to accelerate deep learning by several orders of magnitude.

Based on the technology that’s currently available for the various components (optical modulators, detectors, amplifiers, analog-to-digital converters), it’s reasonable to think that the energy efficiency of neural-network calculations could be made 1,000 times better than today’s electronic processors. Making more aggressive assumptions about emerging optical technology, that factor might be as large as a million. And because electronic processors are power-limited, these improvements in energy efficiency will likely translate into corresponding improvements in speed.

Many of the concepts in analog optical computing are decades old. Some even predate silicon computers. Schemes for optical matrix multiplication, and even for optical neural networks, were first demonstrated in the 1970s. But this approach didn’t catch on. Will this time be different? Possibly, for three reasons.

First, deep learning is genuinely useful now, not just an academic curiosity. Second, we can’t rely on Moore’s Law alone to continue improving electronics. And finally, we have a new technology that was not available to earlier generations: integrated photonics. These factors suggest that optical neural networks will arrive for real this time—and the future of such computations may indeed be photonic.

About the Author

Ryan Hamerly is a senior scientist at NTT Research and a visiting scientist at MIT’s Quantum Photonics Laboratory.

https://neurosciencenews.com/vr-neuroplasticity-memory-learning-18826/

Virtual Reality Boosts Brain Rhythms Crucial For Neuroplasticity, Learning and Memory

June 28, 2021

Summary: Immersive virtual reality enhances theta and eta waves in the hippocampus, improving memory, learning and neuroplasticity.

Source: UCLA

A new discovery in rats shows that the brain responds differently in immersive virtual reality environments versus the real world. The finding could help scientists understand how the brain brings together sensory information from different sources to create a cohesive picture of the world around us. It could also pave the way for “virtual reality therapy” for learning and memory-related disorders ranging from ADHD and autism to Alzheimer’s disease, epilepsy and depression.

Mayank Mehta, PhD, is the head of W. M. Keck Center for Neurophysics and a professor in the departments of physics, neurology, and electrical and computer engineering at UCLA. His laboratory studies a brain region called the hippocampus, which is a primary driver of learning and memory, including spatial navigation. To understand its role in learning and memory, the hippocampus has been extensively studied in rats as they perform spatial navigation tasks.

When rats walk around, neurons in this part of the brain synchronize their electrical activity at a rate of 8 pulses per second, or 8 Hz. This is a type of brain wave known as the “theta rhythm,” and it was discovered more than six decades ago.

Disruptions to the theta rhythm also impair the rat’s learning and memory, including the ability to learn and remember a route through a maze. Conversely, a stronger theta rhythm seems to improve the brain’s ability to learn and retain sensory information.

Therefore, researchers have speculated that boosting theta waves could improve or restore learning and memory functions. But until now, nobody has been able to strengthen these brain waves.

“If that rhythm is so important, can we use a novel approach to make it stronger?” asks Dr. Mehta. “Can we retune it?”

Damage to neurons in the hippocampus can interfere with people’s perception of space – “why Alzheimer’s disease patients tend to get lost,” says Dr. Mehta. He says he suspected that the theta rhythm might play a role in this perception. To test that hypothesis, Dr. Mehta and his colleagues invented an immersive virtual reality environment for the rats that was far more immersive than commercially available VR for humans.

The VR allows the rats to see their own limbs and shadows, and eliminates certain unsettling sensations such as the delays between head movement and scene changes that can make people dizzy.

“Our VR is so compelling,” Dr. Mehta says, “that the rats love to jump in and happily play games.”

To measure the rats’ brain rhythms, the researchers placed tiny electrodes, thinner than a human hair, into the brain among the neurons.

“It turns out that amazing things happen when the rat is in virtual reality,” says Dr. Mehta. “He goes to the virtual fountain and drinks water, takes a nap there, looks around and explores the space as if it is real.”

Remarkably, Dr. Mehta says, the theta rhythm becomes considerably stronger when the rats run in the virtual space than in their natural environment.

“We were blown away when we saw this huge effect of VR experience on theta rhythm enhancement,” he says.

This discovery suggests that the unique rhythm is an indicator of how the brain discerns whether an experience is real or simulated. For instance, as you walk toward a doorway, the input from your eyes will show the doorway getting larger. “How do I know I took a step and it’s not the wall coming at me?” Dr. Mehta says.

Answer: The brain uses other information, such as the shift of balance from one foot to the other, the acceleration of your head through space, the relative changes in the positions of other stationary objects around you, and even the feeling of air moving against your face to decide that you are moving, not the wall.

On the other hand, a person “moving” through a virtual reality world would experience a very different set of stimuli.

“Our brain is constantly doing this, it’s checking all kinds of things,” Dr. Mehta says. The different theta rhythms, he says, may represent different ways that brain regions communicate with each other in the process of gathering all this information.

When they looked closer, Dr. Mehta’s team also discovered something else surprising. Neurons consist of a compact cell body and long tendrils, called dendrites, that snake out and form connections with other neurons. When the researchers measured activity in the cell body of a rat brain experiencing virtual reality, they found a different electrical rhythm compared with the rhythm in the dendrites. “That was really mind blowing,” Dr. Mehta said. “Two different parts of the neuron are going in their own rhythm.”


The researchers dubbed this never-before-seen rhythm “eta.” It turned out this rhythm was not limited to the virtual reality environment: with extremely precise electrode placement, the researchers were then able to detect the new rhythm in rats walking around a real environment. Being in VR, however, strengthened the eta rhythm – something no other study in the past sixty years has been able to do so strongly, either using pharmacological tools or otherwise, according to Dr. Mehta.

https://www.bloomberg.com/opinion/articles/2021-06-28/crispr-gene-editing-breakthrough-by-intellia-ntla-is-a-big-deal

Crispr Gene-Editing Breakthrough Is a Big Deal. How Big?

For the first time, the gene-modifying technology was shown to work in the human body to treat disease, offering huge hope for further uses. Here’s what you need to know.

By Sam Fazeli | June 28, 2021, 8:53 AM PDT

The era of curing disease by tweaking DNA just got a lot closer. Photographer: Gregor Fischer/dpa/AP images

Sam Fazeli is senior pharmaceuticals analyst for Bloomberg Intelligence and director of research for EMEA.


Sam Fazeli, a Bloomberg Opinion contributor who covers the pharmaceutical industry for Bloomberg Intelligence, answers questions after Intellia Therapeutics Inc. and Regeneron Pharmaceuticals Inc. released promising findings from the first human clinical trial using gene-editing Crispr technology in the body to treat a disease. Intellia shares surged more than 40% on the news. The conversation has been edited and condensed.

This is the first successful in-human gene editing with Crispr that we know of, and it seems to work really well. Is this a big deal?

The answer is an emphatic yes. This new data, even though from just a handful of patients, shows not only that this new technology works in humans, but that it is also very safe, at least at the two doses that were tested in this trial. It’s important because it’s the first time that scientists have been able to modify DNA in a patient’s cells in a very specific way. This fascinating technology, very simply, is a molecular scissors (Cas9) that is led by a “guide RNA” to cut a specific part of DNA. Bacteria use this method to inactivate invading viruses. In Intellia’s trial, the system is used to deactivate the gene for a misfolded protein known as transthyretin, or TTR, which can build up in patients in some rare cases, leading to neurological and heart damage that is eventually fatal.

Why might it work better than other forms of genetic therapy that are already approved?

The beauty of what Intellia has done is that it has produced a highly specific “knockdown” of one disease gene, though more data is needed to prove that it is 100% specific in humans. The other way that people have been tackling this type of disease involves either using antibodies targeted against a protein or some form of RNA-silencing or anti-sense method to stop the production of a protein. Indeed, there is already an approved drug from Alnylam (Onpattro) for the rare disease that Intellia is targeting. But these are drugs that need continuous therapy, whereas the gene-editing methods should be one and done.

This sort of editing is permanent — that’s probably a good thing in terms of having the desired effect on disease, but that also sounds scary. Is there any reason it might be problematic?

Not really, especially if scientists prove that it is indeed as specific as the technology is designed to be and has appeared to be so far in animal studies. But more data is needed to prove that. The problem is that the human body is very complex, and there will always be some risk that it works much better in some than others. Indeed, the data showed that the effects from the same dose varied among three different people. One good thing about the Intellia technology is that it uses lipid nanoparticles to deliver its payload — similar to those used in the mRNA vaccines from Pfizer-BioNTech and Moderna. This reduces the risks that are associated with using viruses, which most other gene therapies use.

Are there any safety concerns?

The data showed few side effects, which was amazing. But the company is going to go to a higher dose in the next set of patients — one that’s three times higher than the highest dose tested so far. This may increase the side effects. But the technology may be good enough at cutting the problem protein at the current dosage that Intellia may not need to take that risk. There could also be an immune response to the Cas9 protein the system needs to do its job correctly. If that is the case, it is unlikely to cause a problem by itself but may make delivering a second dose problematic, which means that the effect from the first needs to stick.

Speaking of which, what about durability? Is the effect going to last?

This is a key question. In animal studies, the effect has been maintained for at least 52 weeks. But in humans, the data stops at 28 days for now. So the company needs to show that the reductions in protein levels in humans are maintained. Otherwise, as noted above, the opportunity for redosing with the same drug is likely very limited. Other companies are working on different versions of the technology that may allow for redosing, but a lot more work needs to be done to see if they’re needed or that’s possible.

What’s the path forward for Intellia and for this sort of gene editing in general? When might it be more broadly available?

Durability and long-term safety will be important because the drug may cost hundreds of thousands of dollars or more. But Intellia also needs to show clinical benefit in the patients, though I suspect, given the direct correlation between TTR levels and disease, this is not a huge risk in this setting. On the path forward, these findings prove the feasibility of “gene knockout” in humans. Intellia has other programs using this technique to treat other diseases, including a swelling disorder called hereditary angioedema. The next step is whether scientists can literally “edit” a gene to correct a mutation. Intellia and several other gene-editing companies have such programs under way. The critical questions are: Can real gene editing be done at high efficiency in patients, and can this be done in any organ and not just those that are relatively easily accessible, such as the liver? The other issue is whether the system can be used to correct multiple mutations because not all diseases are caused by just one.

Is Crispr definitely the way forward in gene editing, or could we do even better?

There are other technologies in development, such as TALENs, Sangamo’s zinc-finger nucleases, and site-specific nucleases. But in the end, they all try to achieve the same thing. Intellia’s data shows that, at least in some settings where the target organ is easy to access, this can be done inside a human being. This opens the door for all these technologies to be tested. It will take additional study and in-human data to show any advantages or problems.

Gene editing comes with some ethical concerns — how do we make sure it’s used in the right way?

That’s a big societal issue. We have seen it play out when it came to cloning Dolly the sheep. In the end, it is the job of science to come up with new technologies and for regulators and governments to ensure that they are applied for the right reasons, in the right way and in the right context.

You touched on the likely high cost of the drug above. Other gene therapies have been extremely expensive, with companies invoking the “one and done” aspect of these treatments as a justification. Will this follow the same path?

I think this is where we will have to have more discussion and find better ways of encouraging innovation without making treatment unaffordable. I have heard of buying clubs, for example, for Novartis’s gene therapy Zolgensma for spinal muscular atrophy, a rare but devastating disease for some babies. But at a cost of about $2 million, this is extremely difficult to finance. If gene editing does eventually deliver on its promise, given the number of single-gene diseases, it could create a significant cost burden on society that is still reeling from the economic effects of the pandemic.

https://www.dailymail.co.uk/health/article-9734041/Can-staying-night-beat-insomnia-Experts-believe-reset-body-clock.html


Can staying up all night beat your insomnia? It sounds bizarre, but experts believe it can ‘reset’ your body clock and help to end sleeplessness… and the depression so often linked to it

By LOIS ROGERS FOR THE DAILY MAIL

PUBLISHED: 17:31 EDT, 28 June 2021 | UPDATED: 19:38 EDT, 28 June 2021


After a harrowing childhood, a troubled first marriage and the loss of two of her children, Maz Rhodes suffered for years with depression, as well as insomnia.

In fact, the two have strong links — according to one major study by psychiatrists at Bristol University, about three quarters of patients with depression experience insomnia to varying degrees, with more women affected than men.

‘Every day, no matter how tired I was, I would be wide-awake at 11pm,’ says Maz, a 64-year-old grandmother from Barton-upon-Humber in Lincolnshire. 

‘You end up not wanting to go to bed because of the fear of not sleeping.

‘I would stay up until two, not get to sleep until four and still have to get up first thing. It’s not a way to live.’

Over the years she had been prescribed different medications at different doses, with little effect, she says.


But she has now benefited from a new treatment that has dramatically improved her sleep and significantly reduced her symptoms of depression.

The treatment was chronotherapy, its use pioneered in Britain by Professor David Veale, a psychiatrist at the Institute of Psychiatry at the Maudsley Hospital in South London.

Chronotherapy involves a ‘reset’ of the body clock, which is said to improve sleep and, as a result, mood.

The process itself takes five days. The first night the patient stays up all night; over the following days they go to bed early, but progressively later — the next day they sleep between 5pm and 1am; then 7pm and 3am, and on the fourth day, 9pm and 5am.

On the fifth and subsequent nights they sleep between 11pm and 7am.

For two hours before their early bed times they wear light-filtering amber goggles — this stimulates the release of the sleep hormone melatonin by reducing the amount of daylight perceived by the brain. And when they wake up they have to expose their eyes and face to bright light from a light box.


An analysis of 16 studies published in the Journal of Affective Disorders in 2019 suggests the treatment can have an immediate impact (within days) — the researchers concluded it’s ‘generally superior to other therapies such as psychotherapy, anti-depressants, exercise or light therapy’ for treating depression in the short-term.

Professor Veale suggests the immediacy of the effect is significant, as ‘standard treatments for depression — medication, talking therapies and other types of behaviour change may take five to six weeks to get a response’.

He adds: ‘I became interested in this because I was looking for a more rapid treatment for depression — something that was acceptable to patients and not just another drug.’

While it’s not clear how chronotherapy, which was first investigated as a treatment for depression 40 years ago, works, ‘it seems to re-synchronise the body’s circadian rhythm with the sun, the moon and daylight’, says Professor Veale.

‘And people with depression are often ‘misaligned’ in this way, waking up at night and going to bed early and feeling tired.’

He says chronotherapy is like resetting your computer using the ‘control, alt, delete’ mechanism, which lets you close down faulty programmes — ‘it helps people sleep better and this then helps their mood’.

In a small, as yet unpublished trial of 60 patients at the Maudsley in 2018, where half underwent a five-day chronotherapy course, 50 per cent of these patients’ symptoms improved in a week, and 70 per cent within six months.

‘We wanted to see if our patients were able to be supported to stay up at night [the first night is spent at hospital, then the patient goes home] and stick with the timetable and we think it’s promising,’ said Professor Veale.

Chronotherapy is not recognised as a treatment by the National Institute for Health and Care Excellence and Dr Gertrude Seneviratne, spokeswoman for the Royal College of Psychiatrists, told Good Health that while there is ‘some evidence of potential benefits of chronotherapy for those with severe depression, more extensive trials are needed to make a more definitive judgment around its value’.

But Professor Veale suggests getting funding for such trials is a stumbling block. ‘Chronotherapy works very well but people aren’t interested in it because it’s not patentable and you can’t make money out of it.

‘Also, it’s not part of the culture — psychiatrists do medication and psychologists do talking therapies.

‘There’s also scepticism that patients can do it and it’s difficult to organise — it’s easier if you’re in hospital and you have a nurse to help you stay up and go to bed at different times but it costs £5,000 as a private inpatient.

‘It’s more difficult for outpatients because someone has to stay up with them.

‘I’ve helped one or two people do this at home but, for most people, it’s not realistic to do it on their own,’ he says, adding that those with depressive symptoms need proper supervision.


‘That’s why we are trying to build an evidence base for the NHS.’

He hopes to set up a charity that would help with support and advice. Chronotherapy is available at some centres in Europe but there have been similar funding problems.

‘When I first saw patients with severe depression talking and being normal after this treatment, I thought doctors would jump at the opportunity to treat depression, but no,’ says Professor Anna Wirz-Justice, from the Centre for Chronobiology at the University of Basel in Switzerland, who has been treating patients with the therapy for 40 years.

‘But it can’t be patented so there is no money in it and it’s difficult and expensive to do the sort of trials needed to prove it works.’

Maz came across Professor Veale’s trial when researching alternative treatments online. Although she has successfully raised her three daughters (Julie, 40, Penny, 37, and Belinda, 31) to adulthood and has four young grandchildren, she has struggled with depression and chronic insomnia that other treatments had hardly touched.

Her story is a traumatic one — after Maz’s troubled childhood, her eldest son Damian died of a rare cancer when he was 11.

Storing up trouble 

What’s the best way to store your food? This week: Watermelon

Keeping your watermelon out of the fridge can increase the amount of cell-protective antioxidants found in the fruit.

According to research from the U.S. Department of Agriculture, storing the fruit at 21°C (roughly room temperature) leads to a 40 per cent increase in lycopene, a compound linked to heart health, and also boosts levels of beta-carotene (which helps with vision) by at least 50 per cent. Both of these compounds contribute to the bright colour of the fruit.

Storing watermelons at cooler temperatures (13°C or lower) didn’t improve their antioxidant content.

The researchers suggested that the increased antioxidant levels in the warmer fruit were due to the ripening process. However, once you have cut the fruit open, you should refrigerate it, otherwise it will spoil faster.

Her second son Ian died two hours after being born — ‘they brought him to me to hold and that was it. I had to let him go so they could do a post-mortem. He weighed nine pounds six, but they said his lungs were too small.

‘I never really found out why he had died, never understood what was wrong with him and I never got closure’.

With the backing of her third husband, Ian Purcell, 52, Maz approached Professor Veale. ‘I went down to London for an interview and was accepted straight away. I went back to the Maudsley for the first night of treatment. There was me, another man and the sitter [a nurse]. I took my knitting and the book but we talked all through the night and at 7am we were released.

‘My husband picked me up off the train at 3pm and I went to bed at 5pm and followed the routine for the next four days.’

‘My girls noticed I had much more life. After this, I was the first to bed.’ Chronotherapy has not ‘cured’ her: she remains on the anti-depressant sertraline and still sometimes struggles with her sleep routine.

‘It’s like being on a diet: once you nibble on the biscuit it’s difficult to get back into it, but I do feel I can control my sleep now and I know what to do when it slips. I now go to bed at 11.30pm instead of 1.30 or 2am and I know I can actually go to sleep.’

https://www.realsimple.com/health/mind-mood/emotional-health/emotional-hangover-after-therapy

The Real (and Very Normal) Reason You’re So Exhausted After Therapy

This is your post-therapy emotional hangover, explained.

By Elizabeth Yuko | June 28, 2021

As someone with years of therapy under her belt, I have firsthand experience with the range of feelings that can result from a session. Sometimes I’ll leave feeling light as a feather, as if a major weight has been lifted from my chest, after having a certain realization or, with help from my therapist, learning to reframe a previously distressing situation. Knowing that I have a better grasp on some aspect of my mental health can even be reenergizing.

But, at least for me, these sessions are the exception, not the norm. More often than not, a deep fatigue (an emotional hangover) will set in after an appointment. Sometimes it’s immediate; other times it hits a few hours later, as if I had taken shots of NyQuil in the middle of the day. Assuming this meant I was even more broken than I’d originally thought, I brought it up with my therapist when we first started working together, and she assured me that both of my post-therapy responses (feeling either energized or exhausted) are completely normal. Normal, maybe, but that doesn’t explain why it happens. Here, two mental health professionals help demystify the emotional (and sometimes even physical) exhaustion that can hit after a session of therapy.

RELATED: 5 Tips to Find Affordable Therapy 


Stress can make us tired.

Chances are, the topics you discuss in therapy are ones that cause you stress in your everyday life. If you’ve ever gone to a physician because you weren’t feeling well but the reason wasn’t entirely clear, they probably asked about your stress levels and explained that stress symptoms can include everything from exhaustion and insomnia to headaches and dizziness. And although being stressed doesn’t necessarily mean a person is depressed, one common symptom of depression is a change in how much you sleep, so if you’re going to therapy to deal with depression (at least in part), that can contribute to the feeling of fatigue.

What’s behind your post-therapy exhaustion?

Although everyone who decides to work with a therapist does so for their own reasons, it would be difficult to find someone (in or out of therapy) who doesn’t experience some type of stress. We already know that for many people, the stress response is physically exhausting, so it makes sense that we can get tired after discussing something stressful in a session, according to Adam L. Fried, PhD, a clinical psychologist and director of the clinical psychology program at Midwestern University in Glendale, Ariz.

“Talking about something that has a high emotional impact can be extremely stressful and leave us feeling physically spent,” he says. “Some people who have been in highly stressful situations-like taking a really important test or exam, being evaluated, or having a tense meeting with your boss for an annual review-have experienced a similar physical exhaustion after the stressful situation ended.” 

Often in those situations, Fried says, people are surprised at how suddenly exhaustion can set in once the stress-inducing event is over-something that can also happen after therapy. 

“Talk therapy is often a release, and many are releasing things they have stored up for years,” he explains. “That process of releasing and sharing with another person can be emotionally exhausting, which can also assume the form of physical fatigue. I think for some, they don’t realize the energy expended on keeping themselves going with this level of stress; it’s only after they ‘unload’ some of what they’ve been carrying that they realize how exhausting it’s truly been for them.” 

RELATED: 7 Different Types of Therapy-and How to Choose the Right One for You 

It doesn’t happen to everyone or every time.

It’s important to note that while some people may leave a therapy appointment so fatigued they’re unable to work for the rest of the day, that’s not the case for everyone. As I know from personal experience, it isn’t necessarily about being a particular type of person that makes you more or less susceptible to post-therapy sleepiness-it can also depend on what you discuss in a particular session. 

“Not all topics may have the same emotional impact,” Fried explains. “There may be different times during the therapy process where people feel more or less exhausted, depending on what they’re talking about. For example, it may be several sessions into a therapeutic relationship before you start discussing a particular traumatic event; these sessions may feel more exhausting than others.” In addition to that, Fried says that someone’s energy levels after therapy may also fluctuate depending on what else is happening in their life. For instance, someone working longer hours on an important project may end up experiencing more fatigue after their therapy session than usual. 

Another reason some may need a post-therapy nap, according to Lise LeBlanc, a registered psychotherapist specializing in trauma and author of the PTSD Guide, is because it’s part of a therapist’s job to help stir up and shift painful mental and emotional patterns. 

“I would compare it to turning a snow globe upside down and shaking it,” she says. “This upheaval can be draining for anyone, but for introverts or those who experience social anxiety, an hour of intense interaction with a therapist-talking about stressors, traumatic experiences, and difficult emotions-can be especially exhausting.” 

On the other hand, LeBlanc explains, extroverts who are energized by social interaction and/or enjoy processing their thoughts and emotions, may not experience this same level of exhaustion after an appointment, or at least not as often.

The difference between emotional exhaustion and an emotional hangover

Sometimes people will refer to the onset of fatigue after a therapy session as “emotional exhaustion,” but that may not be the most accurate description of what they’re experiencing. 

According to Fried, emotional exhaustion happens when stress levels are so high that someone feels constantly drained, overwhelmed, fatigued and irritable. 

“When you regularly experience stress levels that stretch your mental and emotional resources, in time, you become emotionally exhausted and depleted,” LeBlanc explains. “You can no longer properly recharge and so you wake up in the morning feeling just as exhausted as you did when you collapsed into bed the night before.” 

An “emotional hangover,” on the other hand, is the feeling of being emotionally depleted after an emotional interaction, like a stressful conversation with your boss, an argument with your partner, or a therapy session, says LeBlanc. “The symptoms of emotional exhaustion and an emotional hangover can be quite similar and include things like feeling emotionally unstable, drained, irritable, mentally foggy, and having physical pain,” she explains. “However, where emotional exhaustion results from prolonged accumulations of stress, an emotional hangover is the result of expending too much emotional energy in a short period of time.” 

It can also get confusing, Fried says, because some people go to therapy in order to deal with emotional exhaustion. “The exhaustion that some people talk about after having an intense therapy session is usually somewhat different because they often don’t feel as irritable and overwhelmed, as is often the case with emotional exhaustion,” he adds.

How to recover after a therapy session

If you find yourself exhausted after a therapy session, both Fried and LeBlanc recommend taking the time to care for yourself. Specifically, Fried suggests activities that can help you feel more focused, less stressed, and more energized-like a guided meditation, walking, or even just sitting quietly outside with no distractions. 

“I usually ask clients to schedule an extra 15 to 20 minutes after their session to have a short nap or meditation,” LeBlanc says. “After stirring up difficult thoughts and emotions, we need to give ourselves time to process and settle, otherwise, there is no way for the mind to consolidate insights, shift mental patterns, and release emotions.” It may help, if you have the option, to schedule therapy sessions purposely at a time when you know you’ll have at least a short window between that and your next responsibility. 

Unfortunately, not everyone has the option of taking as much time as they need to recover after therapy. If that’s the case, LeBlanc says to “do so with self-compassion.” Take five minutes to sit quietly, breathe, or take a brief walk around the block. 

Finally, keep in mind what my therapist initially told me when I brought up my post-session exhaustion: It’s normal. “It may help to know that this isn’t necessarily an unusual phenomenon,” Fried says, “and that it’s often a sign that what you’re working on or what you shared was something of significant emotional impact.”

https://medicalxpress.com/news/2021-06-explores-perception-internal-bodily-concept.html


Study explores how the perception of internal bodily signals influences the concept of self

by Ingrid Fadelli , Medical Xpress

Study explores how the perception of internal bodily signals influences the concept of self
Interoceptive constraints on each dimension of the concept of self. The colored arrows show the known influences of specific interoceptive signals on specific facets of the self-concept. Truncated lines with no terminal arrow indicate hypothetical links between interoception and self-concept that are yet to be investigated. Credit: Monti et al.

In contrast with other animal species on Earth, over the course of their life, humans can develop a fairly clear idea of who they are as individuals and what sets them apart from others. This abstract concept of self is known to be fragmented and fuzzy in individuals with certain psychiatric disorders, including schizophrenia, borderline personality disorder and dissociative identity disorder.

Researchers at the Aglioti Lab, part of Sapienza University of Rome, recently authored a review paper examining experimental evidence suggesting that the birth, maintenance and loss of this abstract concept of self are deeply tied to what is known as interoception: an individual’s sense of their own internal physiological signals.

“A couple of years ago, we discovered a new bodily illusion here at the Aglioti Lab,” Alessandro Monti, one of the researchers who carried out the study, told Medical Xpress. “This ’embreathment’ illusion, as we called it, suggests that your concept of yourself (i.e., who you think you are) is partly shaped by feelings that come from your viscera, particularly from the heart and the lungs. We discussed our work with Prof. Anna Borghi, a good friend and colleague of ours.”

When Monti and his colleagues started discussing the illusion that they observed with Prof. Borghi, they realized that the concept of self has several dimensions, ranging from mundane and material to social and spiritual. They thus decided to investigate the illusion more in depth and try to better understand what it suggested about people’s concept of self.

“Together with Prof. Borghi, we explored a number of questions, such as: Does the perception of internal bodily signals (also known as ‘interoception’) influence all these dimensions of the self, or just some of them? Do specific organs influence specific facets of the self-concept? And what happens when this perception goes awry? In the end, we decided to write a review to answer these questions and frame our experimental results in a bigger picture,” Monti said.

The starting point for the researchers’ paper was a classical partition of the self delineated by William James. Essentially, in his book Principles of Psychology, James delineates four distinct layers of the self, namely the material self (the concept of oneself as a material being), the social self (the concept of oneself as a member of society), the spiritual self (the concept of oneself as a moral person), and the pure Ego (the concept of oneself as a thinking and acting subject).

“We reviewed a series of experiments linking participants’ interoception, measured through questionnaires and physiological recordings, to these four layers of their self-concept,” Monti said. “We included studies both on healthy participants and on psychiatric patients with a distorted or fragmented sense of self—such as those who experience depersonalisation, schizophrenia or eating disorders.”

For both the studies involving healthy subjects and those focusing on psychiatric patients, the researchers examined whether one’s concept of self was more likely to include features associated with stronger physiological signals. The results of their analyses suggest that the most intimate and invariable features of people’s concept of self were those that were, quite literally, closest to the heart (i.e., those most influenced by interoceptive signals). In other words, people’s abstract concept of self appears to be closely shaped by their perception of signals originating from their body. More specifically, past studies suggest that those with a stronger and more stable concept of self are more attuned to their inner bodily signals, particularly their heartbeat and breath, and are less prone to sensory illusions.

“While the concept of self is related also to transient sensory and motor experiences, we claim that it is the cyclic physiology of the viscera that provides the self-concept with a firm foundation, contributing to its stability and sanity over time by making it less permeable to external influences,” Monti said. “We argue that this stabilizing role of interoception on the self-concept is not limited to the material self, but also extends to the social and spiritual self.”

The overarching conclusion of the recent review paper authored by Monti and his colleagues is that humans’ abstract concept of self is not merely embodied; it is deeply embodied. In the future, this observation could have important implications for the development of treatment strategies for psychiatric patients with a fragmented or hindered concept of self.

“We are currently extending our research in two directions,” Monti said. “On the one hand, we believe that the link between interoception and disorders in which the self-concept is fragmented or loose deserves further attention. Thus, we are collaborating with other scientists and clinicians to assess the sense of self of patients suffering from a variety of medical and psychological conditions, to see whether interoceptive training may help them regain a more stable picture of themselves.”

In addition to exploring the link between interoception and a fragmented concept of self, the recent review paper highlighted a gap in existing literature that could be filled by future studies. More specifically, Monti and his colleagues found that currently little is known about the role of the gut (i.e., the gastrointestinal tract) in defining people’s sense of self.

“We are working hard to close this gap,” Monti added. “In fact, we have recently posted a preprint with exciting new data that support the idea that the stomach and the intestine are also influential markers of our sense of self.”




More information: The inside of me: interoceptive constraints on the concept of self in neuroscience and clinical psychology. Psychological Research (2021). DOI: 10.1007/s00426-021-01477-7.

The “embreathment” illusion highlights the role of breathing in corporeal awareness. Journal of Neurophysiology (2020). DOI: 10.1152/jn.00617.2019.

Gut markers of bodily self-consciousness. bioRxiv. DOI: 10.1101/2021.03.05.434072.

https://www.psychologytoday.com/us/blog/the-new-brain/202106/how-does-muscle-memory-work

R. Douglas Fields Ph.D.

The New Brain

How Does “Muscle Memory” Work?

Why taking breaks speeds learning.

Posted June 24, 2021 |  Reviewed by Jessica Schrader

KEY POINTS

  • Muscle memory refers to developing a new skill through practice.
  • A recent study shows that interspersing short breaks between repetitions encodes skill memories better than back-to-back practice sessions.
  • The study also found that instant replay between practice sessions flashes extremely rapidly through the brain. 

Building “muscle memory,” that is, developing a new skill through practice, does not work the way you probably believe, according to a new study published in the June 8 issue of Cell Reports. You’ve no doubt had the frustrating experience, when learning a new skill, of continuing to flub it despite repeated attempts. Yet if you put the challenge aside for a bit and come back to it later, you find you are now much more proficient. In learning a new skill, it turns out that the breaks between repetitions are where the action is. By monitoring how neural activity in the brain changes during learning of a new skill, researchers report that mental “instant replay” after each performance is critical to perfecting the skill. Moreover, these mental flashbacks have been missed by scientists previously for a surprising reason.

You can practice a difficult piece of music at the piano over and over again, but the enduring memory to execute that performance is not laid down while you are tickling the ivories. Learning begins after you lift your fingers off the keys and take five. New research monitoring brain activity reveals that the same neural networks that are coordinately activated during a practice session automatically replay the same sequence mentally during the breaks between repetitions. This accounts for why interspersing short breaks between repetitions encodes skill memories much better than doggedly repeating the same number of practice sessions back-to-back.

Why, then, is this essential post-performance mental rehearsal not obvious to us as we struggle to learn a new skill? It turns out that the “mental instant replay” flashes through the brain at warp speed—20 times faster than the original experience.

In this new study, the experimental participants were tasked with typing a sequence of five numbers displayed on a computer screen as quickly as possible. To make the task more challenging, the test subjects were required to type the sequence with their left hand if they were right-handed (and vice versa), using the pinky for the first number, the ring finger for the second, the middle finger for the third, and the index finger for the fourth. Thumbs were not allowed, so the pinky also typed the fifth digit. The task is not unlike a guitarist learning to land the correct left-hand fingers while mastering a new guitar solo. The participants were given 10-second trials, interspersed with 10-second rests, and asked to type the sequence as fast and as accurately as possible during each practice period. Naturally, their skill improved with each trial, so that by the 36th round they could tap out the sequence correctly in a flash and with much less conscious effort than at first.

The researchers used magnetoencephalography (MEG) to monitor brain activity. MEG works much like the more familiar EEG that detects the brain’s electrical activity through an array of electrodes on the scalp, but MEG detects the magnetic component of the brain’s electromagnetic responses instead of the electrical component. Magnetic fields pass through the brain and skull with much less distortion than electric fields, which follow the path of least resistance through tissue, so MEG enables researchers to better pinpoint where neural activity is flowing inside the brain. In this study, 275 magnetic sensors positioned all over the scalp enabled researchers to identify the location of electrically active circuits in the brain with great precision.

Without any fancy instrumentation, a tell-tale clue in the data tipped off researchers to how the brain was learning this new skill. Simply by measuring how much performance improved over the 36 training sessions, it was clear that skill increased in a step-wise manner, with each repetition a bit better than the previous one. That is, speed and accuracy did not increase during each 10-second practice session; they ratcheted up after each 10-second rest period, so that performance was incrementally better in the next trial. Clearly, the improvement was the result of whatever the brain was doing during the rest periods.
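That step-wise pattern can be sketched with a toy model (all numbers here are invented for illustration; this is not the study’s data or analysis): performance stays flat during each 10-second practice trial and steps up only during the rests.

```python
# Toy model of "micro-offline" skill gains: improvement accrues during
# rest periods, not during practice. Every parameter below is hypothetical.

n_trials = 36
within_trial_gain = 0.0   # no improvement while actually typing
rest_gain = 0.5           # (made-up) sequences/10 s gained per rest period

speed_at_start = []       # speed entering each trial
speed_at_end = []         # speed leaving each trial
speed = 3.0               # (made-up) initial correct sequences per trial

for trial in range(n_trials):
    speed_at_start.append(speed)
    speed += within_trial_gain          # flat during practice itself
    speed_at_end.append(speed)
    speed += rest_gain                  # the step up happens at rest

# Each trial starts faster than the previous one ended: the signature
# of gains accruing between, not within, practice sessions.
assert all(s2 > e1 for e1, s2 in zip(speed_at_end, speed_at_start[1:]))
```

The model merely encodes the article’s observation; the study inferred the same structure by comparing speed at the end of one trial with speed at the start of the next.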

Figure: Performance increases after rest intervals, not during practice sessions. Source: Buch et al., June 8, 2021, Cell Reports (open access).

The full neural activity sequence detected by MEG during each trial session was replayed in the brain during each 10-second rest period, but the instant replays zipped through the brain 20 times faster than in the live performance. The neural activity screamed through parts of the brain known to be important for memory and motor skill, notably in the hippocampus and sensorimotor cortex, among other regions.

By comparing improvement in this skill with the speed of the mental flashbacks during rest periods, the researchers found that as the rate of mental replays increased over the 36 training sessions, the actual skill of typing out the sequence improved proportionately. No need for a recital: researchers could predict how well a subject would perform simply by measuring how rapidly the mental instant replay flowed through their brain.
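The shape of that analysis, correlating replay rate with skill, can be illustrated with synthetic numbers (nothing below comes from the study; the series and their coupling are fabricated purely to show the computation):

```python
# Illustrative only: fake a replay-rate series that drifts upward across
# 36 trials, let "skill" track it with noise, and compute Pearson's r.
import random

random.seed(0)
n = 36
replay_rate = [1.0 + 0.05 * t + random.gauss(0, 0.1) for t in range(n)]
skill = [2.0 + 1.5 * r + random.gauss(0, 0.2) for r in replay_rate]

def pearson_r(x, y):
    """Plain Pearson correlation coefficient, no external libraries."""
    mx, my = sum(x) / len(x), sum(y) / len(y)
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    vx = sum((a - mx) ** 2 for a in x)
    vy = sum((b - my) ** 2 for b in y)
    return cov / (vx * vy) ** 0.5

r = pearson_r(replay_rate, skill)
# By construction (skill is a noisy linear function of replay_rate),
# r comes out strongly positive.
```

A strong positive r in the real data is what let the researchers predict performance from replay speed alone; here the correlation is positive only because we built the fake skill series from the fake replay series.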

The fact that learning improves when rest intervals are provided, as opposed to slogging through the same number of practices back-to-back, has been well established. The importance of sleep for consolidating enduring memories and new skills is likewise well documented. In animal experiments, rats learning to negotiate a new maze replay the experience in their brain activity during sleep, which consolidates the experience into memory. But now we know that a similar process takes place very rapidly while we are awake, in the intervals between practicing. This study explains, at the level of brain function, why taking breaks for offline mental processing is necessary for learning, and it adds the important finding that instant replay between practice sessions flashes extremely rapidly through the brain. All of this was easily missed previously, because mental replay at 20 times normal speed is much too fast to play out as a conscious mental rehearsal.

So “take it again from the top,” but really, it is chilling between bars that lays down the tracks in your brain.