https://phys.org/news/2020-04-bad.html

Why bad smells stick around and how to eliminate them

The key to controlling odor is knowing what stage you want to target. Credit: Shutterstock

Ever wondered why something smells the way it does—good or bad—and why some odors just hang around no matter what you do to get rid of them?

Dr. Ruth Fisher, lecturer in the UNSW School of Civil and Environmental Engineering, has the answers and offers tips for controlling those bad smells.

She works in the UNSW Odor Laboratory, and much of her research has been into the production and emission of complex odors from biosolids at wastewater treatment plants.

“Biosolids are the solids remaining after the wastewater treatment process. They’re pretty smelly, but they have lower levels of contaminants and after processing aren’t hazardous,” Dr. Fisher said.

“Biosolids are a really good source of organic matter and nutrients. So, in Australia we apply a lot of biosolids to land.

“Our research aims to understand what the smells are, how they’re formed, how we can vary the process to make them less smelly and how we can best communicate with communities who live near these treatment plants.”

Dr. Fisher said the same principles of odor control applied whether it was in an industrial or residential setting.

“We want to understand the state of the odor through its different phases because that helps us to design process changes and informs the best treatments to get rid of it, or reduce it,” she said.

Odor is about perception

Dr. Fisher said odor was a person’s sensorial response when breathing in a volatile chemical compound.

“Something in the air binds to receptors in our nose and the odor is the signal that is sent to our brain,” she said.

“Odors are made up of lots of different compounds which can produce slightly different signals. Typically, a compound that elicits a response is called an odorant.

“But one of the really intriguing things that relates to odor perception is the great variation in how different people detect and perceive odors; for example, a sommelier or perfumer would be very good at isolating different characters in odors, while other people might be highly sensitive to odors or not at all.

“Then, some people might lack receptors for certain types of odorants: that’s called having an anosmia.”

Dr. Fisher said there were also different ways to describe odors—by the character (for example, chocolate or pine), hedonic tone (the lower the tone the more unpleasant the odor), intensity (the strength of the odor), and the concentration (the more diluted, the more difficult to detect).
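Those four descriptors amount to a small record type. As a concrete illustration only, here is one way to capture them in code; the field names, scales, and example values are my own, not a standard from odor science:

```python
from dataclasses import dataclass

@dataclass
class OdorDescription:
    """Illustrative record of the four descriptors listed above."""
    character: str        # what it smells like, e.g. "chocolate" or "pine"
    hedonic_tone: float   # pleasantness; lower means more unpleasant
    intensity: float      # perceived strength of the odor
    concentration: float  # how diluted it is; harder to detect when low

# Example: a faint but pleasant pine scent (values are invented)
sample = OdorDescription(character="pine", hedonic_tone=2.5,
                         intensity=1.0, concentration=30.0)
print(sample)
```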

Control odor at each stage

Dr. Fisher said the key to minimizing odor was knowing what stage of the transmission process you are targeting, because different methods can be applied during the odor’s development.

“Different interventions, for example, air freshener products, target different aspects of the process,” she said.

“The first stage is formation: how the odorant is being produced. The easiest way to stop an odor is to prevent it from being formed.

“In industrial settings, typically this involves process changes to limit microbial growth because a lot of putrid odors are due to microbes, for example, volatile fatty acids that smell like feet are formed from the microbial degradation of carbohydrates.”

Dr. Fisher said the second step was controlling how the odorant was emitted—for example, this could be as simple as closing doors in an industrial setting when odorous products were being unloaded, such as garbage.

“We can also remove odors using processes such as activated carbon filters, specialized scrubbers or other filtration methods, which are all large-scale industrial processes,” she said.

“The third stage in minimizing odor is dilution during transmission. In an industrial setting, for example, you can vent odors through very tall stacks; this is done with sewer pipes in urban areas because by the time the odor reaches the ground it has been diluted and its concentration is so low that you’re less likely to smell it.

“Having a buffer zone around industrial sites, such as landfill, is another common method, so people cannot live too close to these sources of odor.”

Dr. Fisher said the fourth and last stage of minimizing odor was controlling how people perceived it.

“So, you’ve tried to reduce the odor with the other methods, but how do you stop your nose from detecting it—how do you trick your sense of smell or mask the odor?” she said.

“This method is also commonly used in industry, for example, big sites like landfills or wastewater treatment plants have nozzles along their boundaries which spray masking agents: scents with a nice hedonic tone, like floral or citrus.

“However, these masking agents don’t always work; sometimes, they result in strange, more unpleasant combinations of odors, or the masking agent itself can annoy people—if there’s a particularly cloying floral scent, for example.”

Minimize odor in your environment

Dr. Fisher said the same odor control methods used in industrial settings could also apply in the home.

“So, following the process of how odors are produced and transmitted, the first step is to prevent the formation of the odor: for example, restrict the growth of mold in your home through appropriate ventilation, regular disinfecting of surfaces and minimizing moisture—don’t leave your carpet damp or wet for long—because when odorant compounds form, they’re really hard to get rid of,” she said.

“Be selective about what products you bring into your home. For example, don’t cook smelly food inside the house—do it outside if possible.”

Dr. Fisher said the next stage in controlling odor at home was the emission and transmission phases: dilute the odor concentration, for example, through improving ventilation or using a fan with activated carbon filtration.

“Such fans pass the air through an activated carbon bed—so, they will remove some of the odorant compounds but not necessarily all of them, and you will need to regularly replace the activated carbon to keep it effective,” she said.
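The effect of that kind of ventilation can be pictured with the standard well-mixed-room model, in which an odorant’s concentration decays exponentially with the air-change rate. A minimal sketch, with illustrative numbers rather than anything from Dr. Fisher’s research:

```python
import math

def concentration_after(c0: float, ach: float, hours: float) -> float:
    """Well-mixed room model: C(t) = C0 * exp(-ACH * t).

    c0    -- initial odorant concentration (arbitrary units)
    ach   -- air changes per hour provided by ventilation
    hours -- elapsed time in hours
    """
    return c0 * math.exp(-ach * hours)

# Doubling ventilation makes a big difference after a couple of hours
for ach in (1.0, 2.0):
    c = concentration_after(100.0, ach, hours=2.0)
    print(f"{ach} air changes/hour -> {c:.1f} units left after 2 hours")
```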

“In the bathroom, closing the toilet lid before flushing can help to control the emission of odor, or you could use a toilet bowl spray which forms a layer on top of the water—thereby limiting the emission of the odorant from the liquid phase.

“But again, these methods will limit some odorants, not all of them, because an odor is made up of many different types of compounds and is quite challenging to control.”

Dr. Fisher said even fabrics were important in minimizing odor: natural fibers were best if you wanted to reduce bad smells.

“Be careful with the types of fabrics or surfaces in your home because odorants are attracted to certain materials,” she said.

“For example, people may experience more body odor if they’re wearing synthetic fibers because the odorant absorbs into those fibers and it’s harder to remove the odor through washing than with natural fibers.

“Cotton fabric ‘breathes’ better, so there’s less microbial activity.”

Dr. Fisher’s final suggestion could apply to the workplace—the consideration of a “no perfume” policy.

“People can be sensitive to the smell of things like perfume or cologne. I’m not sure if it’s just because I work in the Odor Lab, but sometimes you walk past people and you’re hit with how overpowering their perfume or body spray is,” she said.

“This reinforces how odor is an individual’s perception of these compounds: everybody perceives them slightly differently, and that can be due to inherent human variability or even cultural differences.

“So, it’s worth having open communication with your household or colleagues about what smells people like or don’t like in order to solve these problems, to avoid resorting to expensive technical solutions.”




https://www.wired.com/story/after-50-years-of-effort-researchers-made-silicon-emit-light/

After 50 Years of Effort, Researchers Made Silicon Emit Light

We’re approaching the speed limit for electronic computer chips. If we want to go faster, we’ll need data-carrying photons—and some tiny lasers.
This metal organic vapor phase epitaxy machine is what Eindhoven University of Technology physicist Erik Bakkers and his colleagues used to grow hexagonal silicon alloy nanowires. Photograph: Nando Harmsen/TU/e

Nearly fifty years ago, Gordon Moore, the cofounder of Intel, predicted that the number of transistors packed into computer chips would double every two years. This famous prediction, known as Moore’s law, has held up pretty well. When Intel released the first microprocessor in the early 1970s, it had just over 2,000 transistors; today, the processor in an iPhone has several billion. But all things come to an end, and Moore’s law is no exception.
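Moore’s projection is easy to sanity-check with a little arithmetic. A quick sketch, assuming the widely cited figure of roughly 2,300 transistors for the 1971 Intel 4004 and an idealized two-year doubling:

```python
def moores_law(start_count: float, start_year: int, year: int,
               doubling_years: float = 2.0) -> float:
    """Idealized Moore's law: the count doubles every `doubling_years`."""
    return start_count * 2 ** ((year - start_year) / doubling_years)

# Intel 4004 (1971): ~2,300 transistors; project forward to 2020
print(f"{moores_law(2300, 1971, 2020):,.0f} transistors")
```

The idealized curve lands in the tens of billions, the same order of magnitude as the largest chips shipping in 2020.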

Modern transistors, which function as a computer’s brain cells, are only a few atoms long. If they are packed too tightly, that can cause all sorts of problems: electron traffic jams, overheating, and strange quantum effects. One solution is to replace some electronic circuits with optical connections that use photons instead of electrons to carry data around a chip. There’s just one problem: Silicon, the main material in computer chips, is terrible at emitting light.

Now, a team of European researchers says they have finally overcome this hurdle. On Wednesday, a research team led by Erik Bakkers, a physicist at Eindhoven University of Technology in the Netherlands, published a paper in Nature that details how they grew silicon alloy nanowires that can emit light. It’s a problem that physicists have grappled with for decades, but Bakkers says his lab is already using the technique to develop a tiny silicon laser that can be built into computer chips. Integrating photonic circuits on conventional electronic chips would enable faster data transfer and lower energy consumption without raising the chip’s temperature, which could make it particularly useful for data-intensive applications like machine learning.

“It’s a big breakthrough that they were able to demonstrate light emission from nanowires made of a silicon mixture, because these materials are compatible with the fabrication processes used in the computer chip industry,” says Pascal Del’Haye, who leads the microphotonics group at the Max Planck Institute for the Science of Light and was not involved in the research. “In the future, this might enable the production of microchips that combine both optical and electronic circuits.”

When it comes to getting silicon to spit out photons, Bakkers says it’s all about the structure. A typical computer chip is built upon a thin layer of silicon called a wafer. Silicon is an ideal medium for computer chips because it is a semiconductor—a material that only conducts electricity under certain conditions. This property is what allows transistors to function as digital switches even though they don’t have any moving parts. Instead, they open and close only when a certain voltage is applied to the transistor.

Elham Fadaly, a PhD student at Eindhoven University of Technology and the first author on the new paper, operates the machine that was used to grow the silicon alloy nanowires.

Within the wafer, the silicon atoms are arranged as a cubic crystal lattice that allows electrons to move through the material under certain voltage conditions. But that cubic structure doesn’t let electrons shed their energy as photons, which is why silicon is so poor at emitting light. Physicists have hypothesized that changing the shape of the silicon lattice so that it is composed of repeating hexagons rather than cubes would allow the material to emit photons. But actually creating this hexagonal lattice proved incredibly challenging, because silicon wants to crystallize in its most stable, cubic form. “People have been trying to make hexagonal silicon for four decades and have not succeeded,” says Bakkers.

Bakkers and his colleagues at Eindhoven have been working on creating a hexagonal silicon lattice for about a decade. Part of their solution involved using nanowires made of gallium arsenide as a scaffold to grow nanowires made of the silicon-germanium alloy that have the desired hexagonal structure. Adding germanium to the silicon is important for tuning the wavelength of the light and other optical properties of the material. “It took longer than I expected,” says Bakkers. “I expected to be here five years ago, but there was a lot of fine tuning of the whole process.”
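The reason adding germanium tunes the emission wavelength comes down to the basic relation between a band gap and the photon it yields, lambda = hc/E. A sketch with illustrative band-gap values (the exact numbers for hexagonal SiGe vary with composition and are not taken from the paper):

```python
HC_EV_UM = 1.23984  # Planck constant times speed of light, in eV·µm

def emission_wavelength_um(bandgap_ev: float) -> float:
    """Wavelength (in micrometres) of a photon emitted across a band gap."""
    return HC_EV_UM / bandgap_ev

# Illustrative gaps: a smaller gap (more germanium) gives a longer wavelength
for eg in (0.8, 0.5, 0.35):
    print(f"E_gap = {eg:.2f} eV -> wavelength ~ {emission_wavelength_um(eg):.2f} µm")
```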

To test if their silicon alloy nanowires emit light, Bakkers and his colleagues blasted them with an infrared laser and measured the infrared light that the wires emitted in response. The amount of energy they detected coming out of the nanowires as infrared light was close to the amount of energy the laser dumped into the system, which suggests that the silicon alloy nanowires are very efficient light emitters.

The next step, says Bakkers, will be to use the technique they’ve developed to create a tiny laser made from the silicon alloy. Bakkers says his lab has already started work on this and may have a working silicon laser by the end of the year. After that, the next challenge will be figuring out how to integrate the laser with conventional electronic computer chips. “That would be very serious, but it’s also difficult,” Bakkers says. “We’re brainstorming to find a way to do this.”

Bakkers says he doesn’t anticipate that future computer chips will be entirely optical. Within a component, such as a microprocessor, it still makes sense to use electrons to move the short distances between transistors. But for “long” distances, such as between a computer’s CPU and its memory or between small clusters of transistors, using photons instead of electrons could increase computing speeds while reducing energy consumption and removing heat from the system. Whereas electrons must transmit data serially, one electron after the other, optical signals can transmit data on many channels at once as fast as physically possible—the speed of light.

Because photonic circuits can quickly shuffle large amounts of data around a computer chip, they are likely to find widespread use in data-intensive applications. For example, they could be a boon to the computers in self-driving cars, which have to process an immense amount of data from onboard sensors in real time. Photonic chips may also have more mundane applications. Since they won’t generate as much heat as electronic chips, data centers won’t need as much cooling infrastructure, which could help reduce their massive energy footprint.

Researchers and companies have already managed to integrate lasers into simple electronic circuits, but the processes were too complex and expensive to implement at scale, so the devices have only had niche applications. In 2015, a group of researchers from MIT, UC Berkeley, and the University of Colorado successfully integrated photonic and electronic circuits in a single microprocessor for the first time. “This demonstration could represent the beginning of an era of chip-scale electronic–photonic systems with the potential to transform computing system architectures, enabling more powerful computers, from network infrastructure to data centres and supercomputers,” the researchers wrote in the paper.

By demonstrating its application in the main ingredient in conventional computer chips, Bakkers and his colleagues have taken another major step toward practical light-based computing. Electronic computer chips have faithfully served our computing needs for half a century, but in our data-hungry world, it’s time to kick our processors up to light speed.

https://medicalxpress.com/news/2020-04-frailty-impacts-blood.html

Leaving its mark: How frailty impacts the blood

22 different blood metabolites correlating to frailty, cognitive impairment and hypomobility were discovered. Although some of these metabolites were relevant to more than one disorder, frailty was found to have a distinct metabolomic profile. Credit: OIST

Globally, human society is aging. A side-effect of this is that age-related disorders, such as frailty, are becoming increasingly common. Frailty includes not only physical disabilities, but also a decline in cognitive function and an increase in various social problems. Among those aged 65 and over, an estimated 120 million people worldwide live with the disorder.

But due to their small range of activities, people who suffer from frailty are often hidden. They tend to stay at home and out of the public eye. They can struggle to walk, suffer from cognitive decline, and find essential tasks, like putting out the rubbish or cleaning the house, very difficult. As such, frail people require more help than their healthy peers. And although there has been some indication that frailty may be reversible, no such interventions have yet been established.

The first step to curing frailty is to find an efficient way to diagnose the disorder. Researchers from the G0 Cell Unit at the Okinawa Institute of Science and Technology Graduate University (OIST), alongside collaborators at the Geriatric Unit at Kyoto University, have taken a close look at the blood of both frail and non-frail patients using a technique called metabolomics. They’ve found 15 metabolites whose levels in the blood correlate with frailty. Their findings, published in PNAS, have shed light on what causes the disorder and how we might reverse it.

Measuring frailty

For this study, the researchers looked at 19 patients, all above the age of 75, and measured whether they suffered from frailty through three clinical analysis tests—the Edmonton frail scale (EFS), the Montreal cognition assessment (MoCA-J), and the Timed Up and Go test (TUG).

“Both the EFS and the MoCA-J gave us an indication of the individuals’ cognitive function, whereas the TUG allowed us to assess their motor ability,” said Professor Mitsuhiro Yanagida, who runs the G0 Cell Unit at OIST. “Between them, they also captured mood, short-term memory and other indicators, so they gave us a clear idea of who suffered from the disorder.”

By using these three tests, the researchers found that nine of the 19 individuals fit into the category of being frail, whereas the other ten did not. However, some of the non-frail individuals still suffered from cognitive impairment or hypomobility, a syndrome which hinders movement.

Identifying markers in the blood

Next, the researchers took blood samples from the 19 patients and had a close look at the metabolites—small molecules of amino acids, sugars, nucleotides and more that make up our blood. They tested 131 metabolites and found that 22 of them correlated with frailty, cognitive impairment and hypomobility. Patients who suffered from these disorders tended to have lower levels of most of these metabolites.
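In outline, the screen described here is a correlation analysis: for each of the 131 metabolites, test whether its blood level tracks a frailty measure across the 19 patients. A minimal sketch of that kind of screen on hypothetical data (the variable names, data, and significance threshold are mine, not the study’s):

```python
import numpy as np
from scipy.stats import spearmanr

rng = np.random.default_rng(0)
n_patients, n_metabolites = 19, 131
levels = rng.normal(size=(n_patients, n_metabolites))  # hypothetical blood levels
frailty_score = rng.normal(size=n_patients)            # hypothetical EFS-like score

# Correlate each metabolite with the frailty score
hits = []
for j in range(n_metabolites):
    rho, p = spearmanr(levels[:, j], frailty_score)
    if p < 0.05:  # naive cutoff; a real analysis would correct for
        hits.append((j, rho))  # testing 131 hypotheses at once

print(f"{len(hits)} metabolites pass the naive threshold")
```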

“Blood metabolites are useful as biomarkers for finding, diagnosing and observing symptoms of frailty,” said Dr. Takayuki Teruya, Research Unit Technician in the G0 Cell Unit. “By using a simple blood test, we could start to diagnose frailty early on and lengthen healthy life expectancies by early intervention.”

The 22 metabolites identified included antioxidant metabolites, amino acids and muscle or nitrogen related metabolites. Fifteen of them correlated with frailty, six indicated cognitive impairment and twelve indicated hypomobility. The metabolites that correlated with frailty overlapped with five of those that indicated cognitive impairment and six that indicated hypomobility.

These metabolites include some of the aging markers in healthy people reported by the same group in 2016. This suggests that the severity of biological aging, which varies between individuals, could be monitored from an early stage of old age by measuring blood biomarkers.

“Notably, we found that levels of the antioxidant ergothioneine decreased in the frail patients,” said Professor Yanagida. “This metabolite is neuroprotective, so lower levels mean that people who suffer from frailty are more vulnerable to oxidative stress.”

The research indicates that frailty has a distinct metabolomic profile when compared to other age-related disorders. By demonstrating a link between these metabolites and the symptoms of the disorder, these findings could lead to a different approach to diagnosing and treating frailty.

The researchers in the G0 Cell Unit at OIST collaborated with Dr. Masahiro Kameda and Professor Hiroshi Kondoh in the Geriatric Unit, Graduate School of Medicine at Kyoto University. Kyoto University and OIST have jointly applied for a patent for these findings.




More information: Masahiro Kameda et al., “Frailty markers comprise blood metabolites involved in antioxidation, cognition, and mobility,” PNAS (2020). www.pnas.org/cgi/doi/10.1073/pnas.1920795117

https://www.psypost.org/2020/04/viewing-tv-for-more-than-3-5%e2%80%89hours-per-day-is-associated-with-cognitive-decline-in-older-age-56350

Viewing TV for more than 3.5 hours per day is associated with cognitive decline in older age

It’s hard to imagine a world today without television and its immediate descendants like Netflix and Disney+. Considerable research has explored the effects of television on children, whose minds tend to be perceived as more susceptible to negative influence. In many ways, however, children are more resilient; it is in the aging adult population that impaired cognition is associated with increased all-cause mortality.

To better understand the extent to which television participates in cognitive decline in aging adults, researchers delved into data provided by the English Longitudinal Study of Aging (ELSA), focusing on roughly 3,600 adults aged 50 or above.

Unlike more creative pastimes that stimulate the brain and require active participation, or passive-stimulus pastimes like reading a book, television combines “strong rapidly-changing fragmentary dense sensory stimuli with passivity from the viewer.” This unique combination has been accompanied by conflicting results in previous studies, although links have been drawn with cognitive impairment, reduced memory, and increased risk of Alzheimer’s.

Previous research has used television as a proxy for sedentary behavior, confounding these related but distinct phenomena, and focused on excessive television watching (e.g. 6+ hours/day). To paint a picture that’s both clearer and more in line with television habits of the average aging adult, the authors controlled for sedentariness and used a more modest metric of 3.5 hours per day.

On top of self-reported television habits, ELSA contains data from repeated verbal memory tests, which the authors used as a measure of cognition. The study found that watching more than 3.5 hours of television per day was associated with reduced verbal memory, with a dose-response relationship (more hours of television were associated with poorer verbal memory). Furthermore, stronger initial verbal memory was associated with greater decline at follow-up. Finally, the authors found an important threshold effect: three hours of television per day was not associated with poorer cognition, but 3.5 hours was.
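To make the threshold idea concrete, here is a toy regression in which only hours above a 3.5-hour cut-point predict memory decline. The data are simulated and the coefficients invented; this illustrates the shape of a threshold model, not the authors’ actual analysis:

```python
import numpy as np

rng = np.random.default_rng(1)
hours = rng.uniform(0, 7, size=500)  # simulated daily TV hours
# Simulated memory decline: flat below 3.5 h/day, dose-response above it
excess = np.clip(hours - 3.5, 0, None)
decline = 0.8 * excess + rng.normal(0, 0.5, size=500)

# Fit: decline ~ intercept + slope * (hours above the 3.5 h cut-point)
X = np.column_stack([np.ones_like(hours), excess])
beta, *_ = np.linalg.lstsq(X, decline, rcond=None)
print(f"estimated slope above the threshold: {beta[1]:.2f} (true value 0.8)")
```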

Studies like this one help us to better understand how television affects our brains as we age, and can help safeguard against negative effects. The fact that the authors were able to define a threshold under which television viewing was found considerably less harmful is an important takeaway for the aging population.

The study, “Television viewing and cognitive decline in older age: findings from the English Longitudinal Study of Ageing“, was authored by Daisy Fancourt and Andrew Steptoe.

https://medicalxpress.com/news/2020-04-visual-events-milliseconds-brain-unnoticed.html

It’s now or never: Visual events have 100 milliseconds to hit brain target or go unnoticed

Slice of mouse brain showing neurons of the superior colliculus highlighted on one side. Credit: Lupeng Wang, Ph.D. and Charles Gerfen, Ph.D.

Researchers at the National Eye Institute (NEI) have defined a crucial window of time that mice need to key in on visual events. As the brain processes visual information, an evolutionarily conserved region known as the superior colliculus notifies other regions of the brain that an event has occurred. Inhibiting this brain region during a specific 100-millisecond window inhibited event perception in mice. Understanding these early visual processing steps could have implications for conditions that affect perception and visual attention, like schizophrenia and attention deficit hyperactivity disorder (ADHD). The study was published online in the Journal of Neuroscience.

“One of the most important aspects of vision is fast detection of important events, like detecting threats or the opportunity for a reward. Our result shows this depends on visual processing in the midbrain, not only the cortex,” said Richard Krauzlis, Ph.D., chief of the Section on Eye Movements and Visual Selection at NEI and senior author of the study.

Visual perception—one’s ability to know that one has seen something—depends on the eye and the brain working together. Signals generated in the retina travel via retinal ganglion cell nerve fibers to the brain. In mice, 85% of retinal ganglion cells connect to the superior colliculus. The superior colliculus provides the majority of early visual processing in these animals. In primates, a highly complex visual cortex takes over more of this visual processing load, but 10% of retinal ganglion cells still connect to the superior colliculus, which manages basic but necessary perceptual tasks.

One of these tasks is detecting that a visual event has occurred. The superior colliculus takes in information from the retina and cortex, and when there is sufficient evidence that an event has taken place in the visual field, neurons in the superior colliculus fire. Classical experiments into perceptual decision-making involve having a subject, like a person or a monkey, look at an image of vertical grating (a series of blurry vertical black and white lines) and decide if or when the grating rotates slightly. In 2018, Krauzlis and Wang adapted these classic experiments for mice, opening up new avenues for research.

“Although we have to be cautious translating data from mice to humans, because of the difference in visual systems, mice have many of the same basic mechanisms for event detection and visual attention as humans. The genetic tools available for mice allow us to study how specific genes and neurons are involved in controlling perception,” said Lupeng Wang, Ph.D., first author of the study.

In this study, Wang and colleagues used a technique called optogenetics to tightly control the activity of the superior colliculus over time. They used genetically modified mice so that they could turn neurons in the superior colliculus on or off using a beam of light. This on-off switch could be timed precisely, enabling the researchers to determine exactly when the neurons of the superior colliculus were required for detecting visual events. The researchers trained their mice to lick a spout when they’d seen a visual event (a rotation in the vertical grating), and to avoid licking the spout otherwise.

Inhibiting the cells of the superior colliculus made the mice less likely to report that they’d seen an event, and when they did, their decision took longer. The inhibition had to occur within a 100 millisecond (one-tenth of a second) interval after the visual event. If the inhibition was outside that 100-millisecond timeframe, the mouse’s decisions were mostly unaffected. The inhibition was side-specific: because the retinal cells cross over and connect to the superior colliculus on the opposite side of the head (the left eye is connected to the right superior colliculus and vice versa), inhibiting the right side of the superior colliculus depressed responses to stimuli on the left side, but not on the right.
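The timing result boils down to a simple rule: inhibition disrupts detection only if it overlaps the roughly 100-millisecond window after the event. A toy encoding of that rule (illustrative only, not the study’s analysis code):

```python
def detection_impaired(event_ms: float, inhib_start_ms: float,
                       inhib_end_ms: float, window_ms: float = 100.0) -> bool:
    """Toy rule from the result: inhibition matters only if it overlaps
    the critical window [event, event + window_ms]."""
    return inhib_start_ms < event_ms + window_ms and inhib_end_ms > event_ms

# Event (grating rotation) at t = 0 ms
print(detection_impaired(0, 20, 60))    # True: falls inside the 100 ms window
print(detection_impaired(0, 150, 200))  # False: too late, detection unaffected
```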

“The ability to temporarily block the transmission of neural signals with such precise timing is one of the great advantages of using optogenetics in mice and reveals exactly when the crucial signals pass through the circuit,” said Wang.

Interestingly, the researchers found that the deficits caused by superior colliculus inhibition were much more pronounced when the mice were forced to ignore things happening elsewhere in their visual field. Essentially, without the activity of the superior colliculus, the mice were unable to ignore distracting visual events. This ability to ignore irrelevant visual events is critical for navigating the complex visual environments of the real world.

“The superior colliculus is a good target for probing these functions because it has a neatly organized map of the visual world. And it is connected to less neatly organized regions, like the basal ganglia, which are directly implicated in a wide range of neuropsychiatric disorders in humans,” said Krauzlis. “It’s sort of like holding the hand of a friend as you reach into the unknown.”




More information: Lupeng Wang et al, A causal role for mouse superior colliculus in visual perceptual decision-making, The Journal of Neuroscience (2020). DOI: 10.1523/JNEUROSCI.2642-19.2020

Journal information: Journal of Neuroscience

https://www.sciencedaily.com/releases/2020/04/200407131501.htm

How serotonin balances communication within the brain

Optogenetics

Date:
April 7, 2020
Source:
Ruhr-University Bochum
Summary:
Our brain is steadily engaged in soliloquies. These internal communications are usually also bombarded with external sensory events. Hence, the impact of the two neuronal processes needs to be permanently fine-tuned to avoid imbalance. A team of scientists has revealed the role of the neurotransmitter serotonin in this scenario. They discovered that distinct serotonergic receptor types control the gain of the two streams of information in a separable manner.

Our brain is steadily engaged in soliloquies. These internal communications are usually also bombarded with external sensory events. Hence, the impact of the two neuronal processes needs to be permanently fine-tuned to avoid imbalance. A team of scientists at the Ruhr-Universität Bochum (RUB) revealed the role of the neurotransmitter serotonin in this scenario. They discovered that distinct serotonergic receptor types control the gain of the two streams of information in a separable manner. Their finding may facilitate new concepts for the diagnosis and therapy of neuronal disorders related to malfunction of the serotonin system. The study was published online in the open access journal eLife on 7 April 2020.

Impacting on different streams of information in the brain

“The following everyday example may sketch the task the brain needs to solve,” explains Dr. Dirk Jancke, head of the Optical Imaging Group at the Institute of Neural Computation: “Imagine sitting with your family at dinner while a heated debate goes on about how to organize some family affairs. Suddenly the phone rings; you pick up while the family discussion continues. For you to understand the caller correctly, the crowd in the background must speak more quietly or the caller needs to speak up. Thus, the loudness of the internal background conversation and of the external call must be properly adjusted to ensure non-interfering, which means separable, information transfer.” As in this anecdote, comparable brain processes involve serotonin.

Serotonin is a neurotransmitter of the central nervous system, known in common parlance as the “happy hormone” because it contributes to changes in brain state and is often associated with effects on mood. The RUB team’s study now demonstrates that serotonin also participates in the scaling of current sensory input and ongoing brain signals.

Controlling neuronal release of serotonin with light

The RUB neuroscientists discovered the underlying mechanisms in experiments that investigated cortical processing of visual information. For their study, they used genetically modified mice in which the release of serotonin could be controlled by light. This mouse line was developed by the group of Professor Stefan Herlitze, Department of General Zoology and Neurobiology, to enable specific activation of serotonergic neurons by an implanted light fiber.

Combining this technique with optical imaging, the RUB team found that increasing serotonin levels in the visual cortex leads to concurrent suppression of both ongoing activity and activity evoked by visual stimuli. Two types of serotonin receptors played distinct major roles here. “This was surprising to us, because both receptors are not only co-expressed in specific neurons but also widely distributed across different cell types in the brain,” says Zohre Azimi, first author of the study. The separable action of these receptors allows distinct modulation of internal brain communication and of evoked sensory signals. Low serotonin levels, as typically occur during sleep at night, favor internal brain communication and thus may promote important functions of dreaming. “Dysfunction in the interplay of these receptors, on the other hand, harbors the risk of an overemphasis of either internally or externally driven information channels,” says Jancke. For example, irregular 5-HT receptor distributions caused by genetic predisposition may manifest as an imbalanced perception of the inner and outside world, similar to what is seen in clinical pictures of depression and autism.
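The paper’s title, “separable gain control of ongoing and evoked activity,” suggests a compact way to picture the finding: each stream has its own gain, scaled independently, as if by a different receptor type. A schematic sketch with placeholder gain values (not measurements from the paper):

```python
import numpy as np

def cortical_response(ongoing, evoked, g_ongoing=1.0, g_evoked=1.0):
    """Schematic two-gain model: each stream is scaled independently,
    as if by a different serotonin receptor type."""
    return g_ongoing * np.asarray(ongoing) + g_evoked * np.asarray(evoked)

t = np.linspace(0, 1, 100)
ongoing = 0.5 * np.sin(8 * np.pi * t)  # internal "soliloquy"
evoked = np.where(t > 0.5, 1.0, 0.0)   # response evoked by a visual stimulus

baseline = cortical_response(ongoing, evoked)
with_5ht = cortical_response(ongoing, evoked, g_ongoing=0.4, g_evoked=0.7)
print(baseline.max(), with_5ht.max())  # serotonin suppresses both streams here
```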

Facilitating understanding of serotonin effects

The scientists hope that their findings contribute to a better understanding of how serotonin affects fundamental brain processes. In turn, their study may trigger future research in developing receptor-specific drugs that benefit patients with serotonin-related psychiatric diseases.


Story Source:

Materials provided by Ruhr-University Bochum. Note: Content may be edited for style and length.


Journal Reference:

  1. Zohre Azimi, Ruxandra Barzan, Katharina Spoida, Tatjana Surdin, Patric Wollenweber, Melanie D Mark, Stefan Herlitze, Dirk Jancke. Separable gain control of ongoing and evoked activity in the visual cortex by serotonergic input. eLife, 2020; 9. DOI: 10.7554/eLife.53552


https://www.sfu.ca/sfunews/stories/2020/04/relationships-and-physical-distancing–advice-from-an-sfu-expert.html

Relationships and physical distancing: advice from an SFU expert

April 06, 2020

As Canadians adhere to strict physical distancing measures imposed to slow the spread of COVID-19, interpersonal relationships are being forced to rapidly evolve. We asked SFU psychology expert Yuthika Girme to share advice on growing and maintaining strong relationships in an era of physical distancing.

Q: I met someone on Tinder. Can I still go on a date with them?

A: We want to avoid interacting with people in real life but we can definitely still date by drawing on the smart technology that is available to us. People should look at this as an opportunity to get creative and find new ways to connect and share. For example, couples can use Skype or Zoom to chat online or even watch Netflix together using Netflix Party without having to meet in person.

Q: Should ‘date night’ be cancelled?

A: Date nights should absolutely not be cancelled. Even though many couples are now spending more time together at home, they are likely preoccupied and not focusing as much time on each other. This makes having date nights — which are really about carving out time to reconnect with our partners — all the more important, even if it’s just time spent at home.

Q: My partner and I are stressed and arguing more often. Do you have any tips for dealing with conflict during this time? 

A: My biggest tip for couples finding that they are arguing more often right now is to be kind to one another. It can also be helpful to take a step back and ask yourself what a neutral party would think about the conflict you are having and write it down in a shared diary. Doing this can help put things into perspective. Past research has shown that this type of exercise can be a satisfying way for couples to share their thoughts and build stronger relationships.

Q: I have to live apart from my partner. Do you have tips for dealing with long distance relationships?

A: Just because we are physically distant from loved ones doesn’t mean we have to be emotionally distant from them. We are really fortunate that we live in a digital age that allows us to see, hear, and interact with other people. Taking part in online games or virtual activities can help maintain an emotional connection while also keeping things entertaining.

Q: I’m in a new relationship, how do I prevent it from fizzling out now that we are apart?

A: For people who have just recently met someone or have just started dating someone, get creative. Fun online games that couples can play together or with friends can help keep these interactions fun and lighthearted. A closeness-building exercise called 36 Questions for Increasing Closeness might be a great way to build deeper connections with people we have just met and are interested in romantically, but it’s also a great exercise for strengthening friendship bonds.

Q: Being in isolation, how can we continue to celebrate big events like anniversaries and birthdays? 

A: Regardless of whether people are celebrating a big event that includes a large virtual hangout or a smaller more intimate event like an anniversary, small changes to the home atmosphere can help make any celebration feel more special. Changing the layout of your home, putting up decorations, all those simple things can really make a difference, especially if we have been stuck in the same place for an extended period of time.

Q: I’m single. How do I avoid going stir crazy home alone?

A: People who are living alone or single might start to feel more isolated as time goes on. It’s really important for those people to reach out to their friends and family for a game or virtual hangout to maintain that social connection. Continuing with daily health and fitness routines can also help to keep people active and on a regular schedule.

Q: Do you have any final words of advice for couples coping with this crisis? 

A: We just need to be kind to ourselves and to each other, and also realize that this isn’t going to be forever. This is one hiccup that we are all facing together; we need to hang in there and know there are going to be better days when we can meet up with our family and friends in real life. This is a momentary challenge and there will be better days at the end of this if we all play our part.

Yuthika Girme

Yuthika Girme, assistant professor, psychology, leads the Supporting Relationships and Wellbeing Lab at Simon Fraser University. Her primary research goals involve identifying ways that people can effectively provide support and generate closeness in their romantic relationships. Girme also conducts research on understanding both the challenges and benefits of singlehood, with the goal of identifying factors that maximize single individuals’ wellbeing.

https://www.psychologytoday.com/us/blog/sense-time/202004/strategies-against-boredom-during-social-isolation

Strategies Against Boredom During Social Isolation

How to cope with social isolation and ’empty time’ during the pandemic.

Posted Apr 06, 2020

 

Last week we published our study led by Joanna Witowska from the University of Warsaw, Poland, on how the ability to self-regulate helps counter the effects of boredom while waiting alone in a room. The study was conducted in 2018, but it took us a year to get it published. It is a coincidence that just now, when so many people are living in isolation because of the pandemic, our research finally went to print.

In a previous Psychology Today post, “The Case of Boredom: Enduring Empty Time,” I argued that to be bored means to be bored with myself: “I cannot stand the presence of myself at this moment; I would prefer to be distracted from myself.” Therein lies the solution to boredom: it is all about self-regulation. Successful self-regulation involves several components, but in essence, it means that we are able to cope with unpleasant feelings that arise in a situation that is not under our control. Social isolation is such a situation. We have no direct physical contact with our friends and colleagues from work. We cannot distract ourselves in the local pub or cinema. What can we do?

A quick summary of our study: what did we do and what did we find? Of course, we could not isolate people for several weeks. We had 99 people wait alone in an empty room for 7.5 minutes and afterward asked them to report their impressions of the experienced time and their emotional reactions to the situation. As expected, boredom was associated with the feeling of time passing slowly; time stretched when people felt bored. We were also interested in individual differences related to self-control. Here, we found clear evidence that people who were better able to emotionally self-regulate in daily life, as assessed with a questionnaire, were less aware of time during the waiting situation and had lower levels of boredom.
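The headline statistic here is a pair of negative correlations: higher trait self-regulation went with less boredom and less time awareness. A minimal illustration with made-up numbers (the study’s real analysis is in the Acta Psychologica paper cited below):

```python
from scipy.stats import pearsonr

# Hypothetical scores for five participants
self_regulation = [2.1, 3.4, 4.0, 2.8, 4.6]  # questionnaire score
boredom = [4.5, 3.0, 2.2, 3.8, 1.9]          # boredom rating after waiting

r, p = pearsonr(self_regulation, boredom)
print(f"r = {r:.2f}")  # strongly negative: better self-regulators, less boredom
```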

What does self-control mean? We know the escape routes we can take to counteract boredom: we switch on the TV, surf the web, listen to our favorite music, or read a book. That is instrumental coping with a situation, which is important in times of a pandemic. In essence, it means you should get absorbed in an activity so that you are less aware of yourself and the passage of time.

However, in the waiting situation in our study, individuals had no means of distraction. We had taken away everything, the cellphone, books, anything that could be distracting, and the subjects were confined in a boring room. This requires emotional coping and cognitive restructuring. Individuals with more self-control can cope with the situation better, as they can self-regulate by briefly re-thinking the situation. For example, they could say to themselves, “We often complain that, due to our busy lives, we do not have enough time. Now, in the waiting room, just for a few minutes, I have time for myself, I can relax.” That is emotional coping.

The empty time experienced during social isolation triggers a number of emotions in many of us, such as frustration and annoyance, even anxiety and episodes of depression. Individuals with more self-control have the propensity to find solutions for themselves and their emotional reactions to a given situation.

Instrumental coping means, “Do something meaningful; find an activity that gets you going, a project with a meaningful goal.” When you have a future-oriented goal, you organize your life in little steps of activity on which you concentrate, away from your worrying self and time. Our previous research has shown that more future-oriented individuals feel that time passes more quickly in daily life.

Emotional coping means: “I can realize that this is a very special moment. There is a pause in my daily routine. I can think about what I have been doing over the last few years. Do I want to continue that way? I now have the chance to reconsider options in my life. How are my relations with other people? I can step back and eventually readjust my life.”

Have you ever wondered how astronauts or researchers in Antarctica spend their relative isolation over months and how they cope with frustration and boredom? Watch a fascinating talk by Anna Yusupova on the YouTube channel of the International Time Perspective Network.

Anna is a researcher in the Laboratory for Cognitive and Social Psychology at the Institute for Biomedical Problems in Moscow, Russia. She studies the effects of isolation on cosmonauts who have to endure many months aboard the International Space Station (ISS). How do the Russian cosmonauts cope with this absolute quarantine far from Earth? Nearly every minute is filled with routine activities. Empty time hardly occurs because their daily schedules are filled with so many future-oriented goals. They have only a little time for themselves between prescribed activities, when they can savor the moment by looking at planet Earth turning below them.

We are now all, more or less, astronauts in our confined spaces. Let’s use our time in a meaningful way and find out what we really want to do.

References

Witowska J, Schmidt S, Wittmann M (2020). What happens while waiting? How self-regulation affects boredom and subjective time during a real waiting situation. Acta Psychologica 205. https://doi.org/10.1016/j.actpsy.2020.103061

Wittmann M, Rudolph T, Linares Gutierrez D, Winkler I (2015). Time perspective and emotion regulation as predictors of age-related subjective passage of time. International Journal of Environmental Research and Public Health 12, 16027-16042.

https://www.msn.com/en-us/news/technology/artificial-intelligence-may-be-pandemic-lifesaver-one-day/ar-BB11Iuu8

Artificial intelligence may be pandemic lifesaver, one day

On December 30, researchers using artificial intelligence systems to comb through media and social platforms detected the spread of an unusual flu-like illness in Wuhan, China.

It would be days before the World Health Organization released a risk assessment and a full month before the UN agency declared a global public health emergency for the novel coronavirus.

Could the AI systems have accelerated the process and limited, or even arrested, the extent of the COVID-19 pandemic?

Clark Freifeld, a Northeastern University computer scientist working with the global disease surveillance platform HealthMap, one of the systems detecting the outbreak, said it remains an open question.

“We identified the early signals, but the reality is it’s hard to tell when you have an unidentified respiratory illness if it’s a really serious situation,” said Freifeld.

Dataminr, a real-time risk detection technology firm, said it delivered the earliest warning about COVID-19 on December 30 based on eyewitness accounts from inside Wuhan hospitals, pictures of the disinfection of the Wuhan seafood market where the virus originated and a warning by a Chinese doctor who later died from the virus himself.

Scientists are using machine learning to comb through thousands of research papers to find potential breakthroughs in treating the novel coronavirus ravaging the planet. © Lars Hagberg

“One of our biggest challenges is we tend to be reactive in these situations, it’s human nature,” said Kamran Khan, founder and chief executive of the Toronto-based disease tracking firm BlueDot, one of the early systems that flashed warning flags in December over the epidemic.

“Whenever you’re dealing with a new, emerging disease, you don’t have all the answers. Time is your most valuable resource; you cannot get it back.”

Khan, who is also a professor of medicine and public health at the University of Toronto, told AFP by telephone that the data showed “echoes of the SARS outbreak 17 years earlier, but what we didn’t know was how contagious this was.”

Nevertheless, AI systems have proven to be valuable in tracking epidemics by scouring a diverse array of sources ranging from airline bookings, Twitter and Weibo messages to news reports and sensors on connected devices.

– Humans in the loop –

Still, Freifeld said AI systems have limits, and the big decisions must still be made by humans.

“We use the AI system as a force multiplier, but we are committed to the concept of having humans in the loop,” he said.

AI and machine learning systems are likely to help the battle in several ways, from tracking the outbreak itself to speeding up drug testing.

“We can run simulations unlike we’ve ever done before, we understand biological pathways unlike we’ve ever understood before, and that’s all because of the power of AI,” said Michael Greeley of the equity firm Flare Capital Partners, which has invested in several AI medical startups.

But Greeley said it remains challenging to apply these technologies to sectors like drug delivery where the normal testing time can be years.

“There is extraordinary pressure on the industry to start using these tools even though they may not be ready for prime time,” he said.

According to Khan, AI is helping in the containment phase with systems that use “anonymized” smartphone location data to track the progression of the disease, find hotspots, and determine whether people are following “social distancing” guidelines.
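Hotspot detection from anonymized location pings can be as simple as binning points onto a coarse grid and counting per cell. A toy sketch of the idea (not any firm’s actual system; the coordinates are arbitrary):

```python
from collections import Counter

def hotspots(points, cell_deg=0.01, min_count=3):
    """Bin (lat, lon) points into coarse grid cells and flag dense cells."""
    cells = Counter((round(lat / cell_deg), round(lon / cell_deg))
                    for lat, lon in points)
    return {cell: n for cell, n in cells.items() if n >= min_count}

# Three nearby pings share one cell and get flagged; the distant one does not
pings = [(43.651, -79.347), (43.651, -79.348), (43.652, -79.347),
         (40.712, -74.006)]
print(hotspots(pings))
```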

Andrew Kress, CEO of the health technology firm HealthVerity, said it remains challenging to collect medical data for disease outbreaks while complying with patient privacy.

It’s possible to detect trends with signals such as pharmacy visits and sales of certain medications or even online searches, Kress said, but aggregating that has privacy implications.

“We need to have a real discussion about balance and utility around specific use cases and potentially the right kind of research to continue to figure out new ways to leverage some of these nontraditional data sources,” Kress said.

– Data mining –

AI systems are also being put to work scouring thousands of research studies for clues about which treatments might be effective.

Last week, researchers joined the White House in an effort to make available some 29,000 coronavirus research articles that can be scanned for data mining.

The effort brought together the Allen Institute for AI, Chan Zuckerberg Initiative, Microsoft, Georgetown University and others.

Through Kaggle, a machine learning and data science community owned by Google, these tools will be openly available for researchers around the world.

“It’s difficult for people to manually go through more than 20,000 articles and synthesize their findings,” said Kaggle CEO and co-founder Anthony Goldbloom.

“Recent advances in technology can be helpful here. We’re putting machine-readable versions of these articles in front of our community of more than four million data scientists. Our hope is that AI can be used to help find answers to a key set of questions about COVID-19.”
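A common first step for mining a corpus like this is TF-IDF indexing, which lets a researcher rank articles against a question. A bare-bones sketch with stand-in abstracts (illustrative only; the real dataset is the CORD-19 collection hosted on Kaggle):

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

abstracts = [
    "incubation period of the novel coronavirus estimated from travel data",
    "antiviral drug screening against the coronavirus main protease",
    "effect of school closures on influenza transmission",
]
query = ["what is the incubation period of COVID-19"]

vec = TfidfVectorizer(stop_words="english")
doc_matrix = vec.fit_transform(abstracts)  # index the corpus
scores = cosine_similarity(vec.transform(query), doc_matrix)[0]
print(abstracts[scores.argmax()])  # the incubation-period abstract ranks first
```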

https://www.techradar.com/news/iphone-12-pro-leak-shows-a-quad-lens-camera-with-a-new-ipad-pro-feature

iPhone 12 Pro leak shows a quad-lens camera with a new iPad Pro feature

The iPhone 12 Pro could have more lenses than the iPhone 11 Pro Max, above (Image credit: Future)

It looks like Apple could add a new camera lens to the iPhone 12 Pro and iPhone 12 Pro Max, as a sketch showing exactly that has reportedly been found in a leaked build of iOS 14.

The image, which was spotted by conceptsiphone and corroborated by numerous other sources and leakers, shows a square camera block in the top left corner of the phone, just like the iPhone 11 Pro has, but this time there are four lenses squeezed in rather than three.

While we can’t be certain what any of the lenses are, three of them look broadly the same as the 12MP main, telephoto and ultra-wide ones on the iPhone 11 Pro and iPhone 11 Pro Max, while the fourth (the grey one) looks a whole lot like the LiDAR (Light Detection and Ranging) scanner that Apple recently debuted on the iPad Pro 2020.

(Image credit: conceptsiphone)

This scanner can determine distance by measuring how long it takes light to reach an object and reflect back again, and it’s a feature that can power advanced augmented reality features and potentially improve Portrait mode.
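The distance arithmetic behind such a time-of-flight scanner is straightforward: the pulse travels out and back, so distance equals the speed of light times the round-trip time, divided by two. A quick sketch:

```python
SPEED_OF_LIGHT = 299_792_458.0  # metres per second

def tof_distance_m(round_trip_seconds: float) -> float:
    """Time-of-flight ranging: the light pulse covers the distance twice."""
    return SPEED_OF_LIGHT * round_trip_seconds / 2.0

# A 20-nanosecond round trip corresponds to an object about 3 metres away
print(f"{tof_distance_m(20e-9):.2f} m")
```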

We’d say it’s a likely inclusion too, as not only does this leak have a lot of backing, it would also seem odd for Apple to leave one of its tablets with a more advanced camera feature than a newer flagship phone. Plus, a number of previous leaks and rumors had pointed to some form of 3D or depth-sensing scanner being added.

While the addition of a LiDAR scanner is the main takeaway from this leaked image, it does also look as though the other three lenses might be slightly larger than on the iPhone 11 Pro range. So perhaps they will be improved in some way, with larger sensors or more megapixels.

We’d expect they’ll still take on main, telephoto and ultra-wide roles though, as those are the three most common and useful lens types on smartphones.

We probably won’t find out for sure for a long time yet, as the iPhone 12 range isn’t expected to land before September – and it might even land later, due to Covid-19 potentially slowing down development.

But we’d expect plenty more leaks and rumors in the meantime, and TechRadar will cover all the credible ones, so stay tuned for updates.