https://www.sciencealert.com/some-people-can-t-picture-things-in-their-mind-and-it-might-make-it-hard-for-them-to-remember

People Who Can’t See Things in Their Mind Could Have Memory Trouble Too, Study Finds

TESSA KOUMOUNDOUROS, 27 JUNE 2020

Not everyone can see pictures in their minds when they close their eyes and summon thoughts – an ability many of us take for granted.

While people have been aware of this phenomenon since the 1800s, it hasn’t been widely studied, and was only recently named ‘aphantasia’. This absence of voluntarily generated mental visual imagery is thought to be experienced by 2-5 percent of the population.

Recent studies suggest aphantasia is indeed a lack of visual imagery rather than the lack of awareness of having internal visual imagery – with some people experiencing loss of this ability after injuries.

Now new research has revealed that aphantasics also have other cognitive differences.

“We found that aphantasia isn’t just associated with absent visual imagery, but also with a widespread pattern of changes to other important cognitive processes,” said cognitive neuroscientist Alexei Dawes from Australia’s University of New South Wales (UNSW Sydney).

Dawes and colleagues gave 667 people (267 of whom self-identified as having aphantasia) a series of eight questionnaires on visualisation, memory, dreaming, and response to trauma.

This included the Vividness of Visual Imagery Questionnaire – you can find a version of it at The Aphantasia Network – in which participants rated the vividness of their memories from one ("no image at all, I only 'know' that I am recalling the memory") to five ("as vivid as normal vision").

“People with aphantasia reported a reduced ability to remember the past, imagine the future, and even dream. This suggests that visual imagery might play a key role in memory processes,” explained Dawes.

Not only did aphantasics dream less often, but their dreams were also less vivid and contained less sensory detail.

“This suggests that any cognitive function involving a sensory visual component – be it voluntary or involuntary – is likely to be reduced in aphantasia,” said cognitive neuroscientist Joel Pearson, director of UNSW Future Minds Lab.

Some of those with aphantasia also reported decreased imagining with other senses.

“Our data also showed that individuals with aphantasia not only report being unable to visualise, but also report comparatively reduced imagery, on average, in all other sensory modalities, including auditory, tactile, kinesthetic, taste, olfactory and emotion,” the team wrote in their paper.

This backs up personal reports from aphantasics exploring their own experiences with aphantasia. Aphantasic Alan Kendle shares the moment he realised that, unlike him, other people can hear music playing in their minds.

“I could not comprehend it initially, the ability to play music in the mind for me, was extraordinary – almost like a magic trick seen on television,” he wrote.

But not all of those with visual aphantasia had their other sensory imaginings missing, suggesting variations in this way of experiencing our inner minds.

The researchers note that because their study relied on self-reporting, the results may be influenced by response biases, where people who identify themselves a certain way answer questions according to how they believe someone with that identity would.

But other aspects of the findings suggest self-reporting did not bias the results too significantly: answers varied between participants, and the data indicated that spatial abilities – the capacity to map relationships and distances between objects – were unaffected in the volunteers.

“We’re only just starting to learn how radically different the internal worlds of those without imagery are,” concluded Dawes.

There is still much to explore about how we each experience our internal mental worlds. If you think you might have aphantasia, you can help researchers understand more about this phenomenon by signing up here.

This research was published in Scientific Reports.


https://www.core77.com/posts/100422/Apples-UI-Design-Aesthetic-Moving-Towards-Neumorphism

Apple’s UI Design Aesthetic Moving Towards Neumorphism

It looks like the design trend is here to stay, at least for a few years

By Rain Noe – June 26

Neumorphism (from neo-skeuomorphism) is the latest style of designing digital interfaces. Shortly after it emerged last year – according to UX Collective, the "Designer Zero" was Alexander Plyuto, with his design for a mobile banking interface below – some claimed it would become a 2020 design trend, while others insisted "Neumorphism will NOT be a huge trend in 2020."

Alexander Plyuto

If a company as influential as Apple were to adopt neumorphism, the argument would be settled. And after they introduced their upcoming Big Sur OS update at WWDC, revealing changes to the icons that viewers quickly picked up on…

Max Rudberg (@maxrudberg): “macOS Big Sur app icons”

…it seems the trend will stick around, at least for a few years.

So what exactly is neumorphism? Think of it as a course correction, the design version of steering into the skid in an endlessly fishtailing car that still somehow manages to move forwards. I look at neumorphism as the fourth phase/movement/trend in the representative elements of graphic user interface design. To explain, when Apple came out with the first Macintosh in 1984, the icons looked like this:

Original Macintosh icons

Designer Susan Kare’s task was to convey pictographic information with a minimal supply of pixels. She did this brilliantly.

By the time Apple came out with the first iPhone in 2007, the home screen looked like this:

iPhone OS 1

Those icons were all skeuomorphic, i.e. tiny renderings of things rather than icons, even if they kept the name “icon.” Advances in digital design are partly about growing bored with the last style and partly about showing off the technology. These skeuomorphic icons say “Look at the resolution we can achieve with this handheld screen!”

After six years of this, Jony Ive apparently felt the car starting to slide, and steered into the skid to get it to slide the other way. In 2013 Apple revealed that they were going with the “flat” aesthetic for iOS 7:

iOS 7

Gradations were allowed, but no discrete highlights or shading. Everything was simplified or, as the name suggests, flattened. A dash of showing off, here and there: Look at the absurdly fine teeth on the gears for the “Settings” icon, the thinness of the numbers on the clock and calendar, the fineness of the reticle on the compass. These could only be achieved with the improved resolution of the evolving iPhone.

Which brings us to neumorphism. As the Input Mag article “Apple, Big Sur, and the rise of Neumorphism” explains,

“When you boil it down, neumorphism is a focus on how light moves in three-dimensional space…. What sets neumorphism apart from its progenitor is that the focus is on the light itself and how it interacts with a variety of objects in a purely digital space.”
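In practice, designers typically achieve this light-in-digital-space effect with a pair of shadows: a light one offset toward an imagined light source and a dark one offset away from it. As a rough illustration (not from the article, with all colors and offsets chosen arbitrarily), a small Python helper can generate such a CSS declaration:

```python
def neumorphic_shadow(offset=8, blur=16,
                      light="rgba(255,255,255,0.8)",
                      dark="rgba(0,0,0,0.15)"):
    """Build a CSS box-shadow pair for a neumorphic element.

    The technique: one light shadow cast toward the upper-left and one
    dark shadow toward the lower-right, simulating a single soft light
    source hitting a matte surface. All values here are illustrative.
    """
    return (f"box-shadow: -{offset}px -{offset}px {blur}px {light}, "
            f"{offset}px {offset}px {blur}px {dark};")

print(neumorphic_shadow())
```

Applied to a rounded element whose background color matches the page behind it, the two shadows make the element appear softly extruded from the surface – the signature neumorphic look.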

This style has been wielded and named by designers of speculative interfaces, often using very few to no colors at all. Some examples:

Filip Legierski

Voicu Apostol

Elena Zelova

Alexander Plyuto

At the risk of oversimplifying it, the aesthetic is very Kenya Hara. It’s something between skeuomorphism and flat, but way closer to the (f)latter. It’s clean, minimal, and the current examples stick rigidly to basic geometry, with the most advanced form being a squircle. And particularly with the inclusion of the Braun logo in the example above, you can see at least one source of inspiration.

At their recent Worldwide Developers Conference, Craig Federighi, Apple’s senior vice president of Software Engineering, pulled the sheet off of their forthcoming Big Sur OS update, hailing it as “our biggest update to design in more than a decade.” The Big Sur icons revealed in the presentation look like this:

With the exception of the Calendar icon, all of them have shading and highlights from a consistent 12 o’clock light source. They’re no longer flat, and are certainly not skeuomorphic, but they’re moving back towards three-dimensionality, as seen in the word bubble of Messages, the Calculator, and the Mac icon.

If we look at a side-by-side comparison of the Big Sur icons with the current ones, we can see even more clearly what Apple’s doing:

They’re doing away with irregular icon shapes, constraining them all within squircles. I suppose this is in an effort to reduce visual chaos, but to my eye, it will make the icons more difficult to distinguish from each other. This move isn’t surprising; Apple has always prized aesthetics over UX.

In any case, with Apple tacitly endorsing neumorphism, we can expect to see the style popping up (no pun intended) all over the place.

https://techcrunch.com/2020/06/24/biased-ai-perpetuates-racial-injustice/

Biased AI perpetuates racial injustice

Miriam Vogel (@VogelMiriam) – 2:04 pm PDT, June 24, 2020

Image Credits: John M Lund Photography Inc/Getty Images

Miriam Vogel is the president and CEO of EqualAI, a nonprofit organization focused on reducing unconscious bias in artificial intelligence.

The murder of George Floyd was shocking, but we know that his death was not unique. Too many Black lives have been stolen from their families and communities as a result of historical racism. There are deep and numerous threads woven into racial injustice that plague our country that have come to a head following the recent murders of George Floyd, Ahmaud Arbery and Breonna Taylor.

Just as important as the process underway to admit to and understand the origin of racial discrimination will be our collective determination to forge a more equitable and inclusive path forward. As we commit to address this intolerable and untenable reality, our discussions must include the role of artificial intelligence (AI). While racism has permeated our history, AI now plays a role in creating, exacerbating and hiding these disparities behind the facade of a seemingly neutral, scientific machine. In reality, AI is a mirror that reflects and magnifies the bias in our society.

I had the privilege of working with Deputy Attorney General Sally Yates to introduce implicit bias training to federal law enforcement at the Department of Justice, which I found to be as educational for those working on the curriculum as it was to those participating. Implicit bias is a fact of humanity that both facilitates (e.g., knowing it’s safe to cross the street) and impedes (e.g., false initial impressions based on race or gender) our activities. This phenomenon is now playing out at scale with AI.

As we have learned, law enforcement activities such as predictive policing have too often targeted communities of color, resulting in a disproportionate number of arrests of persons of color. These arrests are logged into the system and become data points, which are aggregated into larger data sets and, in recent years, have been used to create AI systems. The result is a feedback loop: predictive policing algorithms direct law enforcement to patrol certain neighborhoods, so crime is observed and recorded chiefly in those neighborhoods, which skews the data and thus the system’s future recommendations. Likewise, arrests made during the current protests will become data points in future data sets used to build AI systems.
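The loop described above can be made concrete with a toy simulation (entirely illustrative, not from the article): two neighborhoods have identical true incident rates, but patrols are allocated in proportion to past recorded arrests, and incidents are only recorded where officers are present.

```python
import random

def simulate_patrols(steps=1000, seed=0):
    """Toy model of a predictive-policing feedback loop.

    Both neighborhoods have the SAME underlying incident rate, but
    patrol allocation is proportional to past recorded arrests, and
    an incident only becomes a data point where officers are sent.
    """
    rng = random.Random(seed)
    true_rate = [0.1, 0.1]   # identical true incident rates
    arrests = [5, 1]         # a small initial disparity in records
    for _ in range(steps):
        # the "model": patrol probability follows recorded arrests
        p_patrol_0 = arrests[0] / (arrests[0] + arrests[1])
        patrolled = 0 if rng.random() < p_patrol_0 else 1
        # crime is only *recorded* in the patrolled neighborhood
        if rng.random() < true_rate[patrolled]:
            arrests[patrolled] += 1
    return arrests

print(simulate_patrols())  # records pile up in neighborhood 0
```

Even a tiny initial disparity in the records compounds over time, because the model's output (where to patrol) determines its own future training data.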

This feedback loop of bias within AI plays out throughout the criminal justice system and our society at large, such as determining how long to sentence a defendant, whether to approve an application for a home loan or whether to schedule an interview with a job candidate. In short, many AI programs are built on and propagate bias in decisions that will determine an individual’s and their family’s financial security and opportunities, or lack thereof — often without the user even knowing their role in perpetuating bias.

This dangerous and unjust loop did not create all of the racial disparities under protest, but it reinforced and normalized them under the protected cover of a black box.

This is all happening against the backdrop of a historic pandemic, which is disproportionately impacting persons of color. Not only have communities of color been most at risk to contract COVID-19, they have been most likely to lose jobs and economic security at a time when unemployment rates have skyrocketed. Biased AI is further compounding the discrimination in this realm as well.

This issue has solutions: diversity of ideas and experience in the creation of AI. However, despite years of promises to increase diversity — particularly in gender and race, from those in tech who seem able to remedy other intractable issues (from putting computers in our pockets and connecting with machines outside the earth to directing our movements over GPS) — recently released reports show that at Google and Microsoft, the share of technical employees who are Black or Latinx rose by less than a percentage point since 2014. The share of Black technical workers at Apple has not changed from 6%, which is at least reported, as opposed to Amazon, which does not report tech workforce demographics.

In the meantime, ethics should be part of computer science education and of employment in the tech space. AI teams should be trained on anti-discrimination laws and implicit bias, emphasizing the negative impacts on protected classes and the real human cost of getting this wrong. Companies need to do better at incorporating diverse perspectives into the creation of their AI, and they need the government to be a partner, establishing clear expectations and guardrails.

Bills have been introduced to ensure oversight and accountability for biased data, and the FTC recently issued thoughtful guidance holding companies responsible for understanding the data underlying their AI and its implications, and for providing consumers with transparent and explainable outcomes. In light of the crucial role that federal support is playing and our accelerated use of AI, one of the most important solutions is to require recipients of federal relief funding who employ AI technologies for critical uses to provide assurance of compliance with existing laws. Such an effort was recently started by several members of Congress to safeguard protected persons and classes — and should be enacted.

We all must do our part to end the cycles of bias and discrimination. We owe it to those whose lives have been taken or altered due to racism to look within ourselves, our communities and our organizations to ensure change. As we increasingly rely on AI, we must be vigilant to ensure these programs are helping to solve problems of racial injustice, rather than perpetuate and magnify them.

https://medicalxpress.com/news/2020-06-cognitive-therapy-effect-hypochondriacs.html

Cognitive therapy has lasting effect on hypochondriacs

by University of Bergen


People who suffer from health anxiety (hypochondria) spend a great deal of their time and energy checking whether or not they have a serious disease. This often has negative effects on their social life, work, and family life, to the extent that their quality of life is severely reduced.

Researchers at the University of Bergen have found that only 16 hours of cognitive behavioral therapy (CBT) can have very positive effects on hypochondriacs 10 years after treatment.

“This is the first study that follows up hypochondriacs for such a long period. It shows that CBT has good effects both one year and 10 years after therapy,” says psychiatrist Kari-Elise Veddegjærde, Ph.D. candidate at the Department of Clinical Science at the University of Bergen (UiB). The study is published in The British Journal of Psychiatry.

In the study, Veddegjærde followed 50 patients who had struggled with health anxiety for a long time. Each received 16 hours of CBT from the well-known Norwegian therapist, Professor Ingvard Wilhelmsen at UiB. The patients answered questionnaires about their quality of life before, during and after the treatment. Other therapies and the patients’ drug use were taken into account.

“We know that CBT is an effective treatment against health anxiety, but we did not know how long the effect would last. This study shows that the treatment maintains its positive effect over a long time period,” Veddegjærde says.

Veddegjærde hopes that the results of the study will lead to more psychologists and psychiatrists offering CBT in the future. From her own practice, she has found that only a couple of hours is enough for some patients.

“Since the study shows that only a few hours of CBT has positive effects up to 10 years after treatment, I hope that more regular GPs and specialists will start offering this type of treatment,” says Kari-Elise Veddegjærde.

Facts: Health anxiety and cognitive behavioral therapy

  • Health anxiety (hypochondria) is characterized by an ongoing belief that one has a serious disease or is going to have one. Often, the patient focuses on one specific disease. The most common are cancer, heart disease and neurological disease.
  • The patients use a great deal of their time checking for disease symptoms.
  • Approximately 3 percent of the persons visiting their GP suffer from hypochondria in Norway.
  • The aim of cognitive behavioral therapy (CBT) is to make patients more attentive to their unconscious mindset, by confronting the patients with questions. When the patients become more aware of their own thought patterns, it becomes possible to change them.



More information: Kari-Elise Frøystad Veddegjærde et al. Long-term effect of cognitive–behavioural therapy in patients with Hypochondriacal Disorder, BJPsych Open (2020). DOI: 10.1192/bjo.2020.22

Journal information: British Journal of Psychiatry

Provided by University of Bergen

https://www.madinamerica.com/2020/06/opposing-corruption-psychiatric-science-necessary-rights-based-approach/

Opposing Corruption in Psychiatric Science a Human Rights Imperative

Researchers Lisa Cosgrove and Allen Shaughnessy argue that “commercialized science” is incompatible with a human rights approach to mental health care.

By Micah Ingle, MA – June 26, 2020

A recent article published in Health and Human Rights explores how the relationship between the pharmaceutical industry and psychiatry undermines a human rights-based approach to mental health care. The authors, Lisa Cosgrove and Allen F. Shaughnessy, argue that “commercialized science,” with ties to the pharmaceutical industry, leads to individualized interventions that are profitable for companies and impedes social and structural changes that lead toward social justice. They propose moving toward a “moral” understanding of human suffering, rather than an economic one.

“The hegemony of the medical model and the over-reliance on organized psychiatry as the main policymaker has undermined the development of mental health policy ‘as a robust cross-sectoral issue.’ As a result, there has been an over-emphasis on biomedical interventions aimed at the individual rather than at population-based health promotion, even though the latter is just as important as individual health treatment,” Cosgrove and Shaughnessy write.
“The focus on biomedical interventions is particularly disconcerting because of the ways in which industry influence has compromised the scientific evidence base in medicine.”

In recent years, researchers, service-users, and disability advocates have argued for shifting toward a human rights-based approach to mental health care, critiquing, in particular, the reliance on “coercion” and “overmedicalization” in psychiatry.

The argument made by United Nations Special Rapporteur, Dainius Pūras, is that conventional psychiatry too often favors these individualistic medical approaches, while failing to account for social determinants of mental health, such as poverty, discrimination, and violence.

As author Lisa Cosgrove—in collaboration with Mad in America’s own Robert Whitaker—has documented, “institutional corruption” in psychiatry plays a role in viewing and treating these systemic problems as individual problems.

The current article explores how “commercialized science” undermines a human rights-based approach to mental health care. Cosgrove and Shaughnessy argue that there is significant bias and economic conflict of interest in psychiatric science, medical education, and clinical practice, with unethical ties to the pharmaceutical industry. They discuss the extent of this bias and propose a “moral framework” for understanding human suffering instead.

Cosgrove and Shaughnessy state that unethical ties between academia and the medical industry have resulted in “staggering” corruption at various levels, including “prescribing practices, medical education, guideline recommendations, and editorial decisions,” as well as research evidence.

The authors discuss four dimensions of commercial bias in the research, practice, and education of psychiatry: 1) psychiatric taxonomy, 2) psychotropic drug trials, 3) clinical care guidelines, and 4) medical education.

They argue that the way the Diagnostic and Statistical Manual of Mental Disorders (DSM) was set up, beginning with the turn away from psychoanalysis to medical psychiatry with the third edition, encouraged a rationale of “a pill for every ill.” This was due to the DSM-III’s emphasis on quantifiable, symptom checklist-based diagnoses, mimicking conventional medicine.

The authors clarify that it was not the intention of the American Psychiatric Association to create a diagnostic system that would lend itself to pharmaceutical treatment, but they cite DSM-III chair Robert Spitzer as saying “[t]he pharmaceuticals were delighted” at the new diagnostic taxonomy. Cosgrove and Shaughnessy explain:

“The fact that the majority of DSM IV and DSM V panel members had financial ties to the manufacturers of psychotropic medications used to treat the disorders described in the manual has raised concerns about industry exerting an undue influence on it.”

Second, Cosgrove and Shaughnessy discuss the relationship between medical science and industry. For example, research has found that industry-sponsored studies, unsurprisingly, tend to support their products, creating what is known as “sponsorship bias.”

In psychiatric research, pharmaceutical studies with reported conflicts of interest were nearly five times as likely to report positive results. Industry funding of phase III randomized psychotropic drug trials “consistently results in the publication of pro-industry findings, overestimation of efficacy, and underreporting of harms.”

Clinical care guideline development is another area where conflicts of interest show up. 90% of the authors behind three major American Psychiatric Association clinical guidelines—for major depressive disorder, bipolar disorder, and schizophrenia—had financial ties to the companies that created the drugs mentioned in these guides’ recommended treatments. Other research cited by the authors shows similar unethical attachments.

Finally, Cosgrove and Shaughnessy point to medical education as under the sway of industry interests as well. This ranges from medical students being provided with “meals to gifts to books or study aids” by pharmaceutical companies, to commercial support of continuing medical education credits (CME) for psychiatric practitioners. According to the authors: “almost three-fourths of the top 500 providers of CME receive commercial support.”

These industry-funded CME programs have been criticized for “containing marketing messages that are neither balanced nor accurate.” Despite calls from the National Academy of Medicine to end the relationship between industry and CME, little has changed.

Opposing bureaucratic and technocratic solutions, Cosgrove and Shaughnessy argue for a moral solution to these issues. They suggest several possibilities, such as:

  • Including the perspectives of service users with lived experience of psychological distress in developing “policies, programs, and standards of care.”
  • Challenging institutional stigmatizing of service users to avoid “benevolent othering.”
  • Emphasizing psychosocial “population-based health” rather than exclusively “intra-individual” treatments.
  • Looking at power asymmetries—a shift “from talking about chemical imbalances to addressing power imbalances.”

The authors conclude:

“What are the conditions for the possibility of a robust human rights approach to mental health? While that question eludes easy answers, a necessary starting point is recognizing that the precarious epistemological foundations of psychiatry allow the mental health field to be manipulated by industry.
Therefore, although it is clear that many people throughout the world are not getting the health care they need and deserve, it is also evident that the uncritical exportation of the biomedical disease model will not provide optimally effective mental health interventions at either the individual or population level.”

****

Cosgrove, L. & Shaughnessy, A. F. (2020). Mental health as a basic human right and the interference of commercialized science. Health and Human Rights, 22(1), 61-68. (Link)

https://scitechdaily.com/food-science-baking-self-healing-bread-and-brewing-probiotic-beer/

Food Science: Baking Self-Healing Bread and Brewing Probiotic Beer

TOPICS: Alcohol, Food Science, ITMO University

By ITMO UNIVERSITY JUNE 27, 2020

Production of bread at ITMO University. Credit: ITMO University

Scientists from ITMO University’s School of Biotechnology and Cryogenic Systems actively focus on making everyday foods better, safer and more accessible. By replacing just one ingredient in a bread formulation, the researchers managed to make this product more resistant to microorganisms that cause it to spoil. Now, they are experimenting with replacing the same ingredient in beer. If this goes well, the drink will be less strong and will acquire probiotic properties. ITMO.NEWS contacted the scientists to find out how beer’s properties depend on yeast, and also how bread can fall victim to rope spoilage.

Everyone who has ever cooked even a single dish knows that replacing one ingredient can lead to significant changes in its taste, color, and other characteristics. This is especially pertinent for products made from a small number of ingredients. 

How can modern science aid bread making or brewing? People have been baking bread and brewing beer for millennia, and it would seem that it would be extremely hard to discover something fundamentally new in these fields. But this isn’t so: the methods for making these age-old products are constantly being perfected to make them cheaper, more stable in terms of storage, and at the same time tastier.

Saccharomyces cerevisiae.

One of the most important components of bread is yeast – a natural leavening agent that gives bread its fluffy texture and distinctive aroma. The yeast used in baking is the species Saccharomyces cerevisiae. The same microorganisms are what help wort ferment, turning it into beer.

Elena Soboleva. Credit: Elena Soboleva

But what if yeast, while keeping these functions, could also give products new properties? Is it possible to pick a yeast strain that would prevent bread from spoiling prematurely? ITMO University researchers have suggested that a related group of microorganisms – Saccharomyces cerevisiae var. boulardii – could in fact be this multifunctional analogue to classical baker’s yeast.

“This is a tropical yeast strain discovered by French microbiologist Henri Boulard in the early 1920s,” explains Elena Soboleva, an associate professor at the School of Biotechnology and Cryogenic Systems. “During his travels across mainland Southeast Asia, he was studying the cholera epidemic and saw that the locals would treat themselves with fruit. Later, having examined the matter in more detail, he discovered that it wasn’t so much about the fruits as it was about the microorganisms living on their surface. This led him to identify a previously unknown species of yeast, which he called Saccharomyces boulardii and patented as a remedy for diarrhea.”

However, with the development of phylogenetic analyses, it was found that Saccharomyces boulardii is a variation of the species Saccharomyces cerevisiae.

Today, this type of yeast is used as a probiotic to restore the gut microbiome. What’s more, according to the scientists, recent research has shown that this yeast has antioxidant properties and allows the body to better absorb useful microelements and B vitamins.

Self-healing bread

One of the problems encountered by bakers is so-called rope spoilage, caused by the bacteria Bacillus licheniformis and Bacillus subtilis. These microorganisms can get into seeds from the soil and make their way into bread unimpeded. The bacterial spores are very difficult to detect, even under the most thorough quality control.

“Sometimes you can hear the opinion that baking in the oven makes bread sterile, but in reality it isn’t so,” says Elena Soboleva. “For instance, while the vegetative cells of the rope bacilli die during baking, spores manage to survive heat treatment. On their own, they’re largely innocuous, but under certain conditions (for example, under high humidity and temperature) bread can develop the so-called rope disease. The bread starts to give off an unpleasant fruity odor similar to the scent of rotting melon, and, as the disease progresses, the bread crumb becomes sticky and stretchy. Of course, such bread cannot be eaten.”

There are many methods to prevent rope disease from developing – ranging from chemical to physical and biological. For example, the rope bacilli don’t tolerate acidic environments, which is why starter cultures are widely used in bread production. However, it was discovered that Saccharomyces boulardii yeast can also be used to protect bread from Bacillus licheniformis spores.

The effect of the yeast strain on the development of rope spoilage when applied: S. cerevisiae on the left, S. cerevisiae var. boulardii on the right. Credit: Artyom Morozov

The research established that these yeasts show antagonistic activity against B. subtilis and B. licheniformis bacteria, and have a bacteriostatic effect on rope disease pathogens.

“This yeast is able to produce antibiotic substances that inhibit the activity of spores of rope bacilli, all while working like regular yeast,” points out Elena Soboleva. “We’ve conducted several experiments using different bread production technologies. To guarantee the development of rope disease, favorable conditions – a humid and warm environment – were created after the bread had been baked. Using the yeast allows us to slow the development of the disease by up to three days, which is significant for such a product.”

Probiotic beer

But the scientists’ experiments didn’t stop at bread. If Saccharomyces boulardii is used as a probiotic, why not impart this property to beer?

Experimental laboratory for the production of fermented drinks. Credit: ITMO University

ITMO University runs an experimental laboratory for the production of fermented drinks. It is here that plans to create unfiltered beer using this yeast are being developed and implemented.

“We expect the beer to acquire probiotic and antioxidant properties; what’s more, the development of the S. boulardii yeast will allow us to produce beer with lower ethanol content,” shares Artyom Morozov, a PhD student at the Faculty of Food Biotechnologies and Engineering.

The work is planned to start once the restrictions associated with the COVID-19 pandemic have been lifted.

Fundamental aspect of the work

However, the work on possible applications of Saccharomyces boulardii in the food industry is just one of many stages. For this research to reach the market, it’s necessary not only to show the advantages of using this yeast strain, but also to propose an effective way to cultivate it.

Supplies for fermented drinks. Credit: ITMO University

This is because one special feature of Saccharomyces boulardii metabolism is its optimal growth temperature of 37°C, along with its tolerance of a low pH (3.2-4). For classical baker’s yeast, these values are around 30°C and pH 4-6 respectively. The scientists are currently working to create optimal conditions that yield the maximum amount of yeast at minimal cost.

https://www.theglobeandmail.com/arts/article-exploring-the-science-behind-our-artistic-preferences/

Pretty Ugly: New book explores the science behind our personal tastes

KATE TAYLOR | UPDATED JUNE 27, 2020

Imagine that you are an early human walking through the jungle and a tiger is lurking nearby. To survive, you would need an ear that could distinguish the sound waves released by the tiger’s footfall from all the other noises around you. Or an eye that could catch sight of the slightest spot of orange among the many green leaves. You would have to recognize patterns and be alert to changes in them.

Today, you may be using those same abilities to enjoy harmonies in music or admire visual effects in art. That is the conclusion of McMaster University psychologist Daphne Maurer and Toronto writer Charles Maurer, who have spent three decades establishing a scientific basis for aesthetics. Their recent book, Pretty Ugly: Why we like some songs, faces, foods, plays, pictures, poems, etc. and dislike others, begins with evolutionary biology and goes on to use neuroscience, developmental psychology, physics, mathematics, anthropology, musicology and art history to establish the mechanisms behind our cultural tastes. The tastes are subjective, to be sure, but the way they are established is not.

“To survive it’s very important to quickly recognize what we are encountering, whether we are looking at a log floating in the river or a crocodile,” Daphne said in a recent interview, adding it is more efficient for the brain to learn patterns and recognize breaks in them, than to store information about every single encounter. “Out of that actually follow our aesthetic preferences. When we can easily recognize what we have encountered and it’s not dangerous, it brings you pleasure, like mama’s comfort food for the eyes and ears.”

As a species, humans have developed the ability to quickly recognize what we’re encountering, whether that’s a log floating in the river or a crocodile, says psychologist Daphne Maurer, explaining how we learn to appreciate patterns. CHARLES MAURER/HANDOUT

Daphne is a professor of psychology at McMaster in Hamilton, Ont., who has specialized in the development of vision, and although she has retired from teaching, her research is still coming to fruition. In 1988, she and Charles, a science writer and her husband, published The World of the Newborn, a book that explored babies’ first sensations and their growing understanding of their surroundings.

“We thought it would be nice to write a sequel but all adult perception seemed a bit much,” Charles said. “So we decided to limit it to the small area of aesthetics.”

“My bread and butter work is on perception, and how experience shapes our perception,” Daphne added. “We were curious if that perspective could help explain the arts. … People have written books on the psychology of art, the psychology of photography, the psychology of music, about flavour, but they were all separate systems. And we knew, from work on the brain, that once the information gets past about the first three synapses, it’s all processed in the same way. So we thought there must be some general principles.”

They have been working to establish those principles ever since.

One important building block in their argument is how we hear harmonics, the series of sound waves that appear everywhere from nature to music, from a tree root vibrating as we tread on it to an animal’s vocalizations to a piano being played. Encountering imperfect harmonics in nature again and again, our brain is gradually sensitized to a statistical average of all of them – or an ideal harmonic. The Maurers point out that our neural pathways are so appreciative of those patterns, we can fill them in without noticing: A standard radio can’t reproduce the low notes of a double bass or an organ yet if we listen to a concert broadcast, our brain lets us hear the full range of the instruments.

This process also applies to waves of light – that is, to our vision, letting us read impressionistic brushstrokes as a realistic image, for example.

Because we appreciate patterns, we like the familiar, the symmetrical and the average. The Maurers cite experiments where volunteers rate average faces (determined by large sets of measurements) as more attractive than ones that deviate widely from the norm. Meanwhile the patterns of fractals, geometric figures where each smaller part has the same structure as the whole, show up everywhere from landscape art to abstract paintings by Jackson Pollock.

To illustrate the way we prefer averages over extremes in their book, Charles and Daphne Maurer morphed together four portraits.
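The morphing the Maurers describe can be sketched, in simplified form, as pixel-wise averaging of aligned images. The snippet below is a hypothetical illustration, not their actual procedure: real face morphing first aligns facial landmarks, whereas here four random stand-in "portraits" are assumed to be pre-aligned.

```python
import numpy as np

# Hypothetical sketch: composite an "average face" from four aligned images.
# The portraits here are random 64x64 grayscale arrays standing in for
# pre-aligned photographs (values 0-255).
rng = np.random.default_rng(0)
portraits = [rng.integers(0, 256, size=(64, 64)).astype(float) for _ in range(4)]

# The composite is the pixel-wise mean across the stack of images.
average_face = np.mean(portraits, axis=0)

print(average_face.shape)
```

Because each output pixel is a mean, the composite smooths away each individual's deviations, which is why averaged faces tend toward the "norm" the experiments describe.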

“Just as fractals make the painting easier to perceive [for the viewer], the artist will be subject to that same phenomenon while painting. ‘Okay it works better if I go over there,’ ” Daphne said. “Fractal structure may make the painting more pleasing to the artist for reasons the artist doesn’t understand.”

Yet if we find patterns pleasing, we also like a bit of excitement, a bit of a break in the familiar.

“Comfort foods can become boring; we don’t want to eat them every day. We do seek safe novelty, which will be a deviation from what we expect.” The surveys of attractive faces, for example, show that volunteers appreciate slight improvements on the average but dislike any extreme. Meanwhile, the movement from familiar to novel and back again explains the flow of fashion: how we adopt a new hem-length or waistline and, once it has become familiar, view previous fashions as out-of-date and unattractive.

“Whether the deviation signals ugliness or danger, something negative or signals excitement and joy will really depend on our past experience,” Daphne said.

So, even though the Maurers’ work is scientific, it does not imply that there are objective standards of aesthetics; instead, negative or positive responses to sights, sounds and tastes are formed by social circumstances and personal experience. Positive experiences reinforce themselves, so the rock fan seeks out more rock concerts while the opera buff prefers opera.

This would seem to leave little room for critical judgement of art; critics might assess context or technique, but whether art is pleasing would be an entirely subjective response. To make that point, the Maurers cite a 2003 study of 29 New Zealand wine experts asked to taste an oaked chardonnay and name two of its qualities. Apart from 10 instances of “oak” or “oaky,” there is little overlap in the adjectives they picked; of the 58 words on the list, 34 are unique, describing tastes only one expert reported.

Oh well, at least the wine tasters did seem to agree unanimously when they were offered something wretched, but the larger point is neatly summarized by the title of the Maurers’ book. What’s pretty to me may be ugly to you, and vice versa.

Pretty Ugly is available from Cambridge Scholars Publishing.

https://www.psychologytoday.com/us/blog/talking-apes/202006/cognitive-decline-precedes-physical-decline-in-older-adults

Cognitive Decline Precedes Physical Decline in Older Adults

For seniors, a sound mind makes for a sound body.

Posted Jun 22, 2020

Experts generally agree that older adults who remain physically active tend to also remain mentally sharp far into their senior years. This is generally explained in terms of cognitive reserve, which is viewed as a sort of shield against the onslaught of dementia in old age. Habitual physical activity, social engagement, and mental exertion are all believed to confer cognitive reserve.

While this view of cognitive reserve certainly has intuitive appeal, there’s a fundamental problem with the way in which we tend to think of the relationship between psychological and physical health in old age. Specifically, we know that seniors who are physically, socially, and mentally active show few signs of cognitive decay, so we recommend that all seniors engage in more of these activities to avoid dementia. In particular, we often assume that maintaining physical activity actually causes older adults to remain cognitively sharp.

However, to date, no such study has demonstrated this purported causal relationship between physical activity and cognitive reserve. Rather, all studies so far have been correlational in nature, meaning that they’ve established that physical activity and cognitive reserve are related. But they haven’t demonstrated that the former actually causes the latter.

While the assertion that physical activity confers cognitive reserve has intuitive appeal, our intuitions often deceive us, especially in the realm of science. In fact, it could very well be that cognitive decline precedes physical decline in older adults. This is exactly the hypothesis that University of Geneva (Switzerland) psychologist Boris Cheval and colleagues tested in a study recently published in the journal Health Psychology.

The researchers start off with the observation that humans are by nature lazy. We all cut corners and minimize exertions. Nobody ever really has a burning desire to run five miles on the treadmill or lift weights for 45 minutes. Rather, it takes quite a bit of willpower to stick to an exercise regimen, especially in modern society where there are so many sedentary activities that are far more appealing.

Yet, if this assertion is true, then it means that we must have cognitive reserve first in order to push ourselves to engage in physical activities we would have no natural inclination to engage in otherwise.  To test this hypothesis, Cheval and colleagues analyzed data from the Survey of Health, Ageing, and Retirement in Europe (SHARE), in which over 100,000 adults from 50 to 90 years of age were measured for levels of cognitive resources and physical activity on five separate occasions over 11 years, between 2004 and 2015. This repeated-measures data set allowed the researchers to analyze which came first—cognitive or physical decline.

Physical activity was measured with a single question, “How often do you engage in activities that require a low or moderate level of energy such as gardening, cleaning the car, or doing a walk?” Respondents indicated their level of physical activity, ranging from “more than once a week” to “hardly ever, or never.”

Cognitive resources were measured in three ways. First, participants engaged in a delayed recall task. That is, they heard a list of ten words and then were asked to repeat back as many as they could remember. This is a standard measure of short-term memory, which is known to be an important aspect of the ability to stay focused on long-term goals. In other words, to accomplish a desired task you need to be able to keep it in mind.

Second, the participants were given a verbal fluency test. For example, they were asked to name as many different animals as possible in 60 seconds. An early sign of dementia is a disruption in vocabulary, whereby patients often struggle for the words they want to use. The verbal fluency test provides an indication of whether the participant is at risk of dementia.

Third, each participant indicated their ultimate level of educational attainment, whether that be high school, college, or graduate school. Ample research has shown that education provides considerable cognitive reserve against dementia in old age.

Across the five measurements over the course of eleven years, the researchers were able to discern a clear pattern. Namely, a decline in cognitive resources preceded a decline in physical activities. This means that physical activity doesn’t necessarily keep you mentally healthy, but rather it’s the reverse.

Older adults who were psychologically fit also tended to keep themselves physically fit. But those who showed declines in cognitive abilities at one point in time also showed reductions in physical activity at later points in time. In other words, those older adults who engaged in challenging mental activities to keep themselves cognitively sharp also had the willpower to keep themselves physically active and thus in good bodily health.

Life is complicated, of course, and the causal relationship between cognitive resources and physical activity isn’t straightforward either. Because the researchers had five measurements for each participant, they could detect long-term patterns in the data. While a decline in cognitive resources at one time predicted a decline in physical activity at the next time, the researchers also detected a reciprocal relationship when they looked at the times after that.

That is to say, cognitive decline comes first, but a subsequent decline in physical activity also leads to further declines in cognitive resources later. In short, these people fall into a vicious cycle in which a reduction in cognitive activity leads to a reduction of physical activity, which further impedes cognitive activity, and so on.
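The logic of this kind of repeated-measures analysis can be sketched on synthetic data. The snippet below is purely illustrative and uses made-up numbers, not the SHARE dataset or the study's model: it simulates participants whose physical activity at the next wave partly tracks their cognitive score at the current wave, then checks the lagged correlation.

```python
import numpy as np

# Illustrative sketch on synthetic data (not SHARE): with repeated waves of
# measurement, you can ask whether a score at wave t predicts a different
# score at wave t+1. The 0.6 coefficient below is an assumed value chosen
# for illustration.
rng = np.random.default_rng(42)
n = 1000

cognition_t = rng.normal(0.0, 1.0, n)      # cognitive score at wave t
noise = rng.normal(0.0, 1.0, n)
activity_t1 = 0.6 * cognition_t + noise    # physical activity at wave t+1

# A positive lagged correlation: earlier cognition predicts later activity.
lagged_r = float(np.corrcoef(cognition_t, activity_t1)[0, 1])
print(round(lagged_r, 2))
```

In the actual study the researchers ran this kind of comparison in both directions across five waves, which is how they could detect both the cognition-first pattern and the later reciprocal effect.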

The take-home from this study is clear. If you want to avoid dementia in old age, you need to maintain ample cognitive reserve. You can do this by engaging in mentally challenging tasks, including negotiating complex social relationships. You can also be a life-long learner, thus gaining the benefits that education confers on cognitive reserve. And as a mentally fit senior citizen, you’ll also understand the need to remain physically active for your body’s sake—and have the willpower to stick to a reasonable exercise regimen.

References

Cheval, B., et al. (2020). Relationship between decline in cognitive resources and physical activity. Health Psychology, 39, 519-528.