https://thenextweb.com/artificial-intelligence/2020/06/29/artificial-neural-networks-are-more-similar-to-the-brain-than-they-get-credit-for-syndication/

Research: Artificial neural networks are more similar to the brain than we thought

by BEN DICKSON — in ARTIFICIAL INTELLIGENCE

This article is part of our reviews of AI research papers, a series of posts that explore the latest findings in artificial intelligence.

Consider the animal in the following image. If you recognize it, a quick series of neuron activations in your brain will link its image to its name and other information you know about it (habitat, size, diet, lifespan, etc.). But if, like me, you’ve never seen this animal before, your mind is now racing through your repertoire of animal species, comparing tails, ears, paws, noses, snouts, and everything else to determine which bucket this odd creature belongs to. Your biological neural network is reprocessing your past experience to deal with a novel situation.

Large Indian civet
(source: Wikipedia)


Our brains, honed through millions of years of evolution, are very efficient processing machines, sorting the torrent of information we receive through our sensory inputs and associating known items with their respective categories.

That picture, by the way, is an Indian civet, an endangered species that has nothing to do with cats, dogs, and rodents. It should be placed in its own separate category (viverrids).  There you go. You now have a new bucket to place civets in, which includes this variant that was sighted recently in India.

While there is still much we don’t know about how the mind works, we are in the midst (or maybe still at the beginning) of an era of creating our own version of the human brain. After decades of research and development, researchers have managed to create deep neural networks that sometimes match or surpass human performance in specific tasks.


But one of the recurring themes in discussions about artificial intelligence is whether artificial neural networks used in deep learning work similarly to the biological neural networks of our brains. Many scientists agree that artificial neural networks are a very rough imitation of the brain’s structure, and some believe that ANNs are statistical inference engines that do not mirror the many functions of the brain. The brain, they believe, contains many wonders that go beyond the mere connection of biological neurons.

A paper recently published in the peer-reviewed journal Neuron challenges the conventional view of the functions of the human brain. Titled “Direct Fit to Nature: An Evolutionary Perspective on Biological and Artificial Neural Networks,” the paper argues that, contrary to the beliefs of many scientists, the human brain is a brute-force, big-data processor that fits its parameters to the many examples it experiences. That’s the kind of description usually given to deep neural networks.

Authored by researchers at Princeton University, the thought-provoking paper provides a different perspective on neural networks, analogies between ANNs and their biological counterparts, and future directions for creating more capable artificial intelligence systems.

AI’s interpretability challenge


Neuroscientists generally believe that the complex functionalities of the brain can be broken down into simple, interpretable models.

For instance, I can explain the complex mental process of my analysis of the civet picture (before I knew its name, of course) as follows: “It’s definitely not a bird because it doesn’t have feathers and wings. And it certainly isn’t a fish. It’s probably a mammal, given the furry coat. It could be a cat, given the pointy ears, but the neck is a bit too long and the body shape a bit weird. The snout is a bit rodent-like, but the legs are longer than most rodents…” and finally I would come to the conclusion that it’s probably an esoteric species of cat. (In my defense, it is a very distant relative of felines, if you insist.)

Artificial neural networks, however, are often dismissed as uninterpretable black boxes. They do not provide rich explanations of their decision process. This is especially true when it comes to complex deep neural networks that are composed of hundreds (or thousands) of layers and millions (or billions) of parameters.

During their training phase, deep neural networks review millions of images and their associated labels, and then they mindlessly tune their millions of parameters to the patterns they extract from those images. These tuned parameters then allow them to determine which class a new image belongs to. They don’t understand the higher-level concepts that I just mentioned (neck, ear, nose, legs, etc.) and only look for consistency between the pixels of an image.
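
To make that process concrete, here is a minimal, hypothetical sketch of such a training loop in PyTorch. It uses random tensors in place of a real labeled image set and a tiny network rather than the large models discussed here; the point is only to illustrate the general recipe of comparing predictions to labels and nudging parameters to reduce the mismatch.

```python
import torch
from torch import nn

# Placeholder data standing in for a labeled image set:
# 64 RGB "images" (3x32x32) with labels drawn from 10 classes.
images = torch.randn(64, 3, 32, 32)
labels = torch.randint(0, 10, (64,))

# A tiny classifier; real vision models are far larger but train the same way.
model = nn.Sequential(
    nn.Flatten(),
    nn.Linear(3 * 32 * 32, 128),
    nn.ReLU(),
    nn.Linear(128, 10),
)
loss_fn = nn.CrossEntropyLoss()
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)

for step in range(100):
    logits = model(images)          # predictions from the current parameters
    loss = loss_fn(logits, labels)  # mismatch between predictions and labels
    optimizer.zero_grad()
    loss.backward()                 # gradient of the loss w.r.t. every parameter
    optimizer.step()                # nudge each parameter to reduce the mismatch
```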

The authors of “Direct Fit to Nature” acknowledge that neural networks—both biological and artificial—can differ considerably in their circuit architecture, learning rules, and objective functions.

“All networks, however, use an iterative optimization process to pursue an objective, given their input or environment—a process we refer to as ‘direct fit,’” the researchers write. The term “direct fit” is inspired by the blind fitting process observed in evolution, an elegant but mindless optimization process in which different organisms adapt to their environment through a series of random genetic transformations carried out over a very long period.

“This framework undercuts the assumptions of traditional experimental approaches and makes unexpected contact with long-standing debates in developmental and ecological psychology,” the authors write.

Another problem that the artificial intelligence community faces is the tradeoff between interpretability and generalization. Scientists and researchers are constantly searching for new techniques and structures that can generalize AI capabilities across vaster domains. And experience has shown that, when it comes to artificial neural networks, scale improves generalization. Advances in processing hardware and the availability of large compute resources have enabled researchers to create and train very large neural networks in reasonable timeframes. And these networks have proven to be remarkably better at performing complex tasks such as computer vision and natural language processing.

The problem with artificial neural networks, however, is that the larger they get, the more opaque they become. With their logic spread across millions of parameters, they become much harder to interpret than a simple regression model that assigns a single coefficient to each feature. Simplifying the structure of artificial neural networks (e.g., reducing the number of layers or variables) will make it easier to interpret how they map different input features to their outcomes. But simpler models are also less capable of dealing with the complex and messy data found in nature.
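
A rough, hypothetical illustration of that tradeoff, using scikit-learn and synthetic data (nothing from the paper): a linear regression exposes one readable coefficient per feature, while even a small neural network spreads its fit across thousands of weights that have no such one-to-one reading.

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.neural_network import MLPRegressor

# Synthetic data: 500 samples, 5 features, a known linear relationship plus noise.
rng = np.random.default_rng(0)
X = rng.normal(size=(500, 5))
y = X @ np.array([1.5, -2.0, 0.0, 0.5, 3.0]) + rng.normal(scale=0.1, size=500)

# The simple model: one coefficient per feature, easy to interpret.
linear = LinearRegression().fit(X, y)
print("linear coefficients:", np.round(linear.coef_, 2))

# A small neural network: its "logic" lives in thousands of weights,
# none of which maps cleanly onto a single input feature.
mlp = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=1000).fit(X, y)
n_params = sum(w.size for w in mlp.coefs_) + sum(b.size for b in mlp.intercepts_)
print("MLP parameter count:", n_params)
```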

“We argue that neural computation is grounded in brute-force direct fitting, which relies on over-parameterized optimization algorithms to increase predictive power (generalization) without explicitly modeling the underlying generative structure of the world,” the authors of “Direct Fit to Nature” write.

AI’s generalization problem

Say you want to create an AI system that detects chairs in images and videos. Ideally, you would provide the algorithm with a few images of chairs, and it would be able to detect all types of normal as well as wacky and funky ones.

weird chairs
Are these chairs?

This is one of the long-sought goals of artificial intelligence: creating models that can “extrapolate” well. This means that, given a few examples of a problem domain, the model should be able to extract the underlying rules and apply them to a vast range of novel examples it hasn’t seen before.

When dealing with simple (mostly artificial) problem domains, it might be possible to achieve this kind of extrapolation by tuning a deep neural network to a small set of training data. For instance, such levels of generalization might be achievable in domains with limited features, such as sales forecasting and inventory management. (But as we’ve seen in these pages, even these simple AI models might fall apart when a fundamental change comes to their environment.)

But when it comes to messy and unstructured data such as images and text, small data approaches tend to fail. In images, every pixel effectively becomes a variable, so analyzing a set of 100×100 pixel images becomes a problem with 10,000 dimensions, each having thousands or millions of possibilities.
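
As a quick, trivial illustration of that dimensionality (not from the paper):

```python
import numpy as np

image = np.zeros((100, 100))   # one grayscale 100x100 image
features = image.reshape(-1)   # flattened: one input variable per pixel
print(features.shape)          # (10000,) -> a 10,000-dimensional problem
```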

“In cases in which there are complex nonlinearities and interactions among variables at different parts of the parameter space, extrapolation from such limited data is bound to fail,” the Princeton researchers write.

The human brain, many cognitive scientists believe, can rely on implicit generative rules without being exposed to rich data from the environment. Artificial neural networks, on the other hand, are popularly believed to lack such capabilities. This is the belief that the authors of “Direct Fit to Nature” challenge.

Direct fitting neural networks to the problem domain

“Dense sampling of the problem space can flip the problem of prediction on its head, turning an extrapolation-based problem into an interpolation-based problem,” the researchers note.

In essence, with enough samples, you will be able to capture a large enough area of the problem domain. This makes it possible to interpolate between samples with simple computations without the need to extract abstract rules to predict the outcome of situations that fall outside the domain of the training examples.

“When the data structure is complex and multidimensional, a ‘mindless’ direct-fit model, capable of interpolation-based prediction within a real-world parameter space, is preferable to a traditional ideal-fit explicit model that fails to explain much variance in the data,” the authors of “Direct Fit to Nature” write.

interpolation vs extrapolation
Extrapolation (left) tries to extract rules from big data and apply them to the entire problem space. Interpolation (right) relies on rich sampling of the problem space to calculate the spaces between samples.
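
The distinction is easy to demonstrate numerically. The sketch below is a hypothetical illustration, not the paper’s model: it fits a flexible polynomial to dense samples of a sine curve. Predictions inside the sampled region stay accurate because they only interpolate between nearby samples, while predictions outside that region, which require extrapolation, quickly fall apart.

```python
import numpy as np

# Densely sample a "world" (here, a sine curve) on the interval [0, 2*pi].
rng = np.random.default_rng(0)
x_train = np.sort(rng.uniform(0, 2 * np.pi, 200))
y_train = np.sin(x_train)

# A flexible direct fit to the samples: a degree-9 polynomial,
# with no knowledge of the underlying sine "rule".
coeffs = np.polyfit(x_train, y_train, deg=9)

x_inside = np.linspace(0.5, 5.5, 50)    # within the sampled region
x_outside = np.linspace(7.0, 9.0, 50)   # beyond the sampled region

err_inside = np.abs(np.polyval(coeffs, x_inside) - np.sin(x_inside)).max()
err_outside = np.abs(np.polyval(coeffs, x_outside) - np.sin(x_outside)).max()
print(f"max interpolation error: {err_inside:.4f}")   # small
print(f"max extrapolation error: {err_outside:.4f}")  # much larger
```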

In tandem with advances in computing hardware, the availability of very large data sets has enabled the creation of direct-fit artificial neural networks in the past decade. The internet is rich with all sorts of data from various domains. Scientists create vast deep learning data sets from Wikipedia, social media networks, image repositories, and more. The advent of the internet of things (IoT) has also enabled rich sampling from physical environments (roads, buildings, weather, bodies, etc.).

In many types of applications (e.g., supervised learning algorithms), the gathered data still requires a lot of manual labor to associate each sample with its outcome. Nonetheless, the availability of big data has made it possible to apply the direct-fit approach to complex domains that can’t be represented with a few samples and general rules.

One argument against this approach is the “long tail” problem, often described as “edge cases.” For instance, in image classification, one of the outstanding problems is that popular training data sets such as ImageNet provide millions of pictures of different types of objects. But since most of the pictures were taken under ideal lighting conditions and from conventional angles, deep neural networks trained on these datasets fail to recognize those objects in rare positions.

ImageNet images vs ObjectNet images
ImageNet vs reality: In ImageNet (left column) objects are neatly positioned, in ideal background and lighting conditions. In the real world, things are messier (source: objectnet.dev)

“The long tail does not pertain to new examples per se, but to low-frequency or odd examples (e.g. a strange view of a chair, or a chair shaped like an unrelated object) or riding in a new context (like driving in a blizzard or with a flat tire),” paper co-authors Uri Hasson, professor in the Department of Psychology and the Princeton Neuroscience Institute, and Sam Nastase, postdoctoral researcher at the Princeton Neuroscience Institute, told TechTalks in written comments. “Note that biological organisms, including people, like ANNs, are bad at extrapolating to contexts they never experienced; e.g. many people fail spectacularly when driving in snow for the first time.”

Many developers try to make their deep learning models more robust by blindly adding more samples to the training data set, hoping to cover all possible situations. This usually doesn’t solve the problem, because the sampling techniques don’t widen the distribution of the data set, and edge cases remain uncovered by the easily collected data samples. The solution, Hasson and Nastase argue, is to expand the interpolation zone by providing a more ecological, embodied sampling regime for artificial neural networks that currently perform poorly in the tail of the distribution.

“For example, many of the oddities in classical human visual psychophysics are trivially resolved by allowing the observer to simply move and actively sample the environment (something essentially all biological organisms do),” they say. “That is, the long-tail phenomenon is in part a sampling deficiency. However, the solution isn’t necessarily just more samples (which will in large part come from the body of the distribution), but will instead require more sophisticated sampling observed in biological organisms (e.g. novelty seeking).”

This observation is in line with recent research that shows employing a more diverse sampling methodology can in fact improve the performance of computer vision systems.

In fact, the need for sampling from the long tail also applies to the human brain. For instance, consider one of the oft-mentioned criticisms of self-driving cars, which posits that their abilities are limited to the environments they’ve been trained in.

“Even the most experienced drivers can find themselves in a new context where they are not sure how to act. The point is to not train a foolproof car, but a self-driving car that can drive, like humans, in 99 percent of the contexts. Given the diversity of driving contexts, this is not easy, but perhaps doable,” Hasson and Nastase say. “We often overestimate the generalization capacity of biological neural networks, including humans. But most biological neural networks are fairly brittle; consider for example that raising ocean temperatures 2 degrees will wreak havoc on entire ecosystems.”

Challenging old beliefs


Many scientists criticize AI systems that rely on very large neural networks, arguing that the human brain is very resource-efficient. The brain is a three-pound mass of matter that uses a little over 10 watts of power. Deep neural networks, however, often require very large servers that can consume megawatts of power.

But hardware aside, comparing the components of the brain to artificial neural networks paints a different picture. The largest deep neural networks are composed of a few billion parameters. The human brain, in contrast, consists of approximately 1,000 trillion synapses, the biological equivalent of ANN parameters. Moreover, the brain is a highly parallel system, which makes it very hard to compare its functionality to that of ANNs.

“Although the brain is certainly subject to wiring and metabolic constraints, we should not commit to an argument for scarcity of computational resources as long as we poorly understand the computational machinery in question,” the Princeton researchers write in their paper.

Another argument is that, in contrast to ANNs, the biological neural network of the human brain has very poor input mechanisms and doesn’t have the capacity to ingest and process very large amounts of data. This, the argument goes, forces the human brain to learn new tasks from limited examples by extracting their underlying rules.

To be fair, calculating the input entering the brain is complicated. But we often underestimate the huge amount of data that we process. “For example, we may be exposed to thousands of visual exemplars of many daily categories a year, and each category may be sampled at thousands of views in each encounter, resulting in a rich training set for the visual system. Similarly, with regard to language, studies estimate that a child is exposed to several million words per year,” the authors of the paper write.

Beyond System 1 neural networks

One thing that can’t be denied, however, is that humans do in fact extract rules from their environment and develop abstract thoughts and concepts that they use to process and analyze new information. This complex symbol manipulation enables humans to compare and draw analogies between different tasks and perform efficient transfer learning. Understanding and applying causality remain among the unique features of the human brain.

“It is certainly the case that humans can learn abstract rules and extrapolate to new contexts in a way that exceeds modern ANNs. Calculus is perhaps the best example of learning to apply rules across different contexts. Discovering natural laws in physics is another example, where you learn a very general rule from a set of limited observations,” Hasson and Nastase say.

These kinds of capabilities emerge not from the activations and interactions of a single neural network but from knowledge accumulated across many minds and generations.

This is one area where direct-fit models fall short, Hasson and Nastase acknowledge. In cognitive science, the distinction is known as System 1 and System 2 thinking. System 1 refers to the kind of tasks that can be learned by rote, such as recognizing faces, walking, running, and driving. You can perform most of these capabilities subconsciously, while also performing some other task (e.g., walking and talking to someone else at the same time, or driving and listening to the radio). System 2, however, requires concentration and conscious thinking (can you solve a differential equation while jogging?).

“In the paper, we distinguish fast and automatic System 1 capacities from the slow and deliberate cognitive functions,” Hasson and Nastase say. “While direct fit allows the brain to be competent while being blind to the solution it learned (similar to all evolved functional solutions in biology), and while it explains the ability of System 1 to learn to perceive and act across many contexts, it still doesn’t fully explain a subset of human functions attributed to System 2 which seems to gain some explicit understanding of the underlying structure of the world.”


So what do we need to develop AI algorithms that have System 2 capabilities? This is one area where there’s much debate in the research community. Some scientists, including deep learning pioneer Yoshua Bengio, believe that pure neural network-based systems will eventually lead to System 2 level AI. New research in the field shows that advanced neural network structures manifest the kind of symbol manipulation capabilities that were previously thought to be off-limits for deep learning.

In “Direct Fit to Nature,” the authors support the pure neural network–based approach. In their paper, they write: “Although the human mind inspires us to touch the stars, it is grounded in the mindless billions of direct-fit parameters of System 1. Therefore, direct-fit interpolation is not the end goal but rather the starting point for understanding the architecture of higher-order cognition. There is no other substrate from which System 2 could arise.”

An alternative view is the creation of hybrid systems that incorporate classic symbolic AI with neural networks. The area has drawn much attention in the past year, and there are several projects that show that rule-based AI and neural networks can complement each other to create systems that are stronger than the sum of their parts.

“Although non-neural symbolic computing—in the vein of von Neumann’s model of a control unit and arithmetic logic units—is useful in its own right and may be relevant at some level of description, the human System 2 is a product of biological evolution and emerges from neural networks,” Hasson and Nastase wrote in their comments to TechTalks.

In their paper, Hasson and Nastase expand on some of the possible components that might develop higher capabilities for neural networks. One interesting suggestion is providing a physical body for neural networks to experience and explore the world like other living beings.

“Integrating a network into a body that allows it to interact with objects in the world is necessary for facilitating learning in new environments,” Hasson and Nastase said. “Asking a language model to learn the meaning of words from the adjacent words in text corpora exposes the network to a highly restrictive and narrow context. If the network has a body and can interact with objects and people in a way that relates to the words, it is likely to get a better sense of the meaning of words in context. Counterintuitively, imposing these sorts of ‘limitations’ (e.g. a body) on a neural network can force the neural network to learn more useful representations.”

https://www.inverse.com/science/curing-blindness-with-crispr

RESEARCHERS ARE MOVING TOWARD CURING BLINDNESS

By HEMANT KHANNA — 6.28.2020 9:30 PM

In recent months, even as our attention has been focused on the coronavirus outbreak, there has been a slew of scientific breakthroughs in treating diseases that cause blindness.

Researchers at U.S.-based Editas Medicine and Ireland-based Allergan have administered CRISPR for the first time to a person with a genetic disease. This landmark treatment uses the CRISPR approach to correct a specific mutation in a gene linked to childhood blindness. The mutation affects the functioning of the light-sensing compartment of the eye, called the retina, and leads to loss of the light-sensing cells.


According to the World Health Organization, at least 2.2 billion people in the world have some form of visual impairment. In the United States, approximately 200,000 people suffer from inherited forms of retinal disease for which there is no cure. But things have started to change for good. We can now see light at the end of the tunnel.

I am an ophthalmology and visual sciences researcher and am particularly interested in these advances because my laboratory is focusing on designing new and improved gene therapy approaches to treat inherited forms of blindness.

THE EYE AS A TESTING GROUND FOR CRISPR — Gene therapy involves inserting the correct copy of a gene into cells that have a mistake in the genetic sequence of that gene, recovering the normal function of the protein in the cell. The eye is an ideal organ for testing new therapeutic approaches, including CRISPR. That is because the eye is the most exposed part of our brain and thus is easily accessible.

The second reason is that retinal tissue in the eye is shielded from the body’s defense mechanism, which would otherwise consider the injected material used in gene therapy as foreign and mount a defensive attack response. Such a response would destroy the benefits associated with the treatment.

In recent years, breakthrough gene therapy studies paved the way for the first-ever Food and Drug Administration-approved gene therapy drug, Luxturna™, for a devastating childhood blindness disease, Leber congenital amaurosis Type 2.

This form of Leber congenital amaurosis is caused by mutations in a gene that codes for a protein called RPE65. The protein participates in chemical reactions that are needed to detect light. The mutations lessen or eliminate the function of RPE65, which leads to our inability to detect light – blindness.

The treatment method, developed simultaneously by groups at the University of Pennsylvania and at University College London and Moorfields Eye Hospital, involved inserting a healthy copy of the mutated gene directly into the space between the retina and the retinal pigmented epithelium, the tissue located behind the retina where the chemical reactions take place. This gene helped the retinal pigmented epithelium cells produce the missing protein that is dysfunctional in patients.

Although the treated eyes showed vision improvement, as measured by the patient’s ability to navigate an obstacle course at differing light levels, it is not a permanent fix. This is due to the lack of technologies that can fix the mutated genetic code in the DNA of the cells of the patient.

A NEW TECHNOLOGY TO ERASE THE MUTATION — Lately, scientists have been developing a powerful new tool that is shifting biology and genetic engineering into the next phase. This breakthrough gene-editing technology, which is called CRISPR, enables researchers to directly edit the genetic code of cells in the eye and correct the mutation causing the disease.

Children suffering from the disease Leber congenital amaurosis Type 10 endure progressive vision loss beginning as early as one year old. This specific form of Leber congenital amaurosis is caused by a change to the DNA that affects the ability of the gene – called CEP290 – to make the complete protein. The loss of the CEP290 protein affects the survival and function of our light-sensing cells, called photoreceptors.

One treatment strategy is to deliver the full form of the CEP290 gene using a virus as the delivery vehicle. But the CEP290 gene is too big to be cargo for viruses. So another approach was needed. One strategy was to fix the mutation by using CRISPR.

The scientists at Editas Medicine first showed safety and proof of concept of the CRISPR strategy in cells extracted from a patient skin biopsy and in nonhuman primates.

These studies led to the formulation of the first-ever in-human CRISPR gene therapy clinical trial. This Phase 1 and Phase 2 trial will eventually assess the safety and efficacy of the CRISPR therapy in 18 Leber congenital amaurosis Type 10 patients. The patients receive a dose of the therapy while under anesthesia, when the retina surgeon uses a scope, needle, and syringe to inject the CRISPR enzyme and nucleic acids into the back of the eye near the photoreceptors.

To make sure that the experiment is working and safe for the patients, the clinical trial has recruited people with late-stage disease and no hope of recovering their vision. The doctors are also injecting the CRISPR editing tools into only one eye.

A NEW CEP290 GENE THERAPY STRATEGY — An ongoing project in my laboratory focuses on designing a gene therapy approach for the same gene CEP290. Contrary to the CRISPR approach, which can target only a specific mutation at one time, my team is developing an approach that would work for all CEP290 mutations in Leber congenital amaurosis Type 10.

This approach involves using shorter yet functional forms of the CEP290 protein that can be delivered to the photoreceptors using the viruses approved for clinical use.

Gene therapy that involves CRISPR promises a permanent fix and a significantly reduced recovery period. A downside of the CRISPR approach is the possibility of an off-target effect in which another region of the cell’s DNA is edited, which could cause undesirable side effects, such as cancer. However, new and improved strategies have made such a likelihood very low.

Although the CRISPR study is for a specific mutation in CEP290, I believe the use of CRISPR technology in the body to be exciting and a giant leap. I know this treatment is in an early phase, but it shows clear promise. In my mind, as well as the minds of many other scientists, CRISPR-mediated therapeutic innovation absolutely holds immense promise.

An infrared image of a man and a dog. German and Swiss researchers have shown that they can endow living mice with this type of vision. (Joseph Giacomin/Getty Images)

MORE WAYS TO TACKLE BLINDNESS — In another study just reported in the journal Science, German and Swiss scientists have developed a revolutionary technology, which enables mice and human retinas to detect infrared radiation. This ability could be useful for patients suffering from a loss of photoreceptors and sight.

The researchers demonstrated this approach, inspired by the ability of snakes and bats to see heat, by endowing mice and postmortem human retinas with a protein that becomes active in response to heat. Infrared light is light emitted by warm objects that is beyond the visible spectrum.

The heat warms a specially engineered gold particle that the researchers introduced into the retina. This particle binds to the protein and helps it convert the heat signal into electrical signals that are then sent to the brain.

In the future, more research is needed to tune the infrared-sensitive proteins to different wavelengths of light, which could also enhance the remaining vision.

This approach is still being tested in animals and in retinal tissue in the lab. But all approaches suggest that it might be possible to either restore, enhance, or provide patients with forms of vision used by other species.

https://www.popsci.com/story/science/numbers-look-like-spaghetti/

Solving the medical mystery of a brain that sees numbers as spaghetti

The condition could help us better understand perception.

By Hannah Seo

A diagram of what patient RFS sees when shown the number 8.
For patient RFS (identified by his initials), numbers appear as random squiggles and swirls. (Johns Hopkins University)

The patient known as RFS looks at a number, but all he sees is “spaghetti.”

Show him a picture of one circle hovering above another, and he sees two circles. But as soon as the circles get close enough to look like an eight—spaghetti.

In 2010, at the age of 60, RFS developed corticobasal syndrome, a rare progressive degenerative condition that affects fewer than one in 100,000 people per year and corrupts parts of the cortex and basal ganglia in the brain. After about a year of headaches, flashes of vision loss, and amnesia, RFS started having muscle tremors, difficulty walking, and—perhaps most strangely—the inability to see numbers. Experts have dubbed his number confusion “digit metamorphopsia,” and hope his condition could lead to a better understanding of human perception.

“Digit blindness isn’t quite accurate,” says Teresa Schubert, a neuropsychologist at Harvard and one of the lead authors of the new PNAS paper documenting RFS’s case. Blindness implies that he is seeing the number normally but just can’t recognize it—when actually every number he sees looks like a random assortment of tangled lines, like “a plate of spaghetti,” she says. In the paper, the researchers describe RFS holding a foam figure eight and saying that the shape is “too strange for words.”

Not only are numbers distorted, but the distortions change randomly each time. The “spaghetti” of an eight in one instance would look completely different from the “spaghetti” of an eight in another.

“It’s just a mess every time. And that rules out any possibility of teaching him to recognize that this ‘spaghetti’ is a four and this is an eight,” Schubert says. “We also discovered that if other shapes are too close to a digit, they get absorbed into this warping, this spaghetti that he sees.”

So in the case of the two stacked circles, slowly coming together until they look like an eight, the visual distortion occurs as soon as RFS’s brain starts to categorize what he is seeing as a number.

But not all numbers are scrambled equally. Numbers in word form and Roman numerals all register normally with RFS. His condition only seems to warp common numerals, and even then there’s some discrepancy: zeroes and ones are completely spared, whereas two through nine are completely unrecognizable.

So what’s going on? Schubert and her team have a few hypotheses. “It might be that zeroes and ones have very simple shapes with multiple interpretations,” she says. A zero could potentially be a circle or the letter ‘o,’ while a one could be just a line or an ‘I’ or an ‘l.’

An alternate hypothesis is that zeroes and ones were saved because they both play a unique role in how we understand numbers and values.

“Zero as a concept wasn’t actually invented until many years after people started using digits,” Schubert says, and so when we think of things like the value of nothing, or different orders of magnitude, the digits zero and one have special roles in representing quantities. “It may be that those special roles protected them and preserved them when digits two through nine were getting damaged.”

But why numbers? Schubert says it’s probably random: “it could just as easily have been letters.” She explains that there are specialized areas in the brain that process things that people have to deal with regularly, like faces, digits, letters, and so on—these areas are all candidates for these conditions where whole categories can be knocked out.

The team also wanted to see if RFS could process other information in pictures of numbers. They showed him a large image of a number, with a word or image of a face somewhere inside the number, then asked RFS what he could see while monitoring his brain’s electrical activity on an EEG. RFS had no idea what he was looking at—it was all spaghetti. But the monitors showed that his brain detected the presence of a word or face even if he himself was not aware of it.

“This is a really unique case that disrupts our intuitions about the way we think we see things,” says Schubert. “When most of us think of ‘seeing,’ we think that an image comes through our eyes and is processed by the brain, and then, sight. But what this case is really showing us is that your brain can be unconsciously detecting a face or a digit or even reading a word without you actually ‘seeing’ it.”

So there are possibly many more stages to awareness and being conscious of something than we previously thought.

Scientists have long suspected that your brain can identify something while leaving your conscious awareness out of the loop, but it’s hard to prove experimentally, says Schubert. RFS’s case is incredibly strong evidence to back up this idea, and it will have huge consequences for research on perception. Any future theory that scientists come up with to explain how our minds become alert to information will have to be reconciled with RFS’s case as an additional criterion.

Today, RFS’s daily routine has needed some adjusting, to say the least. Digits are everywhere in modern life: clocks, books, recipes, prices, to name a few. To help RFS adapt with his digit metamorphopsia, Schubert’s colleague, Michael McCloskey, another author of the paper, came up with a new collection of symbols to represent numbers. RFS learned them quickly and has been using them for nine years now. Schubert and her team even enlisted the help of an engineering student to create a new calculator app with these new number symbols, and software for his laptop that converts digits on web pages. Of course, there are limitations. Static documents like books, PDFs, or the dials on the hood of a car are all still inaccessible to RFS.

To be clear, his understanding of numbers and the concepts of mathematics are all intact. In fact, up until a few years ago, he was still a working engineering geologist, and his mental arithmetic is still excellent. And while much of the numerical information in the world will remain out of his reach, RFS can live life more or less as normal—and he is mostly in wonder at his own condition.

https://www.neowin.net/news/firefox-780/

Firefox 78.0

By Razvan Serea · Jun 29, 2020 17:04 EDT


Firefox is a fast, full-featured web browser. It offers great security, privacy, and protection against viruses, spyware, and malware, and it can easily block pop-up windows. The key features that have made Firefox so popular are its simple and effective UI, browser speed, and strong security capabilities.

Firefox has complete features for browsing the Internet. It is very reliable and flexible due to its implemented security features, along with customization options. Firefox includes pop-up blocking, tab-browsing, integrated Google search, simplified privacy controls, a streamlined browser window that shows you more of the page than any other browser and a number of additional features that work with you to help you get the most out of your time online.

https://medicalxpress.com/news/2020-06-artificial-intelligence-seizures-real-time.html

Artificial intelligence identifies, locates seizures in real-time

by Brandie Jefferson, Washington University in St. Louis

This gif was recorded during two seizures, one at 2950 seconds, the other at 9200. The top left animation is of EEG signals from three electrodes. The top right is a map of the inferred network. The third animation plots the Fiedler eigenvalue, the single value used to detect seizures using the network inference technique. Credit: Li Lab

Researchers from Washington University in St. Louis’ McKelvey School of Engineering have combined artificial intelligence with systems theory to develop a more efficient way to detect and accurately identify an epileptic seizure in real-time.

Their results were published May 26 in the journal Scientific Reports.

The research comes from the lab of Jr-Shin Li, professor in the Preston M. Green Department of Electrical & Systems Engineering, and was headed by Walter Bomela, a postdoctoral fellow in Li’s lab.

Also on the research team were Shuo Wang, a former student of Li’s and now assistant professor at the University of Texas at Arlington, and Chu-An Chou of Northeastern University.

“Our technique allows us to get raw data, process it and extract a feature that’s more informative for the machine learning model to use,” Bomela said. “The major advantage of our approach is to fuse signals from 23 electrodes to one parameter that can be efficiently processed with much less computing resources.”

In brain science, the current understanding of most seizures is that they occur when normal brain activity is interrupted by a strong, sudden hyper-synchronized firing of a cluster of neurons. During a seizure, if a person is hooked up to an electroencephalograph—a device known as an EEG that measures electrical output—the abnormal brain activity is presented as amplified spike-and-wave discharges.

“But the seizure detection accuracy is not that good when temporal EEG signals are used,” Bomela said. The team developed a network inference technique to facilitate detection of a seizure and pinpoint  its location with improved accuracy.

During an EEG session, a person has electrodes attached to different spots on their head, each electrode recording electrical activity around that spot.

“We treated EEG electrodes as nodes of a network. Using the recordings (time-series data) from each node, we developed a data-driven approach to infer time-varying connections in the network or relationships between nodes,” Bomela said. Instead of looking solely at the EEG data—the peaks and strengths of individual signals—the network technique considers relationships. “We want to infer how a brain region is interacting with others,” he said.

It is the sum of these relationships that forms the network.

Once you have a network, you can measure its parameters holistically. For instance, instead of measuring the strength of a single signal, the overall network can be evaluated for strength. There is one parameter, called the Fiedler eigenvalue, which is of particular use. “When a seizure happens, you will see this parameter start to increase,” Bomela said.

And in network theory, the Fiedler eigenvalue is also related to a network’s synchronicity—the bigger the value, the more synchronous the network. “This agrees with the theory that during seizure, the brain activity is synchronized,” Bomela said.
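
As a rough, hypothetical sketch of how such a parameter could be computed (using a simple correlation-based network as a stand-in for the team’s data-driven inference method, and random numbers in place of real EEG), one can build an adjacency matrix from the channel signals in each time window, form the graph Laplacian, and track its second-smallest eigenvalue:

```python
import numpy as np

def fiedler_value(window):
    """window: (n_channels, n_samples) EEG segment for one sliding window."""
    # Crude stand-in for network inference: absolute correlation between
    # channels as edge weights, with self-loops removed.
    adj = np.abs(np.corrcoef(window))
    np.fill_diagonal(adj, 0.0)
    # Graph Laplacian L = D - A.
    laplacian = np.diag(adj.sum(axis=1)) - adj
    # The Fiedler value is the second-smallest Laplacian eigenvalue; it grows
    # as the network becomes more strongly (synchronously) connected.
    return np.sort(np.linalg.eigvalsh(laplacian))[1]

# Hypothetical usage: 23 channels, 1-second windows at an assumed 256 Hz rate.
fs = 256
recording = np.random.randn(23, fs * 60)  # placeholder data, one minute long
trace = [fiedler_value(recording[:, t:t + fs])
         for t in range(0, recording.shape[1] - fs + 1, fs)]
# A detector would flag windows where this trace rises sharply.
```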

A bias toward synchronization also helps eliminate artifacts and background noise. If a person, for instance, scratches their arm, the associated brain activity will be captured on some EEG electrodes or channels. It will not, however, be synchronized with seizure activity. In that way, this network structure inherently reduces the importance of unrelated signals; only brain activities that are in sync will cause a significant increase of the Fiedler eigenvalue.

Currently this technique works for an individual patient. The next step is to integrate machine learning to generalize the technique for identifying different types of seizures across patients.

The idea is to take advantage of various parameters characterizing the network and use them as features to train the machine learning algorithm.

Bomela likens the way this will work to facial recognition software, which measures different features—eyes, lips and so on—generalizing from those examples to recognize any face.

“The network is like a face,” he said. “You can extract different parameters from an individual’s network—such as the clustering coefficient or closeness centrality—to help machine learning differentiate between different seizures.”

That’s because in network theory, similarities in specific parameters are associated with specific networks. In this case, those networks will correspond to different types of seizures.

One day, a person with a seizure disorder might wear a device analogous to an insulin pump. As the neurons begin to synchronize, the device would deliver medication or electrical interference to stop the seizure in its tracks.

Before this can happen, researchers need a better understanding of the neural network.

“While the ultimate goal is to refine the technique for clinical use, right now we are focused on developing methods to identify seizures as drastic changes in brain activity,” Li said. “These changes are captured by treating the brain as a network in our current method.”




More information: Walter Bomela et al, “Real-time Inference and Detection of Disruptive EEG Networks for Epileptic Seizures,” Scientific Reports (2020). DOI: 10.1038/s41598-020-65401-6

Provided by Washington University in St. Louis

https://alsnewstoday.com/2020/06/29/research-aims-help-advanced-als-patients-communicate-more-easily/

Research Aims to Help Advanced ALS Patients Communicate More Easily

BY MARISA WEXLER, MS



A research project underway at the University of Rhode Island (URI) aims to help people with advanced amyotrophic lateral sclerosis (ALS) and other severely limiting disorders communicate better by interacting with a computer that can translate their thoughts.

Among the consequences of motor diseases like ALS is the progressive loss of the fine motor skills necessary for communication. Technologies known as brain-computer interfaces (BCIs) exist that can analyze and interpret the brain signals of people with limited muscle control, helping them to communicate.

Conceptually, BCI works just as its name implies: a computer takes readings from the brain, and turns these readings into some form of communication.

Existing BCI systems, however, have notable limitations. For instance, they often require a person to have fine control over their eyes, and eye-gaze control diminishes in the latter stages of ALS. There also can be considerable day-to-day variation in a person’s brain activity over time, limiting the effectiveness of a BCI system over a longer period.

The project, funded by a three-year, roughly $250,000 grant from the National Science Foundation, aims to overcome some of these limitations by developing personalized algorithms that take into account changes in brain activity over time, in order to make the BCI more robust.

Day-to-day changes in brain activity “are speculated to be associated with several factors, including cognitive fluctuations and environmental factors,” Yalda Shahriari, PhD, a professor of engineering at URI and the project’s lead researcher, said in a university news story. “Developing personalized algorithms will enable us to predict these fluctuations and optimize performance based on each patient’s specifications and needs.”

The project incorporates two types of technologies to measure brain activity. The first, called electroencephalogram (EEG), uses electrodes to measure electrical signals in the brain. The second, functional Near Infrared Spectroscopy (fNIRS), uses near-infrared beams of light to measure the amount of oxygen in different regions of the brain; more oxygen generally indicates better blood flow and, therefore, greater brain activity.

While EEG is typically a part of brain-computer interface technologies, fNIRS is not.  Project researchers believe that combining the two will give their brain-computer interface more information to work with, allowing for more fine-tuned communication based on brain states.

fNIRS may also be particularly informative for people with limited ability to move their eyes or control eye movement for extended periods.

“We will use a hybrid of EEG and fNIRS signals to compensate for each neuroimaging modality shortage and use the complementary features obtained from each modality to improve our system,” Shahriari said.

The process of developing the refined brain-computer interface involves the use of an oddball paradigm — a psychological framework where the brain gets a set of repetitive stimuli (e.g., seeing several copies of the same picture) and then a different stimulus (e.g., a different picture). The brain’s response to the different stimulus is recorded. The specific setup involves a grid of letters and numbers and intermittent flashes of a matrix of digits, requiring participants to do some mental math.

“By giving the patient higher demanding tasks to focus on, we can trigger several cognitive functions and extract the associated signatures or neural biomarkers,” Shahriari said. “The computer can then decode the pattern of neural activities that appear after the patient performs the tasks. The patterns can be used for diagnostic and communication purposes.”

Shahriari is currently working with the National Center for Adaptive Neurotechnologies, the Rhode Island Chapter of the ALS Association, and Rhode Island Hospital to add more participants to the study.

“Our analysis of the data becomes much more powerful if we can significantly increase the number of patients in the study,” Shahriari said.

Doug Sawyer, a study participant who was diagnosed with ALS 11 years ago, said: “Taking part in the brain activity study has been very rewarding. I enjoy learning new things and staying abreast of the latest technology. Dr. Shahriari and her team have been willing to share their progress. They make me feel as if I’m part of their team and not just a test number.”

Sawyer, 57, works as a design engineer and communicates with his office using eye movement, but finds that his gaze weakens as he tires.

In addition to the research itself, the URI project also aims to help educate students — from those in elementary school to those seeking advanced degrees — about the field of BCI technology. For example, a curriculum called “Engineering the Brain” is being developed for middle school students interested in BCI technology.

Further aims include providing training to women and other groups under-represented in research today.

https://medicalxpress.com/news/2020-06-words.html

Study finds out why some words may be more memorable than others

by National Institutes of Health

NIH study suggests our brains may use search engine strategies to remember words and memories of our past experiences. Credit: Zaghloul lab, NIH/NINDS.

Thousands of words, big and small, are crammed inside our memory banks just waiting to be swiftly withdrawn and strung into sentences. In a recent study of epilepsy patients and healthy volunteers, National Institutes of Health researchers found that our brains may withdraw some common words, like “pig,” “tank,” and “door,” much more often than others, including “cat,” “street,” and “stair.” By combining memory tests, brain wave recordings, and surveys of billions of words published in books, news articles and internet encyclopedia pages, the researchers not only showed how our brains may recall words but also memories of our past experiences.

“We found that some words are much more memorable than others. Our results support the idea that our memories are wired into neural networks and that our brains search for these memories, just the way search engines track down information on the internet,” said Weizhen (Zane) Xie, Ph.D., a cognitive psychologist and post-doctoral fellow at the NIH’s National Institute of Neurological Disorders and Stroke (NINDS), who led the study published in Nature Human Behaviour. “We hope that these results can be used as a roadmap to evaluate the health of a person’s memory and brain.”

Dr. Xie and his colleagues first spotted these words when they re-analyzed the results of memory tests taken by 30 epilepsy patients who were part of a clinical trial led by Kareem Zaghloul, M.D., Ph.D., a neurosurgeon and senior investigator at NINDS. Dr. Zaghloul’s team tries to help patients whose seizures cannot be controlled by drugs, otherwise known as intractable epilepsy. During the observation period, patients spend several days at the NIH’s Clinical Center with surgically implanted electrodes designed to detect changes in brain activity.

“Our goal is to find and eliminate the source of these harmful and debilitating seizures,” said Dr. Zaghloul. “The monitoring period also provides a rare opportunity to record the neural activity that controls other parts of our lives. With the help of these patient volunteers we have been able to uncover some of the blueprints behind our memories.”

The memory tests were originally designed to assess episodic memories, or the associations—the who, what, where, and how details—we make with our past experiences. Alzheimer’s disease and other forms of dementia often destroy the brain’s capacity to make these memories.

Patients were shown pairs of words, such as “hand” and “apple,” from a list of 300 common nouns. A few seconds later they were shown one of the words, for instance “hand,” and asked to remember its pair, “apple.” Dr. Zaghloul’s team had used these tests to study how neural circuits in the brain store and replay memories.

When Dr. Xie and his colleagues re-examined the test results, they found that patients successfully recalled some words more often than others, regardless of the way the words were paired. In fact, of the 300 words used, the top five were on average about seven times more likely to be successfully recalled than the bottom five.

At first, Dr. Zaghloul and the team were surprised by the results and even a bit skeptical. For many years scientists have thought that successful recall of a paired word meant that a person’s brain made a strong connection between the two words during learning and that a similar process may explain why some experiences are more memorable than others. Also, it was hard to explain why words like “tank,” “doll,” and “pond” were remembered more often than frequently used words like “street,” “couch,” and “cloud.”

But any doubts were quickly dispelled when the team saw very similar results after 2,623 healthy volunteers took an online version of the word pair test that the team posted on the crowdsourcing website Amazon Mechanical Turk.

“We saw that some things—in this case, words—may be inherently easier for our brains to recall than others,” said Dr. Zaghloul. “These results also provide the strongest evidence to date that what we discovered about how the brain controls memory in this set of patients may also be true for people outside of the study.”

Dr. Xie got the idea for the study at a Christmas party which he attended shortly after his arrival at NIH about two years ago. After spending many years studying how our mental states—our moods, our sleeping habits, and our familiarity with something—can change our memories, Dr. Xie joined Dr. Zaghloul’s team to learn more about the inner-workings of the brain.

“Our memories play a fundamental role in who we are and how our brains work. However, one of the biggest challenges of studying memory is that people often remember the same things in different ways, making it difficult for researchers to compare people’s performances on memory tests,” said Dr. Xie. “For over a century, researchers have called for a unified accounting of this variability. If we can predict what people should remember in advance and understand how our brains do this, then we might be able to develop better ways to evaluate someone’s overall brain health.”

At the party, he met Wilma Bainbridge, Ph.D., an assistant professor in the department of psychology at the University of Chicago, who, at the time, was working as a post-doctoral fellow at the NIH’s National Institute of Mental Health (NIMH). She was trying to tackle this same issue by studying whether some things we see are more memorable than others.

For example, in one set of studies of more than 1000 healthy volunteers, Dr. Bainbridge and her colleagues found that some faces are more memorable than others. In these experiments, each volunteer was shown a steady stream of faces and asked to indicate when they recognized one from earlier in the stream.

“Our exciting finding is that there are some images of people or places that are inherently memorable for all people, even though we have each seen different things in our lives,” said Dr. Bainbridge. “And if image memorability is so powerful, this means we can know in advance what people are likely to remember or forget.”

Nevertheless, these results were limited to understanding how our brains work when we recognize something we see. At the party, Drs. Xie and Bainbridge wondered whether this idea could be applied to the recall of memories that Dr. Zaghloul’s team had been studying and if so, what would that tell us about how the brain remembers our past experiences?

In this paper, Dr. Xie proposed that the principles from an established theory, known as the Search for Associative Memory (SAM) model, may help explain their initial findings with the epilepsy patients and the healthy controls.

“We thought one way to understand the results of the word pair tests was to apply network theories for how the brain remembers past experiences. In this case, memories of the words we used look like internet or airport terminal maps, with the more memorable words appearing as big, highly trafficked spots connected to smaller spots representing the less memorable words,” said Dr. Xie. “The key to fully understanding this was to figure out what connects the words.”

To address this, the researchers wrote a novel computer modeling program that tested whether certain rules for defining how words are connected can predict the memorability results they saw in the study. The rules were based on language studies which had scanned thousands of sentences from books, news articles, and Wikipedia pages.

Initially, they found that seemingly straightforward ideas for connecting words could not explain their results. For instance, the more memorable words did not simply appear more often in sentences than the less memorable ones. Similarly, they could not find a link between the relative “concreteness” of a word’s definition and its memorability. A word like “moth” was no more memorable than a word that has more abstract meanings, like “chief.”

Instead, their results suggested that the more memorable words were more semantically similar, that is, more often linked to the meanings of other words used in the English language. This meant that when the researchers plugged semantic-similarity data into the computer model, it correctly predicted which words the patients and healthy volunteers found memorable. In contrast, this did not happen when they used data on word frequency or concreteness.
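To make the idea concrete, here is a minimal sketch of how such a prediction could be set up. It is not the authors’ actual model: the tiny four-dimensional “embeddings” below are hypothetical stand-ins for word vectors learned from large text corpora. Each word’s semantic centrality is simply its average cosine similarity to the other words in the pool, and higher-centrality words would be predicted to be more memorable.

```python
# A minimal sketch (not the authors' model): rank words by "semantic centrality",
# i.e. how similar each word is, on average, to the other words in the pool.
import numpy as np

word_vectors = {                 # hypothetical toy embeddings
    "moth":  np.array([0.1, 0.9, 0.2, 0.1]),
    "chief": np.array([0.7, 0.1, 0.6, 0.3]),
    "hand":  np.array([0.6, 0.2, 0.7, 0.4]),
    "leaf":  np.array([0.2, 0.8, 0.3, 0.2]),
}

def cosine(u, v):
    """Cosine similarity between two word vectors."""
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

def semantic_centrality(words):
    """Mean cosine similarity of each word to every other word in the pool."""
    scores = {}
    for w in words:
        others = [cosine(word_vectors[w], word_vectors[o]) for o in words if o != w]
        scores[w] = sum(others) / len(others)
    return scores

# Words with higher centrality would be predicted to be more memorable.
for word, score in sorted(semantic_centrality(word_vectors).items(),
                          key=lambda kv: -kv[1]):
    print(f"{word}: {score:.3f}")
```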

Further results supported the idea that the more memorable words represented highly trafficked hubs in the brain’s memory networks. The epilepsy patients correctly recalled the memorable words faster than the others. Meanwhile, electrical recordings of the patients’ anterior temporal lobe, a language center, showed that their brains replayed the neural signatures behind those words earlier than those of the less memorable ones. The researchers saw this trend when they looked at both averages of all results and individual trials, which strongly suggested that the more memorable words are easier for the brain to find.

Moreover, both the patients and the healthy volunteers mistakenly called out the more memorable words more frequently than any other words. Overall, these results supported previous studies suggesting that the brain may visit or pass through these highly connected memories, much as animals forage for food or a computer searches the internet.
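As a toy illustration of that foraging idea (our own, not from the study), a random walk over a small, hypothetical word graph reaches and revisits the highly connected hub far more often than the peripheral words, which is one way to picture why hub-like memories would be found faster and blurted out more often.

```python
# Toy illustration (not from the paper): on a random walk over a word graph,
# the highly connected "hub" word is visited far more often than the others.
import random
from collections import defaultdict

edges = {                        # hypothetical semantic neighbours
    "hand":   ["chief", "leaf", "moth", "street"],   # hub: many connections
    "chief":  ["hand", "street"],
    "leaf":   ["hand", "moth"],
    "moth":   ["hand", "leaf"],
    "street": ["hand", "chief"],
}

def visit_counts(start, steps=10_000, seed=0):
    """Count how often each word is visited during a random walk."""
    rng = random.Random(seed)
    counts = defaultdict(int)
    node = start
    for _ in range(steps):
        node = rng.choice(edges[node])
        counts[node] += 1
    return counts

counts = visit_counts("chief")
for word, n in sorted(counts.items(), key=lambda kv: -kv[1]):
    print(f"{word}: visited {n} times")   # "hand", the hub, dominates
```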

“You know when you type words into a search engine, and it shows you a list of highly relevant guesses? It feels like the search engine is reading your mind. Well, our results suggest that the brains of the subjects in this study did something similar when they tried to recall a paired word, and we think that this may happen when we remember many of our past experiences,” said Dr. Xie. “Our results also suggest that the structure of the English language is stored in everyone’s brains and we hope that, one day, it is used to overcome the variability doctors face when trying to evaluate the health of a person’s memory and brain.”

The team is currently exploring ways to incorporate their results and computer model into the development of memory tests for Alzheimer’s disease and other forms of dementia.




More information: “Memorability of words in arbitrary verbal associations modulates memory retrieval in the anterior temporal lobe,” Nature Human Behaviour (2020). DOI: 10.1038/s41562-020-0901-2, www.nature.com/articles/s41562-020-0901-2

Journal information: Nature Human Behaviour

Provided by National Institutes of Health

https://medicalxpress.com/news/2020-06-day-good-brain.html

A drink or two a day might be good for your brain, study says

by E.J. Mundell, HealthDay Reporter


Love a glass of wine with dinner? There’s good news for you from a study that finds “moderate” alcohol consumption—a glass or two per day—might actually preserve your memory and thinking skills.

This held true for both men and women, the researchers said.

There was one caveat, however: The study of nearly 20,000 Americans tracked for an average of nine years found that the brain benefit from alcohol mostly applied to white people, not Black people. The reasons for that remain unclear, according to a team led by Changwei Li, an epidemiologist at the University of Georgia College of Public Health, in Athens.

Among whites, however, low to moderate drinking “was significantly associated with a consistently high cognitive function trajectory and a lower rate of cognitive decline,” compared to people who never drank, the team reported June 29 in JAMA Network Open.

The study couldn’t prove that moderate drinking directly caused the preservation of thinking and memory, only that there was an association.

The range of drinking considered “low to moderate” in the study was set at less than eight drinks per week for women and less than 15 drinks per week for men. Drink more than that, and any benefit to the brain begins to fade and may even turn into harm, the researchers stressed.

Also, although the tests administered to the study participants measured cognitive attributes such as memory (word recall), overall mental status (tests of knowledge, language) and vocabulary knowledge, they were not designed to gauge whether alcohol could shield people from Alzheimer’s or other dementias.

Still, the finding that moderate drinking does no harm to thinking skills, and may even provide a benefit, “could be good news for some of the alcohol-consuming public, which makes up the majority of Americans according to the National Survey on Drug Use and Health,” said geriatric psychiatrist Dr. Jeremy Koppel. He’s an associate professor at the Feinstein Institutes for Medical Research, in Manhasset, N.Y.

But there are always downsides to drinking, including its effects on the heart, Koppel added.

“As the study authors note, the benefits of potentially enhanced cognitive performance in alcohol-consuming middle-aged Americans must be weighed against the risks of hypertension and stroke, amongst other maladies, that this exposure may confer,” said Koppel, who wasn’t involved in the new study.

The research used comprehensive data from an ongoing federal government health study involving almost 20,000 people tracked for an average of nine years between 1996 and 2008. The participants averaged about 62 years of age at the beginning of the study and 60% were women.

Li’s team noted that the “findings are in line with previous research.” Those prior studies include a major study of Californians that found that moderate alcohol consumption was tied to better cognitive function among folks averaging about 73 years of age. And data from the ongoing Nurses’ Health Study found that drinking that didn’t exceed a drink per day seemed linked to a slowing of cognitive decline for women in their 70s.

None of this means that Americans can go out and raise multiple glasses of booze to good health, however, because problem drinking is a major cause of suffering across the United States. In that regard, “public health campaigns are still needed to further reduce alcohol drinking in middle-aged or older U.S. adults, particularly among men,” Li’s group said.




More information: Ruiyuan Zhang et al., “Association of Low to Moderate Alcohol Drinking With Cognitive Functions From Middle to Older Age Among US Adults,” JAMA Network Open (2020). DOI: 10.1001/jamanetworkopen.2020.7922

Journal information: JAMA Network Open

Copyright © 2020 HealthDay. All rights reserved.

https://hbr.org/podcast/2020/06/how-the-cult-of-sleep-deprivation-affects-work-and-mental-health

How the Cult of Sleep-Deprivation Affects Work and Mental Health


Many high-powered jobs require people to work long hours and give up sleep, but that can harm your mental health – and your career.

June 29, 2020

Sleep is incredibly important for both physical and mental health. But many high-powered jobs, which require people to work long hours, operate under the false assumption that people who sacrifice sleep in order to work are more productive and more successful.

For people who suffer from anxiety and depression, lack of sleep can also create downward spirals that make those issues worse. Sleep researcher Christopher Barnes, an associate professor of management at the Foster School of Business at the University of Washington, explains how sleep deprivation can affect your mental health – and your career.

HBR Presents is a network of podcasts curated by HBR editors, bringing you the best business ideas from the leading minds in management. The views and opinions expressed are solely those of the authors and do not necessarily reflect the official policy or position of Harvard Business Review or its affiliates.

https://phys.org/news/2020-06-cartwheeling-reveals-optical-phenomenon.html

Cartwheeling light reveals new optical phenomenon

by Mike Williams, Rice University

A model by Rice University scientists shows how two positively charged spheres attached to springs are attracted to the electric field of light. Due to the motion of the spheres, the spring system scatters light at different energies when irradiated with clockwise and anticlockwise trochoidal waves. Credit: Link Research Group/Rice University

A scientist might want to do cartwheels upon making a discovery, but this time the discovery itself relies on cartwheels.

Researchers at Rice University have discovered details about a novel type of polarized light-matter interaction involving light that literally turns end over end as it propagates from a source. Their finding could help scientists study molecules, such as those in light-harvesting antennas, that are expected to be uniquely sensitive to the phenomenon.

The researchers observed the effect they call trochoidal dichroism in the light scattered by two coupled dipole scatterers, in this case a pair of closely spaced plasmonic metal nanorods, when they were excited by the cartwheeling light.

The light polarization the researchers used is fundamentally different from the linear polarization that makes sunglasses work and corkscrew-like circularly polarized light used in circular dichroism to study the conformation of proteins and other small molecules.

Instead of taking a helical form, the field of light is flat as it cartwheels—rotating either clockwise or anticlockwise—away from the source like a rolling hula hoop. This type of light polarization, called trochoidal polarization, has been observed previously, said Rice graduate student and lead author Lauren McCarthy, but nobody knew that plasmonic nanoparticles could be used to see how it rolled.
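For readers who want the picture in symbols, one hedged way to write such a cartwheeling (trochoidal) evanescent field is sketched below; the notation and amplitudes are ours for illustration and are not taken from the paper.

```latex
% Illustrative only (our notation, not the authors' expressions): an evanescent
% field whose electric vector rotates in the plane of propagation (the x-z plane)
% rather than in the transverse plane, tracing the "cartwheel" described above.
\[
  \mathbf{E}(x,z,t) \;=\; e^{-\kappa z}
  \Bigl[\, a\,\hat{\mathbf{x}}\,\cos(kx-\omega t)
        \;\mp\; b\,\hat{\mathbf{z}}\,\sin(kx-\omega t) \Bigr]
\]
% Here k is the propagation constant along x, kappa is the evanescent decay rate
% along z, and a, b are generic amplitudes; the upper/lower sign selects clockwise
% vs. anticlockwise rotation. Circularly polarized light, by contrast, rotates in
% the plane perpendicular to the propagation direction.
```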

“Now we know how trochoidal polarizations relate to existing light-matter interactions,” she said. “There’s a difference between understanding the light and its physical properties and understanding light’s influence on matter. The differential interaction with matter, based on the material’s geometry, is the new piece here.”

The discovery by the Rice lab of chemist Stephan Link is detailed in the Proceedings of the National Academy of Sciences.

Rice University graduate student Lauren McCarthy led an effort that discovered details about a novel type of polarized light-matter interaction involving light that literally turns end over end as it propagates from a source. Credit: Jeff Fitlow/Rice University

The researchers weren’t looking specifically for trochoidal dichroism. They were generating an evanescent field in a technique they developed to study chiral gold nanoparticles, with the aim of seeing how spatially confined, left- and right-handed circularly polarized light interacted with matter. Freely propagating circularly polarized light interactions are key to several technologies, including 3-D glasses made of materials that discriminate between opposite light polarizations, but they are not as well understood when light is confined to small spaces at interfaces.

This time, instead of the circularly polarized light used before, the authors changed the incident light polarization to generate an evanescent field with cartwheeling waves. The researchers found that the clockwise and anticlockwise trochoidal polarizations interacted differently with pairs of plasmonic nanorods oriented at 90 degrees from each other. Specifically, the wavelengths of light the nanorod pairs scattered changed when the trochoidal polarization changed from clockwise to anticlockwise, which is a characteristic of dichroism.

“Trochoidal waves have been discussed, and different groups have probed their properties and applications,” McCarthy said. “However, as far as we know, no one’s observed that a material’s geometry can enable differential interactions with anticlockwise versus clockwise trochoidal waves.”

Molecules interact with light through their electric and magnetic dipoles. The researchers noted that molecules with electric and magnetic dipoles that are perpendicular to each other, as with the 90-degree nanoparticles, have charge motion that rotates in-plane when excited. Trochoidal dichroism could be used to determine the direction of this rotation, which would reveal molecular orientation.

Exciting self-assembled gold nanorod dimers also revealed subtle trochoidal dichroism effects, showing the phenomenon isn’t limited to strictly fabricated nanoparticles arranged at 90 degrees.

“Having worked with polarized light interacting with plasmonic nanostructures for a long time now, the current discovery is certainly special in several ways,” Link said. “Finding a new form of polarized light-matter interaction is exciting by itself. Equally rewarding was the process of the discovery, though, as Lauren and my former student, Kyle Smith, pushed me to keep up with their results. In the end it was a real team effort by all co-authors, of which I am very proud.”




More information: Lauren A. McCarthy et al., “Evanescent waves with trochoidal polarizations reveal a dichroism,” PNAS (2020). www.pnas.org/cgi/doi/10.1073/pnas.2004169117

Journal information: Proceedings of the National Academy of Sciences

Provided by Rice University