https://www.quantamagazine.org/how-the-brain-creates-a-timeline-of-the-past-20190212/

How the Brain Creates a Timeline of the Past

The brain can’t directly encode the passage of time, but recent work hints at a workaround for putting timestamps on memories of events

Theoretical and experimental studies are piecing together how the brain creates a temporal context for ordering memories — a kind of trailing timeline that gets blurrier for events receding into the past.

Ashley Mackenzie for Quanta Magazine

It began about a decade ago at Syracuse University, with a set of equations scrawled on a blackboard. Marc Howard, a cognitive neuroscientist now at Boston University, and Karthik Shankar, who was then one of his postdoctoral students, wanted to figure out a mathematical model of time processing: a neurologically computable function for representing the past, like a mental canvas onto which the brain could paint memories and perceptions. “Think about how the retina acts as a display that provides all kinds of visual information,” Howard said. “That’s what time is, for memory. And we want our theory to explain how that display works.”

But it’s fairly straightforward to represent a tableau of visual information, like light intensity or brightness, as functions of certain variables, like wavelength, because dedicated receptors in our eyes directly measure those qualities in what we see. The brain has no such receptors for time. “Color or shape perception, that’s much more obvious,” said Masamichi Hayashi, a cognitive neuroscientist at Osaka University in Japan. “But time is such an elusive property.” To encode that, the brain has to do something less direct.

Pinpointing what that looked like at the level of neurons became Howard and Shankar’s goal. Their only hunch going into the project, Howard said, was his “aesthetic sense that there should be a small number of simple, beautiful rules.”

They came up with equations to describe how the brain might in theory encode time indirectly. In their scheme, as sensory neurons fire in response to an unfolding event, the brain maps the temporal component of that activity to some intermediate representation of the experience — a Laplace transform, in mathematical terms. That representation allows the brain to preserve information about the event as a function of some variable it can encode rather than as a function of time (which it can’t). The brain can then map the intermediate representation back into other activity for a temporal experience — an inverse Laplace transform — to reconstruct a compressed record of what happened when.
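In computational terms, the encoding half of this scheme can be caricatured as a bank of leaky integrators, each decaying at its own rate s, whose joint activity approximates the Laplace transform of the stimulus history. The toy model below is a sketch of that idea only; the decay rates and stimulus are invented, and this is not Howard and Shankar's actual formulation:

```python
import numpy as np

def encode(stimulus, dt, decay_rates):
    """Bank of leaky integrators: each unit follows dF/dt = -s*F + f(t),
    so after the input ends, the unit with rate s holds roughly
    e^(-s * elapsed_time), approximating the Laplace transform of
    the stimulus history."""
    F = np.zeros(len(decay_rates))
    for f_t in stimulus:
        F += dt * (-decay_rates * F + f_t)  # forward Euler step
    return F

# A one-sample pulse followed by 5 seconds of silence.
dt = 0.001
stimulus = np.zeros(int(5.0 / dt) + 1)
stimulus[0] = 1.0

rates = np.array([0.5, 1.0])  # two decay rates (per second)
F = encode(stimulus, dt, rates)

# Comparing two units with different decay rates recovers how long ago
# the pulse occurred, even though no unit encodes time directly.
elapsed = np.log(F[0] / F[1]) / (rates[1] - rates[0])
print(round(elapsed, 2))  # ~5.0 seconds
```

In the full framework, an approximate inverse transform across many such rates would rebuild a compressed record of what happened when.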


The cognitive neuroscientists Marc Howard (at left) and Karthik Shankar, now at Boston University, have devoted the better part of the past decade to developing a general mathematical framework for how the brain builds a temporal context for episodic memories.

Cydney Scott for Boston University Photography (Howard); Courtesy of Karthik Shankar

Just a few months after Howard and Shankar started to flesh out their theory, other scientists independently uncovered neurons, dubbed “time cells,” that were “as close as we can possibly get to having that explicit record of the past,” Howard said. These cells were each tuned to certain points in a span of time, with some firing, say, one second after a stimulus and others after five seconds, essentially bridging time gaps between experiences. Scientists could look at the cells’ activity and determine when a stimulus had been presented, based on which cells had fired. This was the inverse-Laplace-transform part of the researchers’ framework, the approximation of the function of past time. “I thought, oh my god, this stuff on the blackboard, this could be the real thing,” Howard said.
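The readout Howard describes, inferring when a stimulus occurred from which cells are firing, can be illustrated with a few invented tuning curves. The preferred delays and tuning widths below are hypothetical, not recorded data:

```python
import numpy as np

preferred = np.array([1.0, 2.0, 5.0, 10.0])  # each cell's preferred delay, in seconds

def population_response(elapsed, width=0.3):
    """Gaussian tuning on a log-time axis: cells preferring longer delays
    respond more broadly, so the representation of the more distant past
    is blurrier."""
    return np.exp(-((np.log(elapsed) - np.log(preferred)) ** 2) / (2 * width ** 2))

def decode(activity):
    # Read out elapsed time as the preferred delay of the most active cell.
    return float(preferred[np.argmax(activity)])

print(decode(population_response(5.2)))  # → 5.0 (nearest preferred delay)
```

Given only the population activity, an observer can recover roughly when the stimulus was presented, which is what the experimenters did with the real cells.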

“It was then I knew the brain was going to cooperate,” he added.

Invigorated by empirical support for their theory, he and his colleagues have been working on a broader framework, which they hope to use to unify the brain’s wildly different types of memory, and more: If their equations are implemented by neurons, they could be used to describe not just the encoding of time but also a slew of other properties — even thought itself.

But that’s a big if. Since the discovery of time cells in 2008, the researchers had seen detailed, confirming evidence of only half of the mathematics involved. The other half — the intermediate representation of time — remained entirely theoretical.

Until last summer.

Orderings and Timestamps

In 2007, a couple of years before Howard and Shankar started tossing around ideas for their framework, Albert Tsao (now a postdoctoral researcher at Stanford University) was an undergraduate student doing an internship at the Kavli Institute for Systems Neuroscience in Norway. He spent the summer in the lab of May-Britt Moser and Edvard Moser, who had recently discovered grid cells — the neurons responsible for spatial navigation — in a brain area called the medial entorhinal cortex. Tsao wondered what its sister structure, the lateral entorhinal cortex, might be doing. Both regions provide major input to the hippocampus, which generates our “episodic” memories of experiences that occur at a particular time in a particular place. If the medial entorhinal cortex was responsible for representing the latter, Tsao reasoned, then maybe the lateral entorhinal cortex harbored a signal of time.

The kind of memory-linked time Tsao wanted to think about is deeply rooted in psychology. For us, time is a sequence of events, a measure of gradually changing content. That explains why we remember recent events better than ones from long ago, and why when a certain memory comes to mind, we tend to recall events that occurred around the same time. But how did that add up to an ordered temporal history, and what neural mechanism enabled it?

Tsao didn’t find anything at first. Even pinning down how to approach the problem was tricky because, technically, everything has some temporal quality to it. He examined the neural activity in the lateral entorhinal cortex of rats as they foraged for food in an enclosure, but he couldn’t make heads or tails of what the data showed. No distinctive time signal seemed to emerge.

Tsao tabled the work, returned to school and for years left the data alone. Later, as a graduate student in the Moser lab, he decided to revisit it, this time trying a statistical analysis of cortical neurons at a population level. That’s when he saw it: a firing pattern that, to him, looked a lot like time.

He, the Mosers and their colleagues set up experiments to test this connection further. In one series of trials, a rat was placed in a box, where it was free to roam and forage for food. The researchers recorded neural activity from the lateral entorhinal cortex and nearby brain regions. After a few minutes, they took the rat out of the box and allowed it to rest, then put it back in. They did this 12 times over about an hour and a half, alternating the colors of the walls (which could be black or white) between trials.

What looked like time-related neural behavior arose mainly in the lateral entorhinal cortex. The firing rates of those neurons abruptly spiked when the rat entered the box. As the seconds and then minutes passed, the activity of the neurons decreased at varying rates. That activity ramped up again at the start of the next trial, when the rat reentered the box. Meanwhile, in some cells, activity declined not only during each trial but throughout the entire experiment; in other cells, it increased throughout.

Based on the combination of these patterns, the researchers — and presumably the rats — could tell the different trials apart (tracing the signals back to certain sessions in the box, as if they were timestamps) and arrange them in order. Hundreds of neurons seemed to be working together to keep track of the order of the trials, and the length of each one.

“You get activity patterns that are not simply bridging delays to hold on to information but are parsing the episodic structure of experiences,” said Matthew Shapiro, a neuroscientist at Albany Medical College in New York who was not involved in the study.

The rats seemed to be using these “events” — changes in context — to get a sense of how much time had gone by. The researchers suspected that the signal might therefore look very different when the experiences weren’t so clearly divided into separate episodes. So they had rats run around a figure-eight track in a series of trials, sometimes in one direction and sometimes the other. During this repetitive task, the lateral entorhinal cortex’s time signals overlapped, likely indicating that the rats couldn’t distinguish one trial from another: They blended together in time. The neurons did, however, seem to be tracking the passage of time within single laps, where enough change occurred from one moment to the next.

Tsao and his colleagues were excited because, they posited, they had begun to tease out a mechanism behind subjective time in the brain, one that allowed memories to be distinctly tagged. “It shows how our perception of time is so elastic,” Shapiro said. “A second can last forever. Days can vanish. It’s this coding by parsing episodes that, to me, makes a very neat explanation for the way we see time. We’re processing things that happen in sequences, and what happens in those sequences can determine the subjective estimate for how much time passes.” The researchers now want to learn just how that happens.

GRAPHIC: TIMESTAMPING MEMORIES

Lucy Reading-Ikkanda/Quanta Magazine

Howard’s mathematics could help with that. When he heard about Tsao’s results, which were presented at a conference in 2017 and published in Nature last August, he was ecstatic: The different rates of decay Tsao had observed in the neural activity were exactly what his theory had predicted should happen in the brain’s intermediate representation of experience. “It looked like a Laplace transform of time,” Howard said — the piece of his and Shankar’s model that had been missing from empirical work.

“It was sort of weird,” Howard said. “We had these equations up on the board for the Laplace transform and the inverse around the same time people were discovering time cells. So we spent the last 10 years seeing the inverse, but we hadn’t seen the actual transform. … Now we’ve got it. I’m pretty stoked.”

“It was exciting,” said Kareem Zaghloul, a neurosurgeon and researcher at the National Institutes of Health in Maryland, “because the data they showed was very consistent with [Howard’s] ideas.” (In work published last month, Zaghloul and his team showed how changes in neural states in the human temporal lobe linked directly to people’s performance on a memory task.)

“There was a nonzero probability that all the work my colleagues and students and I had done was just imaginary. That it was about some set of equations that didn’t exist anywhere in the brain or in the world,” Howard added. “Seeing it there, in the data from someone else’s lab — that was a good day.”

Building Timelines of Past and Future

If Howard’s model is true, then it tells us how we create and maintain a timeline of the past — what he describes as a “trailing comet’s tail” that extends behind us as we go about our lives, getting blurrier and more compressed as it recedes into the past. That timeline could be of use not just to episodic memory in the hippocampus, but to working memory in the prefrontal cortex and conditioning responses in the striatum. These “can be understood as different operations working on the same form of temporal history,” Howard said. Even though the neural mechanisms that allow us to remember an event like our first day of school are different from those that allow us to remember a fact like a phone number or a skill like how to ride a bike, they might rely on this common foundation.

The discovery of time cells in those brain regions (“When you go looking for them, you see them everywhere,” according to Howard) seems to support the idea. So do recent findings — soon to be published by Howard, Elizabeth Buffalo at the University of Washington and other collaborators — that monkeys viewing a series of images show the same kind of temporal activity in their entorhinal cortex that Tsao observed in rats. “It’s exactly what you’d expect: the time since the image was presented,” Howard said.

He suspects that record serves not just memory but cognition as a whole. The same mathematics, he proposes, can help us understand our sense of the future, too: It becomes a matter of translating the functions involved. And that might very well help us make sense of timekeeping as it’s involved in the prediction of events to come (something that itself is based on knowledge obtained from past experiences).

Howard has also started to show that the same equations that the brain could use to represent time could also be applied to space, numerosity (our sense of numbers) and decision-making based on collected evidence — really, to any variable that can be put into the language of these equations. “For me, what’s appealing is that you’ve sort of built a neural currency for thinking,” Howard said. “If you can write out the state of the brain … what tens of millions of neurons are doing … as equations and transformations of equations, that’s thinking.”

He and his colleagues have been working on extending the theory to other domains of cognition. One day, such cognitive models could even lead to a new kind of artificial intelligence built on a different mathematical foundation than that of today’s deep learning methods. Only last month, scientists built a novel neural network model of time perception, which was based solely on measuring and reacting to changes in a visual scene. (The approach, however, focused on the sensory input part of the picture: what was happening on the surface, and not deep down in the memory-related brain regions that Tsao and Howard study.)

But before any application to AI is possible, scientists need to ascertain how the brain itself is achieving this. Tsao acknowledges that there’s still a lot to figure out, including what drives the lateral entorhinal cortex to do what it’s doing and what specifically allows memories to get tagged. But Howard’s theories offer tangible predictions that could help researchers carve out new paths toward answers.

Of course, Howard’s model of how the brain represents time isn’t the only idea out there. Some researchers, for instance, posit chains of neurons, linked by synapses, that fire sequentially. Or it could turn out that a different kind of transform, and not the Laplace transform, is at play.

Those possibilities do not dampen Howard’s enthusiasm. “This could all still be wrong,” he said. “But we’re excited and working hard.”

https://www.androidauthority.com/machine-learning-data-science-deal-951267/

Get certified in machine learning and data science for $41

Machine Learning and Data Science Certification Training Bundle

Using machine learning and big data, scientists are discovering actionable insights at an unprecedented rate. Practically every industry is champing at the bit for talent in the field, and all it takes to enter this exciting arena is dedication.

This certification bundle will point you in the right direction, with everything from overviews to specialized tips and tricks. Usually $1,599, the bundle is available to Android Authority readers for $41 right now.

Over eight units and 48 hours of Python-focused content, this learning kit will run you through the whole nine yards. A variety of subjects are covered, from Google TensorFlow to specialized stats modeling, to practical data science tools and applications. By the time you’re finished, you’ll feel confident working with real big data to transform companies and revolutionize the way we think about dynamic intelligence.

Check out the Machine Learning and Data Science Certification Training Bundle today. It’s currently just $41.

https://electrek.co/2019/02/08/tesla-model-3-new-record-charge-rate-125-kw-ccs/

Tesla Model 3 reaches new record charge rate of 126 kW – faster on CCS than Superchargers

The first European production Tesla Model 3 stopped at a 175 kW CCS charging station and set a new charge rate record for the electric vehicle: 126 kW – about 6 kW higher than on Tesla’s own Superchargers.

As we previously reported, Tesla has always used its own proprietary charging connector in its vehicles to work with its Supercharger network in North America.

In Europe, the company had been using the Type 2 connector, but Tesla confirmed that the Model 3 is getting a CCS plug instead.

Now the first production Model 3 units with CCS plugs are hitting the road, and we are getting some interesting data.

The Netherlands-based charging network operator Fastned spotted one of the first European Model 3 units at one of its 175 kW CCS charging stations and recorded its charging cycle.

They posted the results on Twitter:

Tesla Model 3 CCS charge rate

As you can see, the Model 3 appears to reach a 126 kW charge rate, which is the new highest charge rate we have seen on a Tesla vehicle and about 6 kW higher than on Tesla’s own Superchargers.

Interestingly, Tesla recently noted that Model 3 owners, and Model S and Model X owners with a CCS adapter, are going to be able to charge at up to 120 kW and that the limitation is “by the car and not the adapter” or the CCS connector.

Electrek’s Take

Very interesting. We always knew that Model 3 was capable of a higher charge rate, but it was limited at the Supercharger station.

It sounded like Tesla was also going to limit it with the CCS plug, but it now looks like they have allowed a little bump in capacity.

Looking at the chart, it’s disappointing to see the charge rate drop so radically starting at less than 50% state-of-charge.

Hopefully, there’s an update linked to the release of Supercharger V3 that should help on that front.

Regardless, it’s definitely slowly going in the right direction.

https://www.sciencealert.com/scientists-just-identified-the-brain-patterns-of-consciousness

Neuroscientists Say They’ve Identified The Unique Brain Patterns of Consciousness

DAVINIA FERNÁNDEZ-ESPEJO, THE CONVERSATION
7 FEB 2019

Humans have learned to travel through space, eradicate diseases and understand nature at the breathtakingly tiny level of fundamental particles.

Yet we have no idea how consciousness – our ability to experience and learn about the world in this way and report it to others – arises in the brain.

In fact, while scientists have been preoccupied with understanding consciousness for centuries, it remains one of the most important unanswered questions of modern neuroscience.

Now our new study, published in Science Advances, sheds light on the mystery by uncovering networks in the brain that are at work when we are conscious.

It’s not just a philosophical question. Determining whether a patient is “aware” after suffering a severe brain injury is a huge challenge both for doctors and families who need to make decisions about care.

Modern brain imaging techniques are starting to lift this uncertainty, giving us unprecedented insights into human consciousness.

For example, we know that complex brain areas including the prefrontal cortex or the precuneus, which are responsible for a range of higher cognitive functions, are typically involved in conscious thought.

However, large brain areas do many things. We therefore wanted to find out how consciousness is represented in the brain on the level of specific networks.

The reason it is so difficult to study conscious experiences is that they are entirely internal and cannot be accessed by others.

For example, we can both be looking at the same picture on our screens, but I have no way to tell whether my experience of seeing that picture is similar to yours, unless you tell me about it.

Only conscious individuals can have subjective experiences and, therefore, the most direct way to assess whether somebody is conscious is to ask them to tell us about them.

But what would happen if you lost your ability to speak? In that case, I could still ask you some questions, and you could perhaps sign your responses, for example by nodding your head or moving your hand.

Of course, the information I would obtain this way would not be as rich, but it would still be enough for me to know that you do indeed have experiences.

If you were not able to produce any responses though, I would not have a way to tell whether you’re conscious and would probably assume you’re not.

Scanning for networks

Our new study, the product of a collaboration across seven countries, has identified brain signatures that can indicate consciousness without relying on self-report or the need to ask patients to engage in a particular task, and can differentiate between conscious and unconscious patients after brain injury.

When the brain gets severely damaged, for example in a serious traffic accident, people can end up in a coma. This is a state in which you lose your ability to be awake and aware of your surroundings and need mechanical support to breathe.

It typically doesn’t last more than a few days. After that, patients sometimes wake up but don’t show any evidence of having any awareness of themselves or the world around them – this is known as a “vegetative state”.

Another possibility is that they show evidence only of a very minimal awareness – referred to as a minimally conscious state. For most patients, this means that their brain still perceives things but they don’t experience them.

However, a small percentage of these patients are indeed conscious but simply unable to produce any behavioural responses.

fMRI scanner (Semiconscious/Wikipedia/Public Domain)

We used a technique known as functional magnetic resonance imaging (fMRI), which allows us to measure the activity of the brain and the way some regions “communicate” with others.

Specifically, when a brain region is more active, it consumes more oxygen and needs higher blood supply to meet its demands.

We can detect these changes even when the participants are at rest and measure how it varies across regions to create patterns of connectivity across the brain.
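At its simplest, the connectivity analysis described here boils down to correlating regional activity time series. The sketch below uses simulated data; the region count, coupling and noise levels are invented, and this is not the study's actual pipeline:

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated resting-state signals: 200 time points for 6 "regions".
# Regions 0-2 share a common driving signal (a coupled network);
# regions 3-5 fluctuate independently.
n_timepoints = 200
shared = rng.standard_normal(n_timepoints)
data = 0.5 * rng.standard_normal((6, n_timepoints))
data[:3] += shared

# Functional connectivity: pairwise Pearson correlation between regions.
fc = np.corrcoef(data)

# The coupled pair correlates strongly; the independent pair sits near zero.
print(round(fc[0, 1], 2), round(fc[3, 4], 2))
```

The study's "patterns of communication" are richer than a single correlation matrix, but this is the basic quantity such analyses build on.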

We used the method on 53 patients in a vegetative state, 59 people in a minimally conscious state and 47 healthy participants. They came from hospitals in Paris, Liège, New York, London, and Ontario.

Patients from Paris, Liège, and New York were diagnosed through standardised behavioural assessments, such as being asked to move a hand or blink an eye.

In contrast, patients from London were assessed with other advanced brain imaging techniques that required the patient to modulate their brain to produce neural responses instead of external physical ones – such as imagining moving one’s hand instead of actually moving it.

(Tagliazucchi et al. 2019)

We found two main patterns of communication across regions. One simply reflected physical connections of the brain, such as communication only between pairs of regions that have a direct physical link between them.

This was seen in patients with virtually no conscious experience.

The other represented very complex brain-wide dynamic interactions across a set of 42 brain regions that belong to six brain networks with important roles in cognition (see image above). This complex pattern was present almost exclusively in people with some level of consciousness.

Importantly, this complex pattern disappeared when patients were under deep anaesthesia, confirming that our methods were indeed sensitive to the patients’ level of consciousness and not their general brain damage or external responsiveness.

Research like this has the potential to lead to an understanding of how objective biomarkers can play a crucial role in medical decision making.

In the future it might be possible to develop ways to externally modulate these conscious signatures and restore some degree of awareness or responsiveness in patients who have lost them, for example by using non-invasive brain stimulation techniques such as transcranial electrical stimulation.

Indeed, in my research group at the University of Birmingham, we are starting to explore this avenue.

Excitingly, the research also takes us a step closer to understanding how consciousness arises in the brain.

With more data on the neural signatures of consciousness in people experiencing various altered states of consciousness – ranging from taking psychedelics to experiencing lucid dreams – we may one day crack the puzzle.

Davinia Fernández-Espejo, Senior Lecturer, School of Psychology and Centre for Human Brain Health, University of Birmingham.

This article is republished from The Conversation under a Creative Commons license. Read the original article.

https://globalnews.ca/news/4926960/parkinsons-disease-deep-brain-simulation/

People with Parkinson’s disease set to receive better deep brain stimulation access

Adrian Dix made a significant announcement today regarding treatment for patients suffering from Parkinson’s disease.


The B.C. government is improving access to deep brain stimulation (DBS) for people with Parkinson’s disease.

DBS is a treatment option for those with Parkinson’s whose symptoms can no longer be controlled with medication.

“We are establishing and expanding a provincial program at UBC Hospital that will maintain a centralized waitlist to ensure patients undergo the primary insertion DBS procedure as they are identified,” health minister Adrian Dix said. “This plan leverages solutions in the public health-care system to increase the volume of primary insertion procedures by 100 per cent over the existing baseline.”

READ MORE: Larry Gifford: Behind the Parkinson patient viral video

The ministry of health has established a plan to address wait times for DBS. The province is increasing operating-room time for the treatments and also recruiting an additional qualified neurosurgeon with sufficient experience in primary insertions.

DBS uses electrical impulses to stimulate a target area in the brain. The stimulation affects movement by altering the activity in that area of the brain.

The procedure does not destroy any brain tissue and stimulation can be changed or stopped at any time. Surgery is required to implant the equipment that produces the electrical stimulation.

WATCH BELOW: (aired Sept. 13, 2018) Larry Gifford on learning to live with Parkinson’s disease

The province announced on Tuesday that the expansion of the provincial DBS program is in addition to the government’s surgical strategy, which aims to increase surgical volumes through targeted investment and by maximizing best practices and efficiencies.

The number of primary insertion DBS surgeries will increase from a planning baseline of 36 in 2016-17 to 72 for the 2019-20 fiscal year.

DBS is described as life-changing for some people in the Parkinson’s community. CKNW Program Director Larry Gifford, who was diagnosed with Parkinson’s in 2017, wrote about the procedure and noted that DBS is not a cure.

READ MORE: When Life Gives You Parkinson’s podcast recap — Parkinson’s doesn’t have to be a career killer

“It’s a late-stage option for people with Parkinson’s who no longer see results from Levodopa-Carbidopa medication, which creates synthetic dopamine. In cases that qualify for DBS, fine wires are inserted into parts of the brain and are electrically stimulated. Usually, the wires connect to a battery that is implanted,” Gifford wrote.

“For approximately one per cent of Parkinson’s patients worldwide who experienced extreme physical symptoms and received the treatment, deep brain stimulation is miracle-like. DBS can improve tremor, rigidity, slow movement and walking problems. A friend of mine in the U.K., David Sangster, is hoping to get DBS and has been documenting his journey on YouTube.”

WATCH: Parkinson Superwalk raises awareness, funds

Gifford describes Parkinson’s as a movement disorder, but one that is more than a shake, a tremor or a halted gait: it is a collection of symptoms which, in addition to everything you see, includes many non-physical ones such as loss of smell, bladder issues, depression, anxiety and sleep problems. There is no cure.

“People throughout B.C. with Parkinson’s disease will benefit from expanded access to deep brain stimulation procedures,” Vancouver Coastal Health’s head of neurosurgery Dr. Gary Redekop said. “We are committed to supporting the health, wellness and active lifestyles of our patients, and with these expanded services, more people with Parkinson’s disease will benefit from this life-changing surgery.”

As of January 2019, approximately 70 patients were waiting for primary DBS insertions.

https://www.extremetech.com/computing/285054-everything-we-know-about-the-raspberry-pi-4

Everything We Know About the Raspberry Pi 4

https://www.medicalnewstoday.com/articles/324347.php

Fasting boosts metabolism and fights aging

The latest study to explore the impact of fasting on the human body concludes that it increases metabolic activity more than previously realized and may even impart anti-aging benefits.

A recent study takes a look at how fasting influences metabolism.

Studies have shown that intermittent fasting can help certain people lose weight.

Although researchers are still debating exactly how effective fasting can be for weight loss, new research hints at other benefits.

In rats, for instance, studies show that fasting can increase lifespan.

Exciting as this is, evidence of the same effect in humans has yet to be seen.

The most recent study — which the authors have now published in the journal Scientific Reports — takes a fresh look at fasting in humans and provides new insight.

“Recent aging studies have shown that caloric restriction and fasting have a prolonging effect on lifespan in model animals,” says first study author Dr. Takayuki Teruya, “but the detailed mechanism has remained a mystery.”

In particular, scientists at the Okinawa Institute of Science and Technology Graduate University in Japan examined fasting’s impact on metabolism.

By understanding the metabolic processes involved, the team hopes to find ways of harnessing the benefits of fasting without the need to go without food for prolonged periods.

To investigate, they fasted four volunteers for 58 hours. Using metabolomics, or the measurement of metabolites, the researchers analyzed whole blood samples at intervals during the fasting period.

What happens during fasting?

As the human body is starved of food, there are a number of distinct metabolic changes that occur.

Normally, when carbohydrates are readily available, the body will use them as fuel. But once they are gone, it looks elsewhere for energy. In a process called gluconeogenesis, the body derives glucose from noncarbohydrate sources, such as amino acids.

Scientists can find evidence of gluconeogenesis by assessing the levels of certain metabolites in the blood, including carnitines and butyrate.

As expected, after fasting, these metabolites were present at elevated levels in the participants’ blood. However, the scientists also identified many more metabolic changes, some of which surprised them. For instance, they saw a marked increase in products of the citric acid cycle.

The citric acid cycle happens in mitochondria, and its function is to release stored energy. The hike seen in the metabolites associated with this process means that the mitochondria, the fabled powerhouses of the cell, are thrust into overdrive.

Another surprise finding was an increase in levels of purine and pyrimidine, which scientists had not yet linked to fasting.

These chemicals are a sign of increased protein synthesis and gene expression. This suggests that fasting causes cells to switch up the type and quantity of proteins that they need to function.

Fasting promotes anti-aging compounds

Higher levels of purine and pyrimidine are clues that the body might be increasing levels of certain antioxidants. Indeed, the researchers noted substantial increases in certain antioxidants, including ergothioneine and carnosine.

In an earlier study, the same team of researchers showed that, as we age, a number of metabolites decline. These metabolites include leucine, isoleucine, and ophthalmic acid.

In their latest study, they showed that fasting boosted these three metabolites. They explain that this might help explain how fasting extends lifespan in rats.

In all four subjects, the researchers identified 44 metabolites that increased during fasting, some of which increased 60-fold.

Of these 44, scientists had linked just 14 to fasting before. The authors conclude that “[c]ollectively, fasting appears to provoke a much more metabolically active state than previously realized.”

“These are very important metabolites for maintenance of muscle and antioxidant activity […]. This result suggests the possibility of a rejuvenating effect by fasting, which was not known until now.”

Dr. Takayuki Teruya

The scientists believe that a hike in antioxidants might be a survival response; during starvation, our bodies can experience high levels of oxidative stress. Producing antioxidants might help the body avoid some of the potential damage caused by free radicals.

Next, they want to replicate the results in a larger sample. They also want to identify possible ways of harnessing the beneficial effects of fasting and find out whether they can trigger the effects of caloric restriction without having to restrict caloric intake.

Although it will be some time before we can reap the benefits of fasting without the effort, the current findings provide further evidence of the health benefits of fasting.

https://www.iphoneincanada.ca/news/apple-tv-cost-homepod-loss-gruber/

Apple blogger and insider John Gruber has shared some little nuggets of info from his famous “little birdies” regarding Apple TV and HomePod sales—which apparently make Apple no money on hardware:

One thing I’ve heard from reliable little birdie is Apple effectively sells [the Apple TV] at cost. Like they really are like a $180 box. And you think wow this is amazing, it has an A10 processor which we know is super fast, it has crazy good graphics.

[…]

I’ve heard the same thing about HomePod too. Why is HomePod so much more expensive than these other speakers you can talk to? HomePod I actually have reason to believe, Apple actually sells it at a loss. I can’t prove it. I don’t think it’s a big loss.

If true, it appears Apple may be following a business model like Amazon when it comes to Apple TV and HomePod sales. That is, losing money on hardware, but making money on services from customers. Amazon has been known to sell their Kindle e-book readers at a loss, only to make money when customers buy e-books.

For Apple, when customers buy an Apple TV, it ensures they remain within the company’s ecosystem of apps and services (iCloud, Apple Music, subscriptions, etc). The same goes for HomePod, which ensures customers retain their Apple Music subscriptions, because Siri on the speaker can only stream music from the company’s own music service.

Gruber proclaims, “If you think it’s a problem that these products are so expensive compared to their competition, that too few people buy them, it’s not because Apple is charging too much, it’s because Apple engineered and designed too good of a product.”

Do you agree with this statement?

Gruber shared the information on his podcast, The Talk Show, episode 242.

Update: Bloomberg’s Mark Gurman has shot down this claim from Gruber with his own sources, saying HomePod is sold at a profit, not at a loss. Gruber has traditionally downplayed Gurman’s scoops, so it’s good to see Mark reply with his own sources.

Mark Gurman

@markgurman

I’m told Apple is selling HomePods at a profit, not a loss, which wouldn’t make any sense. If it’s losing money, that’s only because it built too many speakers people don’t seem to want, and is now sitting on unsold inventory.


https://phys.org/news/2019-02-scientists-hijack-open-access-quantum-secrets.html

Scientists ‘hijack’ open-access quantum computer to tease out quantum secrets

February 1, 2019 by Louise Lerner, University of Chicago
Researchers used IBM’s Quantum Experience, an open-access quantum computer, to test fundamental principles of quantum mechanics. Credit: IBM

The rules of quantum mechanics describe how atoms and molecules act very differently from the world around us. Scientists have made progress toward teasing out these rules—essential for finding ways to make new molecules and better technology—but some are so complex that they evade experimental verification.

With the advent of open-access quantum computers, scientists at the University of Chicago saw an opportunity to do a very unusual experiment to test some of these quantum principles. Their study, which appeared Jan. 31 in Communications Physics, essentially hijacks a quantum computer to discover fundamental truths about the quantum behavior of electrons in molecules.

“Quantum computing is a really exciting realm to explore fundamental questions. It allows us to observe aspects of quantum theory that are absolutely untouchable with classical computers,” said Prof. David Mazziotti, professor of chemistry and author on the paper.

One particular rule of quantum mechanics, called the Pauli exclusion principle, is that no two electrons can occupy the same quantum state at the same time. In many cases, a molecule’s electrons face additional restrictions on the states they can occupy; these are known as the generalized Pauli constraints. “These rules inform the way that all molecules and matter form,” said Mazziotti.
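For the system studied here — three electrons in six orbitals, the Borland–Dennis setting — the generalized Pauli constraints can be written down explicitly in terms of the natural-orbital occupation numbers. Here is a minimal sketch of that check (the constraint form comes from the published literature; the function name and example numbers are illustrative, not from the paper):

```python
def satisfies_gpc(lams, tol=1e-9):
    """Check the Borland-Dennis generalized Pauli constraints for
    three electrons in six orbitals.

    lams: the six natural-orbital occupation numbers; they should
    sum to 3 (the electron count).
    """
    l = sorted(lams, reverse=True)          # decreasing order
    # Ordinary Pauli bound: each occupation lies in [0, 1].
    if any(x < -tol or x > 1 + tol for x in l):
        return False
    # Equality constraints: lam_i + lam_(7-i) = 1 for i = 1, 2, 3.
    if any(abs(l[i] + l[5 - i] - 1.0) > tol for i in range(3)):
        return False
    # The extra inequality beyond Pauli: lam_4 <= lam_5 + lam_6.
    return l[3] <= l[4] + l[5] + tol

print(satisfies_gpc([1, 1, 1, 0, 0, 0]))              # single Slater determinant -> True
print(satisfies_gpc([0.9, 0.9, 0.6, 0.4, 0.1, 0.1]))  # breaks lam_4 <= lam_5 + lam_6 -> False
```

The second example obeys the ordinary Pauli bound (every occupation between 0 and 1) yet still violates a generalized constraint — which is exactly why these constraints carry information beyond the textbook exclusion principle.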

In this study, Mazziotti, Prof. David Schuster and graduate student Scott Smart created a set of algorithms that would ask IBM’s Q Experience computer to randomly generate quantum states in three-electron systems, and then measure where the electrons are most probably located.

“Suppose that the generalized Pauli constraints were not true: In that scenario, about half of the quantum states would exhibit a violation,” said Smart, the first author on the paper. Instead, in the many quantum states formed, they found that violations of generalized Pauli constraints occurred very rarely in a pattern consistent with noise in the quantum circuit.
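The paper’s experiment ran on quantum hardware, but at this small size the same statistical check can be mimicked classically: sample random pure three-electron states in six orbitals, build the one-particle density matrix, and test its eigenvalues (the occupation numbers) against the constraints. The sketch below is an illustration under stated assumptions, not the authors’ code — the sampling scheme, names, and tolerances are all assumptions. Because the constraints provably hold for every pure state, a noiseless simulation should find zero violations, which is the baseline the hardware noise was compared against:

```python
import itertools
import numpy as np

ORBS, NELEC = 6, 3
basis = list(itertools.combinations(range(ORBS), NELEC))  # 20 determinants
index = {det: i for i, det in enumerate(basis)}

def one_rdm(c):
    """One-particle density matrix D[p, q] = <psi|a_p^dag a_q|psi>
    for a CI vector c over the determinant basis above."""
    D = np.zeros((ORBS, ORBS), dtype=complex)
    for det, amp in zip(basis, c):
        for q in det:                                  # annihilate orbital q
            sign_q = (-1) ** det.index(q)
            rest = tuple(o for o in det if o != q)
            for p in range(ORBS):                      # create orbital p
                if p in rest:
                    continue
                new = tuple(sorted(rest + (p,)))
                sign_p = (-1) ** new.index(p)
                D[p, q] += sign_p * sign_q * np.conj(c[index[new]]) * amp
    return D

rng = np.random.default_rng(7)
violations = 0
for _ in range(200):                                   # 200 random pure states
    c = rng.normal(size=len(basis)) + 1j * rng.normal(size=len(basis))
    c /= np.linalg.norm(c)                             # normalize the state
    lam = np.linalg.eigvalsh(one_rdm(c))[::-1]         # occupations, decreasing
    ok = (abs(lam.sum() - NELEC) < 1e-8                # trace = electron count
          and all(abs(lam[i] + lam[5 - i] - 1.0) < 1e-8 for i in range(3))
          and lam[3] <= lam[4] + lam[5] + 1e-8)        # Borland-Dennis inequality
    violations += not ok
print(violations)                                      # 0: every pure state obeys the constraints
```

On noisy hardware the measured states are not exactly pure, so small apparent violations do appear — and, as the paper reports, their rarity and pattern match circuit noise rather than any failure of the constraints themselves.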

The results provide strong experimental verification of the generalized Pauli constraints, the scientists said.

“The simplest generalized Pauli constraints were discovered theoretically on a classical computer at IBM in the early 1970s, so it is fitting that for the first time they would be experimentally verified on an IBM quantum computer,” Mazziotti said.

The discovery is another breakthrough at the frontier of quantum efforts at the University; recent efforts have included a three-laboratory quantum “teleporter,” steps toward more powerful quantum sensors, and a collaboration to develop algorithms for emerging quantum computers.

An open question is how the generalized Pauli constraints may be useful for improving quantum technology. “They will potentially contribute to achieving more efficient quantum calculations as well as better error correction schemes—critical for quantum computers to reach their full potential,” Mazziotti said.


More information: Scott E. Smart et al. Experimental data from a quantum computer verifies the generalized Pauli exclusion principle, Communications Physics (2019). DOI: 10.1038/s42005-019-0110-3
