https://phys.org/news/2021-01-rare-quadruple-helix-dna-human-cells.html


Rare quadruple-helix DNA found in living human cells with glowing probes

by Hayley Dunning, Imperial College London

Quadruple-helix DNA. Credit: Imperial College London

New probes allow scientists to see four-stranded DNA interacting with molecules inside living human cells, unraveling its role in cellular processes.

DNA usually forms the classic double helix shape of two strands wound around each other. While DNA can form some more exotic shapes in test tubes, few are seen in real living cells.

However, four-stranded DNA, known as G-quadruplex, has recently been seen forming naturally in human cells. Now, in new research published today in Nature Communications, a team led by Imperial College London scientists has created new probes that can see how G-quadruplexes interact with other molecules inside living cells.

G-quadruplexes are found in higher concentrations in cancer cells, so are thought to play a role in the disease. The probes reveal how G-quadruplexes are ‘unwound’ by certain proteins, and can also help identify molecules that bind to G-quadruplexes, leading to potential new drug targets that can disrupt their activity.

Needle in a haystack

One of the lead authors, Ben Lewis, from the Department of Chemistry at Imperial, said: “A different DNA shape will have an enormous impact on all processes involving it—such as reading, copying, or expressing genetic information.

“Evidence has been mounting that G-quadruplexes play an important role in a wide variety of processes vital for life, and in a range of diseases, but the missing link has been imaging this structure directly in living cells.”

G-quadruplexes are rare inside cells, meaning standard techniques for detecting such molecules have difficulty detecting them specifically. Ben Lewis describes the problem as “like finding a needle in a haystack, but the needle is also made of hay.”

To solve the problem, researchers from the Vilar and Kuimova groups in the Department of Chemistry at Imperial teamed up with the Vannier group from the Medical Research Council’s London Institute of Medical Sciences.

Fluorescence lifetime imaging microscopy map of nuclear DNA in live cells stained with the new probe. Colours represent fluorescence lifetimes between 9 (red) and 13 (blue) nanoseconds. Credit: Imperial College London

They used a chemical probe called DAOTA-M2, which fluoresces (lights up) in the presence of G-quadruplexes, but instead of monitoring the brightness of fluorescence, they monitored how long this fluorescence lasts. This signal does not depend on the concentration of the probe or of G-quadruplexes, meaning it can be used to unequivocally visualize these rare molecules.
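The lifetime readout described here can be illustrated with a toy curve fit: assuming a monoexponential decay (a simplification of real fluorescence decay analysis), the fitted lifetime comes out the same even when the probe concentration, and hence the brightness, changes several-fold. The 13 ns value is taken from the lifetime range quoted in the figure caption; everything else is illustrative.

```python
import numpy as np
from scipy.optimize import curve_fit

def decay(t, amplitude, tau):
    # Monoexponential fluorescence decay: I(t) = A * exp(-t / tau)
    return amplitude * np.exp(-t / tau)

t = np.linspace(0, 50, 500)        # time axis in nanoseconds
rng = np.random.default_rng(0)
true_tau = 13.0                    # bound-probe lifetime (ns), from the article's range

fitted = []
for amplitude in (1.0, 5.0):       # two very different probe concentrations
    signal = decay(t, amplitude, true_tau) + rng.normal(0, 0.01, t.size)
    (a_fit, tau_fit), _ = curve_fit(decay, t, signal, p0=(1.0, 10.0))
    fitted.append(tau_fit)

# Both fits recover ~13 ns despite a 5x difference in brightness,
# which is why lifetime is a concentration-independent signal.
print(fitted)
```

This is why lifetime imaging sidesteps the "needle made of hay" problem: brightness depends on how much probe is present, but lifetime reports only on the probe's local environment.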

Dr. Marina Kuimova, from the Department of Chemistry at Imperial, said: “By applying this more sophisticated approach we can remove the difficulties which have prevented the development of reliable probes for this DNA structure.”

Looking directly in live cells

The team used their probes to study the interaction of G-quadruplexes with two helicase proteins—molecules that ‘unwind’ DNA structures. They showed that if these helicase proteins were removed, more G-quadruplexes were present, showing that the helicases play a role in unwinding and thus breaking down G-quadruplexes.

Dr. Jean-Baptiste Vannier, from the MRC London Institute of Medical Sciences and the Institute of Clinical Sciences at Imperial, said: “In the past we have had to rely on looking at indirect signs of the effect of these helicases, but now we take a look at them directly inside live cells.”

They also examined the ability of other molecules to interact with G-quadruplexes in living cells. If a molecule introduced to a cell binds to this DNA structure, it will displace the DAOTA-M2 probe and reduce its lifetime, i.e. how long the fluorescence lasts.
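The displacement readout can be sketched with a simple mixture model. This is not the paper's actual analysis; it assumes, for illustration, an amplitude-weighted average of two lifetimes (13 ns bound, 9 ns displaced, matching the colour scale in the figure caption).

```python
# Toy model: the observed lifetime as a weighted average of bound and
# displaced probe populations. Values and weighting are illustrative.

def observed_lifetime(bound_fraction, tau_bound=13.0, tau_free=9.0):
    """Average lifetime when only `bound_fraction` of probes remain bound."""
    return bound_fraction * tau_bound + (1.0 - bound_fraction) * tau_free

no_competitor = observed_lifetime(0.9)   # most probe still on the quadruplex
with_binder = observed_lifetime(0.3)     # a competing molecule displaced most probe

# The competing binder shows up as a shorter observed lifetime,
# even though the binder itself is invisible to the microscope.
print(no_competitor, with_binder)
```

In this picture, a non-fluorescent drug candidate is detected indirectly: the more probe it displaces, the further the observed lifetime drops toward the free-probe value.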

This allows interactions to be studied inside the nucleus of living cells, and lets researchers characterize molecules that are not themselves fluorescent and so cannot otherwise be seen under the microscope.

Professor Ramon Vilar, from the Department of Chemistry at Imperial, explained: “Many researchers have been interested in the potential of G-quadruplex binding molecules as potential drugs for diseases such as cancers. Our method will help to progress our understanding of these potential new drugs.”

Peter Summers, another lead author from the Department of Chemistry at Imperial, said: “This project has been a fantastic opportunity to work at the intersection of chemistry, biology and physics. It would not have been possible without the expertise and close working relationship of all three research groups.”

The three groups intend to continue working together to improve the properties of their probe and to explore new biological problems and shine further light on the roles G-quadruplexes play inside our living cells. The research was funded by Imperial’s Excellence Fund for Frontier Research.




More information: Peter A. Summers et al. Visualizing G-quadruplex DNA dynamics in live cells by fluorescence lifetime imaging microscopy, Nature Communications (2021). DOI: 10.1038/s41467-020-20414-7

Journal information: Nature Communications. Provided by Imperial College London.

https://www.thestar.com/life/health_wellness/2021/01/08/the-sleep-diet-an-idea-whose-time-has-come.html


The sleep diet — an idea whose time has come

By Christine Sismondo, Special to the Star | Fri., Jan. 8, 2021

For a lot of people, resolving to lose weight in the New Year translates into a lot of early mornings.

Some get up before dawn to get a run in while the city streets are still empty. Others want to get started on their daily juicing regimen and/or get a jump on the meal planning and packing up the day’s carefully portioned lunch.

Developments in the science of sleep, however, suggest that, instead, hitting the snooze button might be one of the most important tools we have for actually cutting back our calorie intake. Finally, a lifestyle intervention for slackers that involves calling it early, sleeping in, and, of course, plenty of late-morning and mid-afternoon naps.

The Sleep Diet — an idea whose time has come.

The past few years have seen a lot of peer-reviewed research fleshing out the sleep-diet connection and, at this point, there’s really no doubt that not getting enough sleep is a serious risk factor for obesity. And, now, studies working the reverse angle — that sleep might actually help people lose weight — are starting to emerge.

One study published in the “American Journal of Clinical Nutrition” found that sleep extension in research subjects was associated with reduced “free sugar” intake (sugar added to any food or drink, as opposed to natural sugars) by over nine grams per day.
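For scale, a rough back-of-envelope conversion (assuming the standard 4 kcal per gram of sugar; the yearly projection is my extrapolation, not the study's) shows what a nine-gram daily reduction adds up to:

```python
# Back-of-envelope calories saved by cutting ~9 g of free sugar per day.
# Assumes the standard 4 kcal per gram of carbohydrate; figures are rough.
SUGAR_KCAL_PER_GRAM = 4

daily_grams_cut = 9
daily_kcal_cut = daily_grams_cut * SUGAR_KCAL_PER_GRAM  # 36 kcal/day
yearly_kcal_cut = daily_kcal_cut * 365                  # ~13,000 kcal/year

print(daily_kcal_cut, yearly_kcal_cut)  # 36 13140
```

A modest daily change, but one that compounds over a year without any conscious dieting effort.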

“We started with people who weren’t sleeping enough and taught them to sleep more,” explains lead researcher Haya Al Khatib, nutritional scientist and guest lecturer at King’s College London. “Our primary objective was to see if it is possible to get people to sleep more, but we also collected diet data. We found that, yes, it is possible to get people to achieve more and better sleep, but also found that people who slept more tended to eat less sugar.”

Haya Al Khatib, nutritional scientist and guest lecturer at King's College in London, England.

Al Khatib cautions that it was a small study that wasn’t really “powered” to collect all of the diet data. The first order of business had been to establish that you could help people get more rest, which they did thanks to “sleep hygiene”: consistent eating and bedtime schedules, making the room dark, and removing all electronic devices, including the television, from the bedroom. Other research is planned to see if the quality of our sleep can affect more than just sugar consumption.


A decreased sugar intake wouldn’t have been a huge surprise to anyone closely following the emerging field of sleep research, given how much we now know about the correlation between poor sleep and heart disease, diabetes, depression and certain cancers.

“Sleep is incredibly powerful, specifically when it comes to risk for overweight and obesity,” says Greg Wells, Toronto-area author and performance physiologist. “And we know that, when we sleep, we regulate two specific hormones, leptin and ghrelin, that control our appetite and satiety—how full we feel. When you get a good night’s sleep, those hormones are regulated and that enables you to make good decisions around food.”

Greg Wells, author and sleep expert.

Leptin reduces our appetite; ghrelin increases it. If these “hunger hormones” (sometimes called “starvation hormones”) are out of whack, they signal to our brains that we are starving, causing intense cravings for sugar and high-fat food. Wells says the signals can be so powerful that our food choices end up essentially out of our control.

That’s easy to understand on an intuitive level. Most days, my breakfast is berries and pecans, which I eat after my daily espresso (no dairy, no sugar). None recently, but back when I used to go places, if I had a super-early flight, I’d make a bee-line for the first Starbucks after the security gate, to order the fattiest breakfast sandwich available and a giant frothy beverage with all of the sugar packets.

Al Khatib says that, since sleep and diet is an emerging field, the precise mechanisms aren’t all fleshed out yet and there are still several hypotheses as to why we compensate for lost sleep with high-calorie food, but that we should probably also factor in stress.

“If you’re not sleeping enough, you’re less resilient to stress the next day and, when you’re more stressed, that impacts your eating habits,” she explains. “So, it’s not just about food, it’s our stress levels, energy levels and our ability to move around the next day because you probably won’t exercise.”

And that all will lead to a poor night’s sleep, since exercise will help you get a good night’s rest and fatty food will do the exact opposite. It’s sort of like the shame spiral from “The Simpsons,” except for sleep or, as Wells refers to it, “the ripple effect.”

https://www.cnbc.com/2021/01/08/openai-shows-off-dall-e-image-generator-after-gpt-3.html

Why everyone is talking about an image generator released by an Elon Musk-backed A.I. lab 

Published Fri, Jan 8 2021, 6:42 AM EST | Updated 7:40 AM EST

By Sam Shead @Sam_L_Shead

KEY POINTS

  • OpenAI has trained a piece of software, known as Dall-E, to generate images from short text captions.
  • It demoed how the AI could create armchairs in the shape of avocados and baby daikon radishes wearing tutus.
  • Dall-E comes just a few months after OpenAI announced it had built a text generator called GPT-3.
SpaceX founder Elon Musk looks on at a post-launch news conference after the SpaceX Falcon 9 rocket, carrying the Crew Dragon spacecraft, lifted off on an uncrewed test flight to the International Space Station from the Kennedy Space Center in Cape Canaveral, Florida, March 2, 2019.

Mike Blake | Reuters

Armchairs in the shape of avocados and baby daikon radishes wearing tutus are among the quirky images created by a new piece of software from OpenAI, an Elon Musk-backed artificial intelligence lab in San Francisco.

OpenAI trained the software, known as Dall-E, to generate images from short text captions. The 12-billion-parameter model was trained on pairs of images and their captions, which were found on the internet.

The lab said Dall-E — a portmanteau of Spanish surrealist artist Salvador Dalí and Wall-E, the small animated robot from the Pixar movie of the same name — had learned how to create images for a wide range of concepts.

OpenAI showed off some of the results in a blog post published on Tuesday. “We’ve found that it [Dall-E] has a diverse set of capabilities, including creating anthropomorphized versions of animals and objects, combining unrelated concepts in plausible ways, rendering text, and applying transformations to existing images,” the company wrote.

Dall-E is built on a neural network, which is a computing system vaguely inspired by the human brain that can spot patterns and recognize relationships between vast amounts of data.

While neural networks have generated images and videos before, Dall-E is unusual because it generates them directly from short text inputs, which most earlier systems did not.

Synthetic videos and images have become so sophisticated in recent years that it is now hard for humans to distinguish what is real from what is computer-generated. Generative adversarial networks (GANs), which pit two neural networks against each other, have been used to create fake videos of politicians, for example.

OpenAI acknowledged that Dall-E has the “potential for significant, broad societal impacts,” adding that it plans to analyze how models like Dall-E “relate to societal issues like economic impact on certain work processes and professions, the potential for bias in the model outputs, and the longer term ethical challenges implied by this technology.”

GPT-3 successor

Dall-E comes just a few months after OpenAI announced it had built a text generator called GPT-3 (Generative Pre-trained Transformer 3), which is also underpinned by a neural network.

The language-generation tool is capable of producing human-like text on demand and it became relatively famous for an AI program when people realized it could write its own poetry, news articles and short stories.

“Dall-E is a Text2Image system based on GPT-3 but trained on text plus images,” Mark Riedl, associate professor at the Georgia Tech School of Interactive Computing, told CNBC.

“Text2image is not new, but the Dall-E demo is remarkable for producing illustrations that are much more coherent than other Text2Image systems I’ve seen in the past few years.”

OpenAI has been competing with firms like DeepMind and the Facebook AI Research group to build general purpose algorithms that can perform a wide range of tasks at human-level and beyond.

Researchers have built AIs that can play complex games like chess and the Chinese board game of Go, translate one human language to another, and spot tumors in a mammogram. But getting an AI system to show genuine “creativity” is a big challenge in the industry.

Riedl said the Dall-E results show it has learned how to blend concepts coherently, adding that “the ability to coherently blend concepts is considered a key form of creativity in humans.”

“From the creativity standpoint, this is a big step forward,” Riedl added. “While there isn’t a lot of agreement about what it means for an AI system to ‘understand’ something, the ability to use concepts in new ways is an important part of creativity and intelligence.”

Neil Lawrence, the former director of machine learning at Amazon Cambridge, told CNBC that Dall-E looks “very impressive.”

Lawrence, who is now a professor of machine learning at the University of Cambridge, described it as “an inspirational demonstration of the capacity of these models to store information about our world and generalize in ways that humans find very natural.”

He said: “I expect there will be all sorts of applications of this type of technology, I can’t even begin to imagine. But it’s also interesting in terms of being another pretty mind-blowing technology that is solving problems we didn’t even know we actually had.”

‘Doesn’t advance the state of AI’

Not everyone is that impressed by Dall-E, however.

Gary Marcus, an entrepreneur who sold a machine-learning start-up to Uber in 2016 for an undisclosed sum, told CNBC that it’s interesting but it “doesn’t advance the state of AI.”

He also pointed out that it hasn’t been open-sourced and the company hasn’t yet published an academic paper on the research.

Marcus has previously questioned whether some of the research published by rival lab DeepMind in recent years should be classified as “breakthroughs.” 

OpenAI was set up as a non-profit with a $1 billion pledge from a group of founders that included Tesla CEO Elon Musk. In February 2018, Musk left the OpenAI board, but he continues to donate to and advise the organization.

OpenAI became a for-profit company in 2019 and raised another $1 billion from Microsoft to fund its research. GPT-3 is set to be OpenAI’s first commercial product, and Reddit has signed up as one of the first customers.

https://www.newscientist.com/article/2264168-crispr-doubles-lifespan-of-mice-with-rapid-ageing-disease-progeria/

CRISPR doubles lifespan of mice with rapid ageing disease progeria

HEALTH 6 January 2021

By Michael Le Page

A cell from a person with progeria

CRISPR gene editing has been used to more than double the lifespan of mice engineered to have the premature ageing disease progeria, also greatly improving their health.

The results far surpassed expectations. Progeria affects many different organs in the body, and the team behind the work didn’t expect that correcting the mutation in a relatively low proportion of cells – 10 to 60 per cent – would have such a big effect. “We were quite amazed,” says David Liu at Harvard University.

Hutchinson-Gilford progeria syndrome is a rare condition caused when a mutation, which probably took place in the testes or ovaries of a child’s parents, results in a single DNA letter change in one of the two copies of the gene for the lamin A protein. This leads to the production of an abnormal protein called progerin that interferes with cell division and causes many symptoms of premature ageing. The average lifespan of children with progeria is 14 years.

Conventional gene therapy, which involves adding genes, cannot help. People with progeria still have one healthy copy of the lamin A gene – the problem is the mutant progerin protein.

The standard form of CRISPR gene editing, which involves cutting DNA with the Cas9 protein, can be used to disable the mutant gene. The trouble is that it often disables the healthy copy too, as well as causing other unwanted changes.

Liu’s team has been modifying the Cas9 protein so instead of cutting DNA, it changes one DNA letter to another, a process known as base editing. He and his colleagues have now used a CRISPR base editor to correct the single-letter change that causes almost all cases of progeria, first in skin cells taken from a person with progeria and then in mice with a human version of the lamin A gene.
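The key idea, swapping a single letter in place rather than cutting the strand, can be sketched in a few lines. This toy is purely illustrative: the sequence, position, and letters below are hypothetical, not real LMNA coordinates, and real base editors rewrite DNA chemically rather than via string replacement. The direction of the fix (a mutant T reverted to the healthy C) matches the C-to-T progeria mutation the article describes.

```python
# Toy illustration of single-letter base editing (not the actual editor chemistry).
# Sequence and position are hypothetical, for illustration only.

def base_edit(sequence, position, expected, corrected):
    """Replace one letter at `position` if it matches the expected mutant base."""
    if sequence[position] != expected:
        raise ValueError("site does not carry the expected mutation")
    # Only the targeted letter changes; the rest of the sequence is untouched,
    # unlike Cas9 cutting, which can disrupt the healthy copy too.
    return sequence[:position] + corrected + sequence[position + 1:]

mutant = "GGTACCTTGA"                # hypothetical stretch with a mutant T at index 2
healthy = base_edit(mutant, position=2, expected="T", corrected="C")
print(healthy)  # GGCACCTTGA
```

The guard clause mirrors why base editing is attractive here: the edit is conditional on finding the exact mutant base, so the healthy copy of the gene is left alone.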

Read more: The powerhouses inside cells have been gene-edited for the first time

A virus carrying the genes for the base editor was injected into the blood of 2-week-old mice – roughly equivalent to 5-year-old children, says Liu – with the progeria mutation. A single injection boosted the median lifespan from 215 to 510 days, and the treated mice were also far more active.

Because the mice had the human gene, exactly the same approach could be used in human trials. However, Liu’s team has already developed even more efficient base editors.

In November 2020, the US Food and Drug Administration approved the first-ever drug for treating progeria. In trials, it increased lifespan by an average of 2.5 years over the maximum follow-up time of 11 years. Liu thinks combining this drug with the CRISPR base editing will work well.

The findings also boost hopes that many other conditions could be treated through base editing. Half of all known disease-causing mutations involve a single-letter change, most of which can be corrected with existing base editors, says Liu.

Journal reference: Nature. DOI: 10.1038/s41586-020-03086-7


https://www.theverge.com/2021/1/6/22216648/amazon-sleep-tracking-alexa-brahms-apnea-radar-device

Amazon reportedly developing radar-equipped sleep apnea tracker

It hopes the Alexa device will monitor for sleep apnea.

By Jon Porter @JonPorty | Jan 6, 2021, 6:16am EST


Amazon is developing a new Alexa-powered device that can track and monitor for signs of sleep apnea using radar, according to a new report from Business Insider. The palm-sized device is reportedly designed to sit on a bedside table and use millimeter-wave radar to sense your breathing, keeping an eye out for the interruptions associated with the sleep disorder.

The idea of using radar to monitor sleep isn’t new, and at least one other high-profile company has attempted to commercialize the technology. Back in 2014, Nintendo announced a “non-wearable” device that could track sleep via radio waves. However, less than two years later Nintendo said it wasn’t confident the device could become a viable product, and it was never released. Last month we also saw OnePlus announce a concept phone that uses mmWave radar to monitor breathing.

Amazon’s project is apparently being developed under the code name “Brahms,” after the German composer of the famous lullaby, and is the work of an internal Amazon team built up over the past year. In its current form, the device reportedly resembles a “standing hexagonal pad connected to a metal wire base,” Business Insider notes. Amazon also reportedly plans to use its machine-learning and cloud technology to understand other sleep disorders beyond sleep apnea.

When contacted for comment, a spokesperson for Amazon told The Verge that the company does not comment on rumors and speculation.

If accurate, Brahms represents Amazon’s latest push into health tech. Last year the company released its Halo fitness tracker, a $99.99 wearable device that scans the wearer’s body and voice and is designed to help you improve your health. Amazon stresses that Halo is “not a medical device.” The company has also launched a Pharmacy service for delivering prescription medication.

At this point it’s almost easier to list the objects Amazon hasn’t tried to build its voice assistant into. Over the past few years Alexa has appeared in everything from speakers (obviously) to glasses, rings, and even microwaves. Soon, we might be able to add a sleep tracker to that list.

Update January 6th, 11:24AM ET: Updated with Amazon’s response.

https://www.cnnphilippines.com/world/2021/1/7/Better-Brain-at-Any-Age.html


Dr. Sanjay Gupta: Memory fades as we age. But it doesn’t have to

By Dr. Sanjay Gupta, CNN Chief Medical Correspondent

Published Jan 7, 2021 7:07:43 AM

Editor’s note: CNN Chief Medical Correspondent Dr. Sanjay Gupta is a practicing neurosurgeon and the author of the new book “Keep Sharp: Build a Better Brain at Any Age.”

(CNN) — Ten months into the pandemic, I turned 51 and did the math: I’m entering the final third of my life. I know that sounds grim, and I am hoping to get more time than that, but I often do these mental calculations because the clock of life inspires me to make the most of the years that remain. That constant tick-tock reminds me to fill the final decades with invigorating experiences to bank in my inner black box — a delightful cache of memories to replay over and over in my mind like a favorite movie.

In order for my plan to work, however, I have to invest in my brain now to ensure that it stays sharp into ripe old age, even if my body starts to betray me. Accomplishing this is well within my reach, and starts with a basic truth: Unlike most any other organ in the body, our brains are not pre-ordained to wither away, lose power, blunt their edge or, worst of all, become forgetful.

Memories make us feel alive, capable and valuable. They help us feel comfortable with our surroundings, connect the past with the present, and provide a framework for the future. Truth is, this past year has resulted in a decade’s worth of memories for me. Besides continuing to operate at the hospital, I have been reporting around the clock from my windowless home basement on every aspect of the novel coronavirus — how it moves, the molecular keys it uses to gain entry, and what havoc it causes after entering the cells of the human body. And, when it became clear that Covid-19 was causing neurological deficits, from minor ones like temporary loss of smell and brain fog to more serious symptoms of a stroke, my worlds of brain surgeon and medical correspondent collided.

Over the last year, I have seen a movement gather steam unlike I have ever seen before. Within months of first identifying this novel virus, a global consortium of research scientists was established to study the relationship between Covid-19 and the brain. Among other things, these scientists are also re-examining a provocative idea: the possibility that certain infections raise the risk of cognitive decline and even the most common, dreaded form of dementia, Alzheimer’s disease. It is a frightening prospect that should also motivate us to redouble our efforts to control overall risk factors for dementia and make our brains as resilient and sharp as possible. And the good news is that we have the tools to do this.

In my role as a doctor and public educator, I’ve noticed that people tend to have a limited view of what their brains are capable of doing as they age, and the power they have to make themselves better, faster, fitter and, yes, sharper. I think because the brain is encased in a hard shell of bone, many assume it is a black box, only measured by its inputs and outputs. Immutable, impenetrable, indecipherable and unable to be changed or improved. Up until recently, we used to think the brain was largely fixed with a certain number of brain cells, and as the years wear on, the neurons die off, the networks dim, and things like memory and processing speed take a hit.

But what if I told you that most of what we believed about the brain at the beginning of this century has already been proven wrong or incomplete? And that memory loss and brain atrophy are not inevitable?

A lot has happened in brain medicine since I got started in the field more than 20 years ago. Back then, the idea of improving my own brain seemed like a misguided quest. Most people ages 34 to 75 understand the vital importance of brain health but also have no idea how to make their brains healthier or realize that it is even possible. They believe their fate is baked into their DNA and nothing can be done to change that. They would have a hard time accepting what countless studies have shown: that the brain simply prefers a body in motion, and that it doesn’t take much to reap enormous benefits. They would think I was being Pollyannish to suggest a mere 2 minutes of activity every hour can boost brain health more so than anything else they could possibly be doing right now. Their entire mindset would shift. Exercise wouldn’t necessarily be thought of as the cure, but rather inactivity as the disease. Just move. Every time you are about to sit, ask yourself: Could I stay standing instead?

While we enjoy lower rates of cardiovascular disease and certain cancers than a generation ago, the numbers are going in the other direction when it comes to brain-related impairment. One new case of dementia will soon be diagnosed every 4 seconds, and it will be the most common neurodegenerative disorder in the country. It’s time to change this trend. In recent years, I traveled the world and relied on my training in neuroscience and my reporter’s mentality to understand how to make that happen.

As things stand now, 47 million Americans have some evidence of preclinical Alzheimer’s disease, which means their brains show the signs of dementia but they feel and think fine. It’s like an approaching storm that is still way off in the distance, taking decades before memory, thinking and behavior are affected. This pre-clinical time, however, is a golden window during which we can significantly optimize our brains: improve their functionality, boost their neuronal networks, stimulate the growth of new neurons and help stave off age-related brain illnesses.

Dementia is not a normal part of aging and older people are not doomed to forget things. Typical age-related changes in the brain are not the same as changes that are caused by disease. The former can be slowed down and the latter can be avoided. According to the best available evidence, significant upgrades can be made to the brain within just 12 weeks. There are habits you should develop and make your own, while also learning what to avoid.

A quarter of Americans over 50 take “brain-boosting” supplements, but after two years of investigating, I could find little proof they improve memory, sharpen attention and focus, or prevent cognitive decline or dementia, no matter what the manufacturers claim. It is true that absence of evidence does not necessarily mean evidence of absence. Even in well-constructed trials, however, the same result came back repeatedly. A large 2020 study led by Harvard further showed that multivitamin or mineral supplements don’t improve overall health and any perceived benefits “may be all in the mind.” Unless you are deficient in a particular nutrient, vitamins do not take the place of real food, and some can even be harmful. I tell patients to follow the SHARP dietary protocol: Slash sugar; Hydrate (being dehydrated by even a few ounces can affect cognition); Add more omega-3 fatty acids from foods like cold-water fish, nuts, and seeds; Reduce portions; and Plan ahead. I also tell them to spend their money on something proven to ultimately help the brain, like a comfortable pair of shoes for walking or a new pillow for a good night’s sleep.

After years of losing sleep over my globetrotting reporting of natural disasters and wars, I prioritize slumber now and sweat it out regularly because I know what the science says. Restorative sleep and exercise are antidotes to mental decline. They are matchless medicine we can’t get elsewhere. Sleep tidies memory while physical activity pumps out substances in the brain that act like a fertilizer on brain cells for their growth and survival. This allows us to continually learn new skills and explore new hobbies that are stimulating, de-stressing, and rewarding — all good things for staying sharp.

Surprisingly, leisure activities like gardening, playing cards, attending cultural events and using a computer are not as protective against dementia as we once thought. A 2020 study found no association between actively engaging in leisure activities at age 56 and the incidence of dementia over the following 18 years. And completing crossword puzzles may not keep your brain young either. Unfortunately, crosswords flex only one part of your brain, which is word finding (also called fluency). They might help you excel at that, but they won’t necessarily keep your brain sharp in any general sense.

A better strategy, in addition to playing mind-bending games on your own, is to engage with others and work on your relationships. Another recent finding in science has been that the strength of our relationships is much more essential to our health — and healthspan, or how long we live in good health — than previously recognized. Instead of spending time passively using a computer screen to binge-watch shows or scroll aimlessly through the internet, use that time in virtual chats with friends and family. As I like to say, connection for protection, even when physically distanced. And when you can see people in person, focus on eye contact; it’s more important than ever to ease the stress of masked faces. As loneliness researcher Stephanie Cacioppo told me, the eyes reflect more authentic emotion as well.

If you put it all together, one of the best things you can do for your brain: Take a brisk walk with a close friend and discuss your problems.

These strategies may sound extraordinarily simple and perhaps quaint, but they work. As someone who has had a love affair with the brain since I was a teenager, I admit that I’m biased, but I wholeheartedly believe that all roads to health and happiness start with the brain. Your brain is command central for itself and the body, and it is possible to make your brain sharper than it has ever been. Do not accept the false idea that brain decline is unavoidable.

While many organs do wither and decline with age, the brain is different, and it is well within your reach to stay cognitively intact into old age. I have seen it over and over again in patients I’ve treated and people I’ve encountered in my work as a journalist.

Like you, I won’t forget this past year. The combination of both a public health crisis and a tumultuous election has pushed a lot of limits in our mental wellbeing. It’s moments like these, however, when anxieties and uncertainties run high, that I take comfort in knowing there are things I can control. No one can predict the future, but each one of us can do our part to plan for a long and mentally sharp, resilient one.

https://medicalxpress.com/news/2021-01-brain-mechanism-underlying-vision-revealed.html


A brain mechanism underlying ‘vision’ in the blind is revealed

by Weizmann Institute of Science

A brain mechanism underlying 'vision' in the blind is revealed
The visual centers in those seeing a film or imagining as instructed had similar timing in their brain activity, while those experiencing spontaneous hallucinations showed a gradual increase in slow fluctuations. Credit: Weizmann Institute of Science

Some people have lost their eyesight, but they continue to ‘see.’ This phenomenon, a kind of vivid visual hallucination, is named after the Swiss doctor Charles Bonnet, who described in 1769 how his completely blind grandfather experienced vivid, detailed visions of people, animals and objects. Charles Bonnet syndrome, which appears in those who have lost their eyesight, was investigated in a study led by scientists at the Weizmann Institute of Science. The findings, published today in Brain, suggest a mechanism by which normal, spontaneous activity in the visual centers of the brain can trigger visual hallucinations in the blind.

Prof. Rafi Malach and his group members of the Institute’s Neurobiology Department research the phenomenon of spontaneous ‘resting-state’ fluctuations in the brain. These mysterious slow fluctuations, which occur all over the brain, take place well below the threshold of consciousness. Despite a fair amount of research into these spontaneous fluctuations, their function is still largely unknown. The research group hypothesized that these fluctuations underlie spontaneous behaviors. However, truly unprompted behaviors are typically difficult to investigate in a scientific manner, for two reasons: for one, instructing people to behave spontaneously is usually a spontaneity-killer; for another, it is difficult to separate the brain’s spontaneous fluctuations from other, task-related brain activity. The question was: How could they isolate a case of truly spontaneous, unprompted behavior in which the role of spontaneous brain activity could be tested?

Individuals experiencing Charles Bonnet visual hallucinations presented the group with a rare opportunity to investigate their hypothesis. This is because in Charles Bonnet syndrome, the hallucinations appear at random, in a truly unprompted fashion, and the visual centers of the brain do not process outside stimuli (because these individuals are blind), and are thus activated spontaneously. In a study led by Dr. Avital Hahamy, a former research student in Malach’s lab who is now a postdoctoral research fellow at University College London, the relation between these hallucinations and the spontaneous brain activity has indeed been unveiled.

The researchers first invited to their lab five people who had lost their sight and reported occasionally experiencing clear visual hallucinations. These participants’ brain activity was measured using an fMRI scanner while they described their hallucinations as these occurred. The scientists then created movies based on the participants’ verbal descriptions, and they showed these movies to a sighted control group, also inside the fMRI scanner. A second control group consisted of people who had lost their sight but did not experience visual hallucinations; they were asked to imagine similar visual images while in the scanner.

The same visual areas in the brain were active in all three groups—those that hallucinated, those that watched the films and those creating imagery in their minds’ eye. But the researchers noted a difference in the timing of the neural activity between these groups. In both the sighted participants and those in the imagery group, the activity was seen to take place in response either to visual input or to the instructions set in the task. But in the group with Charles Bonnet syndrome, the scientists observed a gradually increasing wave of activity, reminiscent of the slow spontaneous fluctuations, that emerged just before the onset of the hallucinations. In other words, the hallucinations were not the result of external stimuli (e.g., sensory images or instructions to imagine specific things), but were rather evoked internally by the slow, spontaneous, brain activity fluctuations.

“Our research clearly shows that the same visual system is active when we see the world outside of us, when we imagine it, when we hallucinate, and probably also when we dream,” says Malach. “It also exemplifies the creative power of vision and the contribution of spontaneous brain activity to unprompted and creative behaviors,” he adds.

In addition to the scientific value of the work, Hahamy hopes it may raise awareness of Charles Bonnet syndrome, which can be frightening to those who experience it. “These individuals may keep their visual hallucinations a secret—even from doctors and family—and we want them to understand that these visions are a natural product of a healthy brain, in which the visual centers remain intact, even if the eyes have ceased to send them sensory input,” she says.




Journal information: Brain. Provided by Weizmann Institute of Science.

https://www.nature.com/articles/d41586-020-03574-w


Machine learning reveals the complexity of dense amorphous silicon

Transitions between amorphous forms of solids and liquids are difficult to study. Machine learning has now provided fresh insight into pressure-induced transformations of amorphous silicon, opening the way to studies of other systems.

Paul F. McMillan


Machine-learning approaches are being developed to produce accurate simulations of the structure and chemical bonding of disordered solids and liquids, modelling a sufficient number of atoms to enable direct comparison with experimental data. Writing in Nature, Deringer et al.1 report their use of this approach to probe the structure of amorphous silicon under compression, as the element transforms from semiconducting to metallic states. Their work demonstrates that the structural transformations of amorphous forms of materials can take place much more gradually than those between crystalline phases, and can involve the formation of nanostructured domains and localized atomic arrangements that are not found in any of the crystalline states.

Silicon is one of a small class of elements whose density increases on melting2. This unusual behaviour is shared with crystalline ice, which floats on top of liquid water. Such unexpected reversal of solid and liquid densities has been linked to a phenomenon called polyamorphism — the ability of a substance to exist as different amorphous phases that have distinct structures and properties.

Liquid silicon is a metallic electrical conductor, whereas solid silicon is a semiconductor in ambient conditions, a fact that underpins its use in technologies ranging from computer chips to solar panels. The solid can adopt either a crystalline or a structurally disordered amorphous form at room temperature and pressure; in both cases, each atom bonds to four others in a tetrahedral arrangement. However, both the crystalline and amorphous solids transform into denser structures under compression, a process that is accompanied by a transition to metallic conducting behaviour.

In the 1970s, calorimetric experiments were carried out to study the energy changes that accompany the transformations between amorphous and crystalline forms of silicon during heating and cooling3. Analysis of the results suggested that two amorphous forms of silicon exist, with a phase transition between them. Simulations have since suggested that silicon transforms from a low-density amorphous (LDA) phase, in which the coordination number — the number of neighbouring atoms around each silicon atom — is four, to a high-density amorphous (HDA) phase whose structure is similar to that of metallic liquid silicon3,4. The LDA–HDA transition has been observed both during rapid heating of the amorphous solid and on compression of amorphous silicon at ambient temperature5–7.

Structural transformations between crystalline phases of silicon are readily observed using diffraction methods8, but those involving the amorphous state are more difficult to study because they occur less abruptly as density increases. This is where computational simulations come into play: they can visualize the arrangements of atoms in different phases, and predict and explain the resulting properties. The main challenge is always to model enough atoms to enable the comparison of simulation results with macroscopic data for real samples, while maintaining sufficient accuracy to describe the arrangements and bonding of the atoms.

Computational simulations are also limited by their characteristic timescales when studying phase transitions. Currently available computational resources often restrict simulations based on accurate quantum-mechanical calculations to systems that incorporate a few tens to hundreds of atoms, which are typically examined over timescales of up to several femtoseconds (1 femtosecond is 10⁻¹⁵ s). Simulations that use less computationally demanding modelling strategies can be extended to several hundreds or a few thousands of atoms. However, the accuracy of the predictions of the structural and physical properties of the material being investigated is sacrificed as the system size increases, or as the simulation time is extended.

Deringer et al. now describe a machine-learning approach that gives unprecedented levels of information about the structure and bonding energetics for a system of 100,000 silicon atoms as it is cooled from the liquid state and compressed to pressures up to 200,000 atmospheres (20 gigapascals). This represents a great increase in the number of atoms that can be modelled (see Extended Data Fig. 1 of the paper1). The accuracy of the method approaches those of the best simulations carried out from first principles using quantum-mechanical calculations.

Crucially, the modelled system was large enough to reveal the metastable aggregation of amorphous clusters of atoms. The simulations also uncovered crystallization phenomena that could not be observed using simulations with smaller numbers of atoms, or by using less-accurate models to describe the atomic interactions. The findings closely reproduce the temperatures and pressures observed experimentally for the macroscopic melting of silicon, for the other phase transitions, and for the onset of metallic behaviour.

The simulations show that the structural changes that occur on compression are more complex than was previously realized (Fig. 1). The atoms do not transform simultaneously between an arrangement with a coordination number of four to a phase that has a higher coordination number, as occurs in transitions between crystalline phases of silicon8. Instead, the amorphous structure evolves more gradually to produce high-density nanoscale domains of high coordination number, which develop within the original tetrahedral LDA structure. Linkages form between the HDA domains as the density increases, producing a material that exhibits bulk metallic conduction. It might be possible to modify this metallic conductivity in the real world by applying directionally oriented stresses to compressed silicon.

Figure 1 | Large-scale simulations of amorphous silicon. Deringer et al.1 used machine learning to simulate a system of 100,000 silicon atoms at increasing pressures and at 500 kelvin, to predict the resulting changes in structure and properties of the element. The colour of each atom indicates the number of other atoms bonded to it (the coordination number, N). The coordination number is four at ambient pressure. a, At 13.5 gigapascals, most atoms still have a coordination number of four, corresponding to a low-density semiconducting phase of silicon, but high-density regions of atoms with higher coordination numbers and metal-like conductivity have appeared. b, At 15 GPa, these regions predominate, and have coalesced to form a high-density metallic phase. c, At 17 GPa, domains of a new, very-high-density amorphous phase have started to form within the material. d, At 20 GPa, the amorphous structure has rapidly equilibrated to produce a crystalline metallic phase that has a simple hexagonal structure. (Images from Extended Data Fig. 4 of ref. 1.)
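The per-atom coordination numbers that colour Figure 1 are simple to compute once the atomic positions are known. The snippet below is an illustrative toy rather than the authors' method: `coordination_numbers` is a hypothetical helper that uses open boundaries and an arbitrary cutoff, whereas a real simulation would use periodic boundary conditions and a physically motivated cutoff distance.

```python
import numpy as np

def coordination_numbers(positions, cutoff):
    """Count the neighbours within `cutoff` of each atom (open boundaries)."""
    # Pairwise distance matrix: dist[i, j] = |r_i - r_j|.
    diff = positions[:, None, :] - positions[None, :, :]
    dist = np.linalg.norm(diff, axis=-1)
    # An atom is not its own neighbour, so exclude zero distances.
    neighbours = (dist < cutoff) & (dist > 0.0)
    return neighbours.sum(axis=1)

# Toy configuration: five atoms in a line, spaced 1.0 apart, cutoff 1.5.
pos = np.array([[0.0, 0, 0], [1, 0, 0], [2, 0, 0], [3, 0, 0], [4, 0, 0]])
print(coordination_numbers(pos, 1.5))  # → [1 2 2 2 1]
```

In the figure, atoms with N = 4 mark the tetrahedral low-density phase, while higher values of N flag the emerging high-density metallic domains; the same neighbour count, evaluated per atom, is all the colouring requires.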

The authors’ simulations also show that, as the system is compressed further, it quickly collapses by about 25% of its volume, thus producing a very-high-density metallic state. This crystallizes rapidly on the nanosecond timescale of the simulations, to form nanodomains of a metallic phase of silicon.

The results have implications for our understanding of how polyamorphic transitions might emerge between different liquid phases and glassy structures more generally9. Deringer and colleagues’ machine-learning approach has allowed them to accurately simulate amorphous silicon structures on a picosecond timescale (1 picosecond is 10⁻¹² s), and over temperature and pressure ranges that are relevant to transitions between liquids and crystal phases, between liquids and glasses, and between two amorphous phases. Their findings therefore offer the chance to study transitions of a wide array of amorphous materials that have previously been difficult to probe.

The authors’ approach could also now be used to explore the possibilities of transforming amorphous silicon, or ‘doped’ materials in which silicon contains small amounts of other elements, to produce nanostructures that contain metallic and semiconducting domains. Such nanostructures could open up many opportunities for developing new technology, such as in electronic communication, data processing and energy harvesting.

Nature 589, 22-23 (2021). doi: https://doi.org/10.1038/d41586-020-03574-w

References

  1. Deringer, V. L. et al. Nature 589, 59–64 (2021).
  2. McMillan, P. F. J. Mater. Chem. 14, 1506–1512 (2004).
  3. Sastry, S. & Angell, C. A. Nature Mater. 2, 739–743 (2003).
  4. Durandurdu, M. & Drabold, D. A. Phys. Rev. B 64, 014101 (2001).
  5. Aptekar, L. I. Sov. Phys. Dokl. 24, 993–995 (1979).
  6. Ponyatovsky, E. G. & Barkalov, O. I. Mater. Sci. Rep. 8, 147–191 (1992).
  7. McMillan, P. F., Wilson, M., Daisenberger, D. & Machon, D. Nature Mater. 4, 680–684 (2005).
  8. McMahon, M. I. & Nelmes, R. J. Chem. Soc. Rev. 35, 943–963 (2006).
  9. Machon, D., Meersman, F., Wilding, M. C., Wilson, M. & McMillan, P. F. Progr. Mater. Sci. 61, 216–282 (2014).

https://techxplore.com/news/2021-01-machine-paper-photonic-ai.html

Machine learning at the speed of light: New paper demonstrates use of photonic structures for AI

by Maggie Pavlick, University of Pittsburgh

Machine learning at the speed of light: New paper demonstrates use of photonic structures for AI
Illustration showing parallel convolutional processing using an integrated photonic tensor core. New research published this week in the journal Nature examines the potential of photonic processors for artificial intelligence applications. Credit: XVIVO

As we enter the next chapter of the digital age, data traffic continues to grow exponentially. To further enhance artificial intelligence and machine learning, computers will need the ability to process vast amounts of data as quickly and as efficiently as possible.

Conventional computing methods are not up to the task, but in looking for a solution, researchers have seen the light—literally.

Light-based processors, called photonic processors, enable computers to complete complex calculations at incredible speeds. New research published this week in the journal Nature examines the potential of photonic processors for artificial intelligence applications. The results demonstrate for the first time that these devices can process information rapidly and in parallel, something that today’s electronic chips cannot do.

“Neural networks ‘learn’ by taking in huge sets of data and recognizing patterns through a series of algorithms,” explained Nathan Youngblood, assistant professor of electrical and computer engineering at the University of Pittsburgh Swanson School of Engineering and co-lead author. “This new processor would allow it to run multiple calculations at the same time, using different optical wavelengths for each calculation. The challenge we wanted to address is integration: How can we do computations using light in a way that’s scalable and efficient?”

The fast, efficient processing the researchers sought is ideal for applications like self-driving vehicles, which need to process the data they sense from multiple inputs as quickly as possible. Photonic processors can also support applications in cloud computing, medical imaging, and more.

“Light-based processors for speeding up tasks in the field of machine learning enable complex mathematical tasks to be processed at high speeds and throughputs,” said senior co-author Wolfram Pernice at the University of Münster. “This is much faster than conventional chips which rely on electronic data transfer, such as graphic cards or specialised hardware like TPUs (Tensor Processing Unit).”

The research was conducted by an international team of researchers, including Pitt, the University of Münster in Germany, the Universities of Oxford and Exeter in England, the École Polytechnique Fédérale (EPFL) in Lausanne, Switzerland, and the IBM Research Laboratory in Zurich.

Light-based processors boost machine-learning processing
Schematic representation of a processor for matrix multiplications that runs on light. Credit: University of Oxford

The researchers combined phase-change materials—the storage material used, for example, on DVDs—and photonic structures to store data in a nonvolatile manner without requiring a continual energy supply. This study is also the first to combine these optical memory cells with a chip-based frequency comb as a light source, which is what allowed them to calculate on 16 different wavelengths simultaneously.
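To see why computing on 16 wavelengths at once matters: each wavelength channel can carry an independent input vector through the same matrix of stored weights, so one optical pass performs many matrix-vector products in parallel. The sketch below is a purely numerical analogy with made-up dimensions, not a model of the optical hardware; a single batched matrix multiplication stands in for the wavelength-parallel pass.

```python
import numpy as np

rng = np.random.default_rng(seed=0)
n_wavelengths, vec_len, n_outputs = 16, 64, 8

# Each of the 16 wavelength channels carries its own input vector...
inputs = rng.random((n_wavelengths, vec_len))
# ...through one shared weight matrix (the stored memory-cell states).
weights = rng.random((vec_len, n_outputs))

# In the photonic core all 16 matrix-vector products happen at once;
# here one batched matmul stands in for that parallelism.
outputs = inputs @ weights
print(outputs.shape)  # → (16, 8)
```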

In the paper, the researchers used the technology to create a convolutional neural network that would recognize handwritten numbers. They found that the method granted never-before-seen data rates and computing densities.

“The convolutional operation between input data and one or more filters—which can be a highlighting of edges in a photo, for example—can be transferred very well to our matrix architecture,” said Johannes Feldmann, graduate student at the University of Münster and lead author of the study. “Exploiting light for signal transference enables the processor to perform parallel data processing through wavelength multiplexing, which leads to a higher computing density and many matrix multiplications being carried out in just one timestep. In contrast to traditional electronics, which usually work in the low GHz range, optical modulation speeds in the 50 to 100 GHz range can be achieved.”
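Feldmann’s point that convolution “can be transferred very well” to a matrix architecture rests on a standard identity: unrolling every kernel-sized patch of the input into one row of a matrix (often called im2col) turns the whole convolution into a single matrix multiplication, which is exactly the operation the tensor core accelerates. A minimal NumPy sketch, using the CNN convention (no kernel flip) and ‘valid’ padding:

```python
import numpy as np

def conv2d_as_matmul(image, kernel):
    """2-D 'valid' convolution (CNN convention) via one matrix multiplication."""
    kh, kw = kernel.shape
    oh = image.shape[0] - kh + 1
    ow = image.shape[1] - kw + 1
    # im2col: unroll every kernel-sized patch into one row of a matrix.
    patches = np.array([image[i:i + kh, j:j + kw].ravel()
                        for i in range(oh) for j in range(ow)])
    # A single matrix-vector product now yields every output pixel at once.
    return (patches @ kernel.ravel()).reshape(oh, ow)

img = np.arange(16, dtype=float).reshape(4, 4)
edge = np.array([[1.0, -1.0]])  # toy horizontal edge-detection filter
print(conv2d_as_matmul(img, edge))  # → 4×3 array, every entry -1.0
```

With several filters, their unrolled kernels stack into the columns of a weight matrix, so one matrix-matrix product applies every filter at once; in the photonic chip, wavelength multiplexing additionally lets those products share a single pass through the hardware.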

The paper, “Parallel convolution processing using an integrated photonic tensor core,” was published in Nature and coauthored by Johannes Feldmann, Nathan Youngblood, Maxim Karpov, Helge Gehring, Xuan Li, Maik Stappers, Manuel Le Gallo, Xin Fu, Anton Lukashchuk, Arslan Raja, Junqiu Liu, David Wright, Abu Sebastian, Tobias Kippenberg, Wolfram Pernice, and Harish Bhaskaran.




More information: J. Feldmann et al. Parallel convolutional processing using an integrated photonic tensor core, Nature (2021). DOI: 10.1038/s41586-020-03070-1. Journal information: Nature. Provided by University of Pittsburgh.

https://www.cnet.com/how-to/apple-tv-11-essential-tips-to-master-your-streaming-box/

Apple TV: 11 essential tips to master your streaming box

The Apple TV is a seemingly simple device that’s gained so many new features over the years. Here’s the latest.

Jason Cipriani and Taylor Martin | Jan. 7, 2021 3:15 a.m. PT

The Apple TV remote gets the job done, as long as you don’t lose it. Edge Magazine

There’s little doubt that the Apple TV is in desperate need of a refresh. Rumors currently point to a new streaming box arriving sometime this year with improved gaming features, an easier to find remote and a more powerful processor inside the small black box. 

Even though a new Apple TV is possibly on the horizon, the current Apple TV lineup is worth the investment for Apple fans and users. And once you get the shiny new box set up, there are some things you’ll need to learn. For instance, getting around with the Siri remote can feel simplistic, but there are some hidden shortcuts that will surely make your life easier. 


You can even use your AirPods or a HomePod with the Apple TV to listen to your favorite show or movie. Below you’ll find 11 tips and tricks to get the most out of your Apple TV.

Useful gestures and tricks for the Siri remote 

The Apple TV remote is in desperate need of an overhaul. It’s small, easily lost, and the trackpad that’s used to tap and swipe your way through the Apple TV’s interface can be frustrating. But, it’s what you’re stuck with. So, in an effort to make the most out of the situation, below are some of the gestures and tips we’ve learned from our time using the Siri remote. 

Of course, you can use the power of Siri by pressing and holding the Siri button (the one with the microphone icon on it) and speaking a command. You can say things like, “Jump forward 10 minutes,” “Get me some new shows on Netflix” or “Who directed this?” The list of Siri commands for Apple TV is expansive.

Still, the true beauty of the Siri remote are all the hidden hotkey functions.

  • Long pressing the Home button (the one with the TV icon on it) will bring up a slide over menu, where you can put the TV to sleep, change user accounts, view your HomeKit devices and camera feeds, as well as access AirPlay devices (more on this in a minute). 
  • Pressing Play/Pause while typing acts as a shift key. 
  • A double-click of the Menu button while on the home screen will start the screen saver.
  • Long-press on the touchpad to activate jiggle mode to reorganize or delete apps. 
  • Double-pressing the Home button opens the app switcher, which lets you swipe through all of the open apps and even force-close them if you’re having issues. 
You don’t have to use your TV’s full remote. With a couple of settings tweaks, you can use the Apple TV to control everything. Sarah Tew/CNET

Ditch the remote that came with your TV

The Siri remote can almost totally replace the remote for your television (at least when using the Apple TV). It’s generally enabled automatically, but if your Siri remote isn’t controlling your TV’s volume, go to Settings > Remotes and Devices and make sure Control TVs and Receivers is set to On. If the volume control still isn’t working, click Volume Control and click Auto.

The Apple TV can also power almost any television on and off, but this is dependent upon the television itself. You will need to enable CEC, which goes by over a dozen different names depending on your TV brand.

Once enabled, when you press a button on the Siri remote, your television should power on. And when you sleep the Apple TV, your TV should also power off.

AirPods and Apple TV go together like peanut butter and jelly. Jason Cipriani/CNET

Use your AirPods with your Apple TV

The next time you want to binge your favorite new series late at night without disturbing your partner, connect your AirPods to your Apple TV. Your AirPods should already be paired with the streaming box, so all you need to do is put your AirPods on, then long-press the TV button, select the AirPlay icon and then scroll down and click on your AirPods. 

All audio will be beamed from the Apple TV straight to your ears, letting everyone else around you get some sleep while you watch just one more episode.

Use your HomePod as TV speakers for improved sound. Apple

Or you can use your HomePod for better sound

If you have full-size HomePods, you can use the smart speakers to create a home theater sound system. Alternatively, if you only have a HomePod Mini, you can stream your Apple TV’s audio through the speaker for improved sound. 

There are a few different ways to get the home theater feature working, but here’s the gist: you’ll need an Apple TV 4K and one or two of the bigger HomePod models.

Make sure the HomePod and Apple TV are located in the same room in the Home app on your iPhone or iPad. If you have two HomePod speakers you want to use as a stereo pair, you’ll need to create that pair before assigning them to the same room as your TV. 

The next time you turn on your Apple TV, you’ll be asked if you want to use the HomePod as your TV speakers. If you aren’t prompted, open the Settings app on your Apple TV and go to Video and Audio > Default Audio Output and select the speaker you want to use.

You can pair true gaming controllers to the Apple TV for gaming. Sarah Tew/CNET

Pair Bluetooth devices

If you don’t have AirPods but have a pair of Bluetooth headphones, you’re not left out. Whether you’re looking to play some casual games with a controller or watch a TV show without disturbing others late at night, Bluetooth is your best friend.

While you can play some simple games with the Siri remote, the Apple TV is compatible with MFi (Made for iPhone) controllers as well as a select list of Xbox and PlayStation controllers.  To pair a controller, turn it on and put it into pairing mode. Then, on the Apple TV, go to Settings > Remotes and Devices > Bluetooth. Look for the controller to appear under Other Devices and click it. Now when you turn it on, it will automatically connect to the Apple TV.

The same goes for Bluetooth headphones, which are especially helpful for late-night watching if you don’t want the sound of the television keeping everyone in the house up. And you can replace the Siri remote with a Bluetooth keyboard, if you so wish. Just put the device in pairing mode, then visit the Bluetooth section in the Apple TV settings to finish the pairing process. 


Force a reboot

While the Apple TV works almost flawlessly most of the time, things can go awry from time to time. Apps can freeze or stop working. We’ve previously covered five common Apple TV problems (and how to fix them), but your best friend will almost certainly be the force reboot option. There are two ways to do this:

  • Go to Settings > System and click Restart.
  • Or press and hold both the Menu and TV buttons until the light on the front of the Apple TV begins blinking rapidly. Release the buttons and the Apple TV will reboot.

Use your iOS device as a remote

Speaking of the Siri remote: if you happen to misplace or break it, a replacement costs a cool $59.

But you can avoid replacing it altogether. Your iPhone or iPad has an Apple TV remote built in, and if you're signed in with the same account you use to stream and watch shows, you'll notice that your lock screen automatically displays playback controls. It gives you pretty much all the functions of the Siri remote, including volume control.

It also comes with the handy option to type searches and other text using the onscreen keyboard on your iOS device, instead of having to hunt and peck with the on-screen Apple TV keyboard. You can even use it to sign in to apps with a password manager like iCloud Keychain or 1Password.

View your HomeKit devices on your Apple TV, including live video streams. Jason Cipriani/CNET

Control your smart home devices

If you have any smart home devices that work with HomeKit, your Apple TV will act as a hub for them, and you can use the Siri remote to control your house.

Setup is a breeze: just connect your HomeKit devices to the Home app on your iOS device and make sure the Apple TV is signed in to the same iCloud account you use on your phone. To confirm it's set up properly, go to Settings > AirPlay and HomeKit and check that it's connected as a hub. If not, double-check the user account by going to Settings > Users and Accounts, clicking your account name, then selecting iCloud to see which email address it's using.

As long as all of that is working, you should be able to press the Siri button on the remote and tell it to turn your lights on or off, or unlock a door, depending on what kind of HomeKit devices you have around your home.

The real beauty of HomeKit working with Apple TV is that your Apple TV will work as a remote hub, so you can control your home while you’re away.

If you have any HomeKit-compatible cameras, you can watch live streams or get alerts when someone rings your doorbell, for example, right on your TV. Just long-press the TV button, then highlight and click the HomeKit icon to take control of your home.

Use the Apple TV app as your go-to spot for finding what to watch next, regardless of which app owns the show. Sarah Tew/CNET

Take advantage of the TV app

The Apple TV app has become the centralized hub for all the stuff you watch, or might want to watch. When you open an app like Discovery Go, you'll be asked if you want to connect it to the TV app. Doing so allows the TV app to track the shows you're watching and surface new episodes ahead of everything else in the Up Next section. The TV app is also where you buy new TV shows or movies and where you can find all your previously purchased items.

Perhaps more importantly for some, this is also where you’ll find all of the Apple TV Plus shows and movies that Apple releases. 

It’s easy to gloss over the TV app, but if you take the time to sign in to your TV provider and include all the streaming apps you use, it will become a helpful tool that will reduce the time you spend clicking around to pick up where you left off or finding something new to watch.

Automatically install apps

If you download a new app to your iPhone, it can be automatically installed on your iPad. The same thing can be enabled with your Apple TV — assuming there’s an Apple TV app for something you’ve installed.

To enable this, go to Settings > Apps and click Automatically Install Apps to switch it to On.

This is a much easier way to delete apps. Jason Cipriani/CNET

Quickly delete apps

You might find yourself going on an application installing spree with your new Apple TV. Or you might have automatically downloaded a bunch of apps you installed on your phone that you never use on your Apple TV.

If you need to quickly free up storage on your Apple TV, you could go through and delete each app individually from the home screen, which is painfully slow and cumbersome. Or you could go to Settings > General > Manage Storage to find a list of installed applications, sorted from largest to smallest. There you can click the trash can icon next to each app you want to remove and click Delete to confirm.

Once you’ve mastered your Apple TV, make sure to learn more about Apple TV Plus. Apple also recently launched Fitness Plus, a workout service that uses the Apple TV to bring instructors into your living room, if you have an Apple Watch, that is.