https://www.photonics.com/Articles/Injectable_Biosensor_Converts_Brain_Activities_to/a67210

Injectable Biosensor Converts Brain Activities to Detectable Optical Signals

    WASHINGTON, D.C., July 30, 2021 — Technology for monitoring neuronal activity noninvasively could allow scientists to study the brain without the need for surgery or implanted devices. The newly developed nanotechnology, called NeuroSWARM3 by its inventors at the University of California, Santa Cruz (UCSC), could even provide a bridge to more effective communication and interaction for those who are physically challenged.

“NeuroSWARM3 can convert thoughts (brain signals) to remotely measurable signals for high-precision brain-machine interfacing,” professor Ali Yanik said. “It will enable people suffering from physical disabilities to effectively interact with [the] external world and control wearable exoskeleton technology to overcome limitations of the body. It could also pick up early signatures of neural diseases.”

NeuroSWARM3 — short for neurophotonic solution-dispersible wireless activity reporters for massively multiplexed measurements — is made up of engineered electro-plasmonic nanoparticles that convert electrical signals in the brain to optical signals that can be tracked with an optical detector located outside the body. The “system on a nanoparticle,” which is similar in size to a viral particle, includes wireless powering, electrophysiological signal detection, and data broadcasting capabilities all in one device.

To enable contactless measurement of brain activity, the nanoparticles that make up NeuroSWARM3 are injected into the bloodstream or directly into the cerebrospinal fluid. Once in the brain, the nanoparticles are highly sensitive to local changes in the electric field and can function indefinitely without a power source or wires.

The nanoparticles consist of a silicon dioxide (SiO2) core measuring 63 nm across and a thin layer of electrochromically loaded poly(3,4-ethylenedioxythiophene) (PEDOT). A 5-nm-thick gold coating covers the nanoparticles and enables them to cross the blood-brain barrier.

The optical signals generated by the nanoparticles are detected using near-infrared light at wavelengths between 1000 and 1700 nm.

Experiments showed that in vitro prototypes of NeuroSWARM3 could generate a signal-to-noise ratio of over 1000 — a sensitivity level that is high enough to detect the electrical signal generated when a single neuron fires.

The researchers compare NeuroSWARM3 to a nanosize, electrochromically loaded plasmonic (electro-plasmonic) antenna operated in reverse. Its optical properties are modulated by the spiking electrogenic cells within its vicinity, rather than by a predefined voltage.

“We pioneered [the] use of electrochromic polymers — for example, PEDOT:PSS — for optical, wireless detection of electrophysiological signals,” Yanik said. “Electrochromic materials known to have optical properties that can be reversibly modulated by an external field are conventionally used for smart glass/mirror applications.”

Although other methods are available for tracking the brain’s electrical activity, most require surgery or implanted devices to penetrate the skull and interface directly with neurons.

A similar approach to NeuroSWARM3 uses quantum dots to respond to electrical fields. When the UCSC researchers compared the two technologies, they found that NeuroSWARM3 generated an optical signal that was four orders of magnitude larger than the quantum dot technology, and that the quantum dots required 10× higher light intensity and 100× more probes to generate an equivalent signal.

“We are just at the beginning stages of this novel technology, but I think we have a good foundation to build on,” Yanik said. “Our next goal is to start experiments in animals.”

The researchers presented NeuroSWARM3 at the virtual OSA Optical Sensors and Sensing Congress.

https://www.medicalnewstoday.com/articles/how-did-our-dreams-change-when-covid-19-lockdowns-ended#Tips-for-better-sleep

How did our dreams change when COVID-19 lockdowns ended?

Our dreams have changed both in content and frequency during and after lockdowns, a study has found. (Image credit: kkgas/Stocksy United)
  • The coronavirus pandemic has affected our sleep quality and patterns, and our dreams can reflect this impact.
  • A study in Italy analyzed people’s dreams during and after lockdown to see if there were any changes.
  • In both periods, individuals reported disturbed sleep, negative emotions, and pandemic-related nightmares.
  • The researchers found that people had richer and more lucid dreams during lockdown but more dreams post-lockdown.
  • The study adds to existing research showing the link between emotionally intense life events, stress, and sleep.

Whether it’s a feeling of being trapped, overall frustration, anxiety, or living in an alternate reality, the coronavirus pandemic has awakened interesting and uncomfortable feelings in many. The recurring cycle of curfews, lockdowns, and reopenings has also become an added burden on mental health.

One of the ways the human body has tried to cope with this flood of overwhelming emotions and containment measures has been through dreams.

Many people who had almost nonexistent or rather dull dream worlds pre-pandemic started to report richer, longer, more frequent, more bizarre, and more vivid dreams.

Meanwhile, more individuals reported feeling negative emotions such as sadness, anger, and loneliness during sleep.

Researchers from Italy have investigated the impact of lockdown as a factor, and their findings appear in the Journal of Sleep Research.

Dreams and well-being

Dr. Serena Scarpelli and her team from the Sapienza University in Rome noticed an interesting trend on social media in 2020: right from the beginning of the first lockdown, people were sharing reports of their dreams on these platforms.

In these reports, individuals claimed to have been experiencing more dreams, which were increasingly more bizarre and vivid. That was when the researchers decided to investigate this “pandemic dreams” phenomenon in a systematic way.

Dr. Scarpelli told Medical News Today that sleep quality and dream activity were important indices of a person’s well-being.

“Just think, for example, that the presence of nightmares is a symptom of post-traumatic stress disorder (PTSD). We are seeing this in all the pandemic studies, and monitoring dream variables over time will certainly give us more information,” she said.

Recent studies published in the journal Nature and Science of Sleep have also suggested that isolation may influence psychological distress. However, it did not appear to affect sleep quality in the symptoms the researchers measured.

What did the study find?

The study looked at 90 subjects aged 19-41 years, the majority of whom were women, and asked them to fill out sleep-dream diaries in the morning and answer online surveys over 2 consecutive weeks.

The first week was while Italy was still in full lockdown, and the second was when its government eased restrictions.

Italy was among the first countries outside of China, where the virus first emerged, to face a major coronavirus outbreak. The country saw infections rise in a matter of months, leaving its unprepared health system overwhelmed.

Italy went into a nationwide lockdown between March and May 2020. Web surveys conducted during this period showed that over half of the population reported poorer sleep, more sleep disturbances, and taking hypnotic medication to remedy this.

In light of previous research, the Italian researchers hypothesized that just as lockdowns affected the quality and quantity of our dreams, so would the easing of such strict measures.

Here is a breakdown of their findings.

Lockdown vs. post-lockdown dreams

As people’s sleeping patterns changed during lockdown, such as from getting up later and not having to commute to work, dreams also changed.

Sleeping for longer also increases REM sleep — the stage of sleep involving heightened brain activity, which could lead to more vivid dreams.

According to the data that the researchers collected, Italians awoke more at night, had more trouble falling asleep, recalled more dreams, and had lucid dreams more often during lockdown.

Overall, people reported poorer sleep quality, while over 50% of the participants showed anxiety and PTSD symptoms during sleep.

Bad sleep, or waking up throughout the night, can also cause more lucid dreams, studies have found.

During lockdowns, lucid dreams acted as a coping mechanism to help people deal with the reality of confinement, the researchers said.

However, in the post-lockdown period, individuals had more dreams, including dreams of being in crowded places.

Women vs. men

The authors highlighted that, as in other studies of this nature, the low number of male participants makes the sample unrepresentative of the whole population.

Indeed, 80% of the subjects in this study were women.

Dr. Scarpelli and her team found that, compared to men, women recalled more dreams and experienced more negative emotions during sleep.

However, she added:

“I believe that COVID-19 has impacted both men and women identically. However, it must be remembered that some trait factors influence dream activity, and gender and age are among them.”

Dream contents

Apart from classics, such as teeth falling out, being nude in public, and falling, people reported more pandemic-related dreams during lockdown, including dreams of contracting a viral infection, having breathing problems, and suffocating.

In the post-lockdown period, by contrast, individuals reported more dreams about being in large crowds or traveling. This could be associated with the easing of restrictions in these areas and fears about returning to pre-pandemic norms, the researchers said.

Similar studies have compared dreams experienced during the pandemic and before the first outbreak.

A study conducted in Canada observed 5,000 people and their sleeping habits. Researchers found three sleep trends: individuals either spent “extended time in bed,” spent “reduced time in bed,” or had “delayed sleep.”

They also noticed changes in sleeping medication use during the pandemic, compared to pre-outbreak estimates.

A means to cope with ‘collective trauma’

Dr. Scarpelli and her team also revisited the “continuity hypothesis” in their research. They theorized that, rather than everyday events of minimal significance, it is personal concerns and events of high emotional intensity that continue to affect us while we sleep and become incorporated into our mental sleep activity.

And as dreaming and memory processes are interrelated, the study’s findings confirmed the pandemic was a “collective trauma,” manifesting as changes to dreaming, the authors said.

Although we can speak of collective trauma in a global pandemic, it is important to point out that not everyone will have the same experience or react to the same degree.

A majority of people will return to normal and their pre-pandemic routines and patterns once the pandemic is truly over, said Deirdre Barrett, Ph.D., the author of Pandemic Dreams.

But she said three groups were likely to continue to suffer negative effects, even after the pandemic ends.

The first group of people likely to experience trauma and recurring nightmares is healthcare workers, specifically those working on the frontlines at emergency rooms and intensive care units. Second, those who experienced personal losses during the pandemic, and third, those with any sort of anxiety disorder, Barrett told MNT.

Dr. Scarpelli said the impact of lockdowns on sleep quality is incontrovertible, and those who have suffered the most, in this sense, have been those who have undergone major life changes because of the pandemic.

This could be either those who have tested positive for SARS-CoV-2 or individuals who have lost their jobs or loved ones, she said.

“It is therefore possible that in the long term, we will see a split between those who will return to a sort of ‘normality’ and those who, having had greater consequences [from the pandemic in their lives], will report sleep problems for a long time.”
– Dr. Serena Scarpelli

Those who have contracted the infection may also be facing the added challenge of long COVID, which health experts define as a series of symptoms ⁠— the most common of which are fatigue, joint pain, and brain fog ⁠— that persist long after the initial infection.

“Inevitably, all these aspects can affect the quality of life of the individual and therefore also the quality of sleep and dream activity.”

Tips for better sleep

If you are having a hard time falling or staying asleep, there are a few things that experts recommend.

Reading or watching something soothing at bedtime could help you drift off quicker, but according to Barrett, it is best to avoid scary movies or anything about COVID-19.

As for physical tension in the body, deep, mindful breathing that activates abdominal muscles and progressive muscle relaxation can bring about calm.

However, if anxiety-filled dreams are the problem, Barrett recommends actively trying to have pleasant dreams.

“The best way is to think of what dreams you would like to have: Dream of a loved one, favorite vacation spot, or many people enjoy flying dreams. Or maybe you have one all-time favorite dream.”

This act of suggesting topics to yourself to dream about is called “dream incubation.”

“Think of that favorite person, place, or flying. Or replay that very favorite dream in detail,” said Barrett, adding that to strengthen your incubation, you should “repeat to yourself what you want to dream about as you drift off to sleep.”

Those not so good at visualizing people, objects, or concepts may benefit from visual stimuli and cues before they fall asleep.

“If images don’t come easily to you, place a photo or other objects related to the topic on your nightstand to view as the last thing before turning off your light,” she said.

https://medicalxpress.com/news/2021-07-tracking-circadian-rhythms-smartwatch.html

Tracking circadian rhythms from your smartwatch

by University of Michigan

Smartwatches are handy devices for people to keep track of the number of steps they take per day or to track their mile time during a run. But they are also opportunities for scientists to understand people’s physiological processes while they are going about their everyday lives.

In particular, scientists have been interested in tracking people’s circadian rhythms through the biological data gathered by their smartwatches—specifically, their heart rate. Doing so would allow individuals to know the best times of day to sleep, eat, exercise or take their medications.

At night, a person’s heart rate lowers in order to conserve energy. During a person’s waking period, their heart rate speeds up in anticipation of activity. But the challenge has been figuring out a way to find the throughline of a person’s heart rate among all of the ways it varies throughout the day, says Daniel Forger, a professor of mathematics at the University of Michigan.

Now, Forger and his colleagues have developed a statistical method that accounts for all of the “noise” that might affect a person’s heart rate and extracts a person’s circadian rhythm based on heart rate data provided by their smart watch.

The circadian rhythm is an internal clock that synchronizes all of the physiological functions in the body. The master clock, located in the brain’s hypothalamus, oversees all of the millions of other internal clocks in your body: Each cell has an internal clock, as does your heart, the liver and the brain. In healthy individuals, these clocks are all in synchrony. But studying this master clock is difficult—especially outside of a lab setting.

“I think a big question has been, can we measure circadian rhythms with wearables, and how can we do that?” said Forger, also a research professor of computational medicine and bioinformatics at Michigan Medicine. “Heart rate itself has a circadian rhythm, but it’s complicated by a lot of different things: You lie down and go to sleep, and your heart rate drops. You go running and your heart rate goes way up.

“The hard question was, how do we pull out that internal timekeeping signal to know what time of day your body thinks it is from all of those other signals out there?”

The group’s algorithm works by discarding data collected during sleep and focusing on data collected during a person’s waking period. Then the algorithm, developed by study co-author and former U-M postdoctoral researcher Clark Bowman, takes into account whether a person’s heart rate is affected by the person’s activity or by cortisol released because of exercise, posture, or meals. The result is the underlying daily timekeeping signal controlling heart rate.
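The full method is described in the team’s Cell Reports Methods paper; the sketch below is only a rough illustration of the general idea under stated assumptions: regress out an activity-driven component of waking heart rate, then fit a 24-hour sinusoid to the residual. The function name, the linear activity model, and the synthetic data are hypothetical and not taken from the study.

```python
import numpy as np

def estimate_circadian_phase(hours, heart_rate, activity, awake):
    """Rough sketch: strip the activity-driven part of heart rate,
    then fit a 24-hour sinusoid to what remains (waking data only).
    hours: time since start in hours; awake: boolean mask."""
    t, hr, act = hours[awake], heart_rate[awake], activity[awake]

    # 1. Regress out the component of heart rate explained by activity
    #    (a simple linear proxy for cardiac demand).
    A = np.column_stack([act, np.ones_like(act)])
    coef, *_ = np.linalg.lstsq(A, hr, rcond=None)
    residual = hr - A @ coef

    # 2. Least-squares fit of a 24-hour sinusoid to the residual.
    w = 2.0 * np.pi / 24.0
    B = np.column_stack([np.cos(w * t), np.sin(w * t), np.ones_like(t)])
    (a, b, _), *_ = np.linalg.lstsq(B, residual, rcond=None)

    amplitude = np.hypot(a, b)
    peak_hour = (np.arctan2(b, a) / w) % 24.0   # clock time of the fitted peak
    return amplitude, peak_hour

# Synthetic example: 3 days of minute-by-minute data with a rhythm peaking near 17:00.
hours = np.arange(0, 72, 1 / 60)
activity = np.random.rand(hours.size)
heart_rate = (60 + 25 * activity
              + 3 * np.cos(2 * np.pi / 24 * (hours - 17))
              + np.random.randn(hours.size))
awake = (hours % 24 > 7) & (hours % 24 < 23)    # ignore sleep, as the authors do
print(estimate_circadian_phase(hours, heart_rate, activity, awake))
```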

To test whether this statistical method worked, the group used a dataset from an ongoing study of medical interns, called the Intern Health Study. The study provides more than 130,000 days of data from 900 interns who continuously wore wrist-based sleep-tracking devices collecting motion and heart rate data. Medical interns are good subjects to use in this kind of research because they are shift workers, which means sometimes their work shifts change from day to night from week to week.

“Smart watches collect heart rate data using optical sensors, which aren’t very accurate, and there are so many things affecting heart rate throughout the day that measurements tend to be all over the place, so it’s a big result to be able to identify a circadian rhythm in that kind of data at all,” said Bowman, now a professor of mathematics and statistics at Hamilton College.

“It’s only possible since smart watches take measurements so frequently and provide information about activity to help account for cardiac demand. The sheer amount of data is just enough to see the background trend of heart rate subtly rising and falling in time with a circadian clock.”

In one example in the study, a person’s sleeping and waking patterns, as demonstrated by their heart rate, adjusted quickly to their changing work schedule. This means their circadian rhythm was able to quickly adjust to a bedtime and waking time that were almost the opposite of what they had previously been experiencing.

Another individual’s data, however, showed a different story. Their circadian rhythm lagged behind their adjusted sleep schedule, which likely means they were feeling pretty sluggish during the time they were adjusting to their new waking schedule.

This sluggishness is the same effect that those with jet lag experience. Jet lag can occur when a person’s heart rate isn’t in sync with their waking period. A slower heart rate can make a person feel sleepy or sluggish.

Forger says the strength of using data from wearables is that scientists, as well as the person wearing the watch, can study a person’s circadian rhythm based on real-world influences, which has certain advantages over measuring circadian rhythm in a lab. The gold standard clinical approach would be to measure a person’s melatonin levels over a period of 6 to 40 hours in a dark lab.

“We’ve shown that you can take a wearable signal and directly measure circadian rhythms in the real world, and the real world has so many things that affect circadian rhythms that you aren’t going to measure in the lab,” Forger said, explaining that some of the data the team has had to account for are extended periods of intense activity (a semiprofessional cyclist, for example). The method also does not account for effects on heart rate such as caffeine, psychological stress, pharmaceuticals and disease.

“There are some scenarios that you have to always be a little more careful with, using real world data,” he said. “But again we’re going to pick up other things you may not experience in a lab.”

The researchers also developed the Social Rhythms app, available for iPhone and Android devices, where you can upload your wearable data and receive a report on how your internal circadian clock has changed recently.

“Measuring that signal not only provides information about the body’s circadian timing, but also characterizes how each individual’s heart rate behaves,” Bowman said. “We can use this information to track how the body adjusts to new schedules, study how physical activity affects each individual’s heart rate slightly differently, and even quantify the effect of being active at different times of day on the body’s internal clock.

“Smart watch users could have real-time information on their circadian clock to help adjust to jet lag or shift work, manage circadian disorders or identify abnormalities in heart rate which might present health risks.”

Co-authors of the study include Yitong Huang of Dartmouth College, and U-M researchers Olivia Walch, Yu Fang, Elena Frank, Jonathan Tyler, Caleb Mayer, Christopher Stockbridge, Cathy Goldstein and Srijan Sen.


More information: Clark Bowman et al, A method for characterizing daily physiology from widely used wearables, Cell Reports Methods (2021). DOI: 10.1016/j.crmeth.2021.100058

App link: apps.apple.com/us/app/social-rhythms/id1510826025

Provided by University of Michigan

https://www.mindbodygreen.com/articles/products-that-can-help-you-fall-asleep

Having Trouble Falling Asleep? 9 Science-Backed Products That May Help

By Sarah Regan, mbg Spirituality & Relationships Writer

Image by Leah Flores / Stocksy
July 29, 2021
Our editors have independently chosen the products listed on this page. If you purchase something mentioned in this article, we may earn a small commission.

If you’re lucky, getting a good night’s rest doesn’t take much, and you can fall asleep (and stay asleep) easily. For the rest of us, these tools can provide a little extra help. There are plenty of lifestyle tweaks that can help you settle in at bedtime, but if you want to go the extra mile, the following nine products might just do the trick:

1. A sleep supplement

From magnesium to melatonin, there are a variety of sleep supplements on the market today that can help with specific sleep needs.* For a supplement that combines the benefits of magnesium, PharmaGABA®, and jujube, consider mbg’s sleep support+.* The science-backed formula is designed to help you fall asleep faster, stay asleep longer, and wake up feeling rested—and it has hundreds of five-star reviews to its name.*

2. Blackout curtains

If you struggle to stay asleep as soon as the sun peeks over the horizon, blackout curtains might be a worthy investment. They’ll keep your room dark as long as you need, so you can wake up when you want to. (And of course, there are other options for blocking sunlight too, like an eye mask or a large plant in front of your window.)

3. Blue light glasses or circadian-friendly lighting

Blue light signals to the brain that it’s daytime, and with the abundance of screens in our lives, it’s important to nix the blue light as bedtime approaches. You can try blue-light-blocking glasses, or consider swapping out your light bulbs for warmer, more circadian-friendly varieties, like the dimmable Wind-Down light bulbs by Brilli or GE’s Relax HD bulb.

4. A white noise machine

Have trouble sleeping in total silence? Enter: the white noise machine. Not only are these little machines typically priced pretty fairly, but they carry a variety of noises that can soothe your senses. Whether you like the sound of crashing waves, rainfall, or just a low background hum, there’s something for everyone. We’re big fans of the SNOOZ White Noise Sound Machine, as it’s energy-efficient and has an accompanying app that lets you control the volume and schedule. The Douni Sleep Sound Machine and Zenergy Portable Machine are also great picks.

5. An essential oil diffuser

Certain essential oils can work wonders for calming you down, like lavender and jasmine. In fact, one study found that smelling lavender oil before bed increased subjects’ time in deep sleep. And with an essential oil diffuser in your bedroom, you can fall asleep to the soothing scent of your choice. (Check out our full guide on the types of diffusers available today, plus tips for diffusing to keep in mind.)

6. Candles

There’s nothing like the ambience of a couple of lit candles in a dark room—and the effect is actually perfect for bedtime. Not only does the dim lighting signal that it’s time to wind down, but you can incorporate candles into the rest of your nighttime routine (whether it’s meditating, moving through a slow yoga flow, or reading a good book). Just be sure to opt for nontoxic options like the Pure Calm Wellness Candle by Uma or any of Keap’s nature-inspired scents, to keep the air quality in your room up to snuff. And of course—remember to blow them out before you fall asleep.

7. A journal

Journaling is a great way to get your thoughts out on paper, reflect on your day, and simply relax. For extra sleep-supporting power, consider writing out your to-do list for the next few days before bed. It might sound counterintuitive to think about the things you need to do, but writing them down before bed has been shown in research to help people fall asleep faster. And it doesn’t hurt that there are tons of cute journals available today, like The Five Minute Journal by Intelligent Change, or the classic Passion Planner.

8. A weighted blanket

If you’ve never buried yourself beneath a weighted blanket, it’s quite the experience. Research has even found they can help some people fall asleep faster. There’s something super comforting about feeling swaddled by your blanket, and with cooling options available, like the True Temp™ and Casper weighted blankets, you can stay comfy and cool.

9. A sleep tracker

And last but not least, one of the best ways you can know how well you’re sleeping is with a sleep tracker. If your tracker lets you know you’ve been spending too much time in light sleep, for example, you can make the necessary adjustments to reach those deeper stages (i.e., eating dinner earlier, skipping the nighttime glass of wine, etc.). Consider trying out the Oura Ring, Whoop, Fitbit, or any of the other sleep trackers on the market today.

Keep in mind…

While these products can be very helpful, none of them can make up for lifestyle factors that aren’t conducive to sleep. Excessive alcohol or food consumption before bed spells disaster for a good night’s sleep, as does caffeine, lack of exercise, a hot bedroom, and stress. But once you’ve got your sleep routine in a good place, these options can only help.

https://9to5mac.com/2021/07/29/philips-hue-gradient-lightstrip-ambiance/

Philips Hue Gradient Lightstrip Ambiance reportedly on the way

Ben Lovejoy

– Jul. 29th 2021 4:02 am PT

@benlovejoy

Lightstrips are one of the most flexible and popular Hue products, and there’s an even better version on the way: the Philips Hue Gradient Lightstrip Ambiance.

While existing Lightstrips let you choose from 16 million colors, the Gradient takes this to a new level …

Instead of displaying one color at a time along its entire length, the Gradient will be able to simultaneously display a number of different colors along its length.

If that sounds familiar, it’s because the company already offers the Play Gradient Lightstrip with this capability. That device is specifically designed to be mounted behind a TV set and is available in sizes to suit 55″, 65″, and 75″ TVs.

In contrast, the upcoming Philips Hue Gradient Lightstrip Ambiance is a general-purpose light designed to offer the same flexibility as the original in terms of placement. It will be available in the same 2-meter lengths as the existing Lightstrip.

Hue Blog reports:

The new product, which has not yet been officially announced, is called Philips Hue Gradient Lightstrip Ambiance. The two-metre long light strip will offer a very special function: It can display several colours at the same time. Exactly how many is still unclear. In all likelihood, this will be made possible by the new dynamic scenes that Philips Hue will make available as an update this summer.

In contrast to the Play Gradient Light Strip, the new light strip will have a back with double-sided adhesive tape so that it can be mounted flexibly. As a “White and Color Ambiance” product, it will be able to display 16 million colours in addition to white tones. However, you will have to pay extra for the gradient function […]

If the length of two metres is not enough for you, the Philips Hue Gradient Lightstrip Ambiance also offers the option of connecting one or hopefully several extensions with a length of one metre each.

The site has photos of the box for the new lightstrips.

I’m a huge fan of Hue Lightstrips. We have one as accent lighting at the rear of my desk, two as under-cabinet lighting in the kitchen, and others used as wardrobe and cupboard lighting.

https://spectrum.ieee.org/will-silicon-save-quantum-computing


Will Silicon Save Quantum Computing?

Silicon has become a leading contender in the hunt for a practical, scalable quantum bit

Cheuk Chi Lo and John J.L. Morton | 31 Jul 2014 | 19 min read | Illustration: Bryan Christie Design

Grand engineering challenges often require an epic level of patience. That’s certainly true for quantum computing. For a good 20 years now, we’ve known that quantum computers could, in principle, be staggeringly powerful, taking just a few minutes to work out problems that would take an ordinary computer longer than the age of the universe to solve. But the effort to build such machines has barely crossed the starting line. In fact, we’re still trying to identify the best materials for the job.

Today, the leading contenders are all quite exotic: There are superconducting circuits printed from materials such as aluminum and cooled to one-hundredth of a degree above absolute zero, floating ions that are made to hover above chips and are interrogated with lasers, and atoms such as nitrogen trapped in diamond matrices.

These have been used to create modest demonstration systems that employ fewer than a dozen quantum bits to factor small numbers or simulate some of the behaviors of solid-state materials. But nowadays those exotic quantum-processing elements are facing competition from a decidedly mundane material: good old silicon.

Silicon had a fairly slow start as a potential quantum-computing material, but a flurry of recent results has transformed it into a leading contender. Last year, for example, a team based at Simon Fraser University in Burnaby, B.C., Canada, along with researchers in our group at University College London, showed that it’s possible to maintain the state of quantum bits in silicon for a record 39 minutes at room temperature and 3 hours at low temperature. These are eternities by quantum-computing standards—the longevity of other systems is often measured in milliseconds or less—and it’s exactly the kind of stability we need to begin building general-purpose quantum computers on scales large enough to outstrip the capabilities of conventional machines.

As fans of silicon, we are deeply heartened by this news. For 50 years, silicon has enabled steady, rapid progress in conventional computing. That era of steady gains may be coming to a close. But when it comes to building quantum computers, the material’s prospects are only getting brighter. Silicon may prove to have a second act that is at least as dazzling as its first.

What is a quantum computer? Simply put, it’s a system that can store and process information according to the laws of quantum mechanics. In practice, that means the basic computational components—not to mention the way they operate—differ greatly from those we associate with classical forms of computing.

Speeding Up Search

In a classical search algorithm, hunting for a particular string in an unstructured database involves looking at every entry in succession until a match is found. On average, you’d have to run through half, or N/2, of the queries before the correct entry is located. Grover’s algorithm, a quantum search algorithm named for computer scientist Lov Grover, could speed up that work by simultaneously querying all entries. The process still isn’t instantaneous: Finding the correct one would take, on average, √N  queries. But it could make a difference for large databases. To search a trillion entries, the scheme would require 0.0002 percent of the number of queries needed in the classical approach. Here’s how it works.

1. The input, a quantum version of the search string, is set up. It contains N different states, one of which is the index of the string you’re looking for. All N states exist in superposition with one another, much like Schrödinger’s cat, which can be both dead and alive at the same time. At this point, if the input is observed, it will collapse into any one of its N component states with a probability of 1/N (the square of the quantum state amplitude).

2. The input is fed into the database, which has been configured to invert the phase of the correct entry. Here, phase is a quantum attribute. It can’t be directly measured, but it affects how quantum states interact with one another. The correct entry is highlighted in one step, but we can’t see it. The probability of observing the correct state is still the same as that of all the others.

3. To get around this observation problem, a quantum computer can be made to perform a simple operation that would invert all of the amplitudes of the states about their overall mean. Now, when the input is measured, it will be more likely to collapse into the correct answer. But if N is large, this probability will still be quite small.

4. To increase the probability of observing the correct entry, Grover’s algorithm repeats steps 2 and 3 many times. Each time, the correct state will receive a boost. After √N cycles, the probability of observing that state will be very close to 1 (or 100 percent).
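These four steps can be simulated classically by tracking the N amplitudes directly. The short sketch below is our own illustration, not code from the article; it assumes an idealized database of 4,096 entries and prints the probability of measuring the correct entry after roughly √N rounds.

```python
import numpy as np

def grover_probability(n_items, target):
    """Classically simulate Grover's algorithm on the vector of N amplitudes.
    Returns the probability of measuring the target after ~sqrt(N) rounds."""
    # Step 1: uniform superposition -- every index has amplitude 1/sqrt(N).
    amps = np.full(n_items, 1.0 / np.sqrt(n_items))

    for _ in range(int(round(np.pi / 4.0 * np.sqrt(n_items)))):
        # Step 2: the oracle flips the phase of the correct entry.
        amps[target] *= -1.0
        # Step 3: invert every amplitude about the mean amplitude.
        amps = 2.0 * amps.mean() - amps

    # Step 4: after ~sqrt(N) repetitions, measurement almost surely yields the target.
    return amps[target] ** 2

print(grover_probability(4096, target=123))   # ~0.9999 after only 50 rounds
```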

For example, as bizarre as it sounds, in the quantum world an object can exist in two different states simultaneously—a phenomenon known as superposition. This means that unlike an ordinary bit, a quantum bit (or qubit) can be placed in a complex state where it is both 0 and 1 at the same time. It’s only when you measure the value of the qubit that it is forced to take on one of those two values.

When a quantum computer performs logical operations, it does so on all possible combinations of qubit states at the same time. This massively parallel approach is often cited as the reason that quantum computers would be very fast. The catch is that often you’re interested in only a subset of those calculations. Measuring the final state of a quantum machine will give you just one answer, at random, that may or may not be the desired solution. The art of writing useful quantum algorithms lies in getting the undesired answers to cancel out so that you are left with a clear solution to your problem.
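As a toy picture of that last point (a simplified illustration of our own, not from the article): a single qubit is just a pair of complex amplitudes, and each measurement samples 0 or 1 with probabilities given by their squared magnitudes.

```python
import numpy as np

# A single qubit: amplitudes (alpha, beta) with |alpha|^2 + |beta|^2 = 1.
alpha, beta = 1 / np.sqrt(2), 1 / np.sqrt(2)    # an equal superposition of 0 and 1

probs = [abs(alpha) ** 2, abs(beta) ** 2]       # probabilities of reading 0 or 1
outcomes = np.random.choice([0, 1], size=20, p=probs)
print(probs)      # [0.5, 0.5] -- "both values at once" until it is measured
print(outcomes)   # each measurement collapses the qubit to a single random bit
```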

The only company selling something billed as a “quantum computing” machine is the start-up D-Wave Systems, also based in Burnaby. D-Wave’s approach is a bit of a departure from what researchers typically have in mind when they talk about quantum computing, and there is active debate over the quantum-mechanical nature and the potential of its machines (more on that in a moment).

The quarry for many of us is a universal quantum computer, one capable of running any quantum or classical algorithm. Such a computer won’t be faster than classical computers across the board. But there are certain applications for which it could prove exceedingly useful. One that quickly caught the eye of intelligence agencies is the ability to factor large numbers exponentially faster than the best classical algorithms can. This would make short work of cryptographic codes that are effectively uncrackable by today’s machines. Another promising niche is simulating the behavior of quantum-mechanical systems, such as molecules, at high speed and with great fidelity. This capability could be a big boon for the development of new drugs and materials.

To build a universal quantum computer capable of running these and other quantum algorithms, the first thing you’d need is the basic computing element: the qubit. In principle, nearly any object that behaves according to the laws of quantum physics and can be placed in a superposition of states could be used to make a qubit.

Since quantum behavior is typically most evident at small scales, most natural qubits are tiny objects such as electrons, single atomic nuclei, or photons. Any property that could take on two values, such as the polarization of light or the presence or absence of an electron in a certain spot, could be used to encode quantum information. One of the more practical options is spin. Spin is a rather abstruse property: It reflects a particle’s angular momentum—even though no physical rotation is occurring—and it also reflects the direction of an object’s intrinsic magnetism. In both electrons and atomic nuclei, spin can be made to point up or down so as to represent a 1 or a 0, or it can exist in a superposition of both states.

It’s also possible to make macroscopic qubits out of artificial structures—if they can be cooled to the point where quantum behavior kicks in. One popular structure is the flux qubit, which is made of a current-carrying loop of superconducting wire. These qubits, which can measure in the micrometers, are quantum weirdness writ large: When the state of a flux qubit is in superposition, the current flows in both directions around the loop at the same time.

D-Wave uses qubits based on superconducting loops, although these qubits are wired together to make a computer that operates differently from a universal quantum computer. The company employs an approach called adiabatic quantum computing, in which qubits are set up in an initial state that then “relaxes” into an optimal configuration. Although the approach could potentially be used to speedily solve certain optimization problems, D-Wave’s computers can’t be used to implement an arbitrary algorithm. And the quantum-computing community is still actively debating the extent to which D-Wave’s hardware behaves in a quantum-mechanical fashion and whether it will be able to offer any advantage over systems using the best classical algorithms.

Although large-scale universal quantum computers are still a long way off, we are already getting a good sense of how we’d make one. There are several approaches. The most straightforward one employs a model of computation known as the gate model. It uses a series of “universal gates” to wire up groups of qubits so that they can be made to interact on demand. Unlike conventional chips with hardwired logic circuitry, these gates can be used to configure and reconfigure the relationships between qubits to create different logic operations. Some, such as XOR and NOT, may be familiar, but many won’t be, since they’re performed in a complex space where a quantum state in superposition can take on any one of a continuous range of values. But the basic flow of computation is much the same: The logic gates control how information flows, and the states of the qubits change as the program runs. The result is then read out by observing the system.
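The gate-model picture can be sketched in a few lines (again our own illustration, not from the article): a pair of qubits is a vector of four amplitudes, and each gate is a small unitary matrix applied to that vector. CNOT plays the role of the XOR mentioned above, and we also use a Hadamard gate, which the text does not name, to create a superposition.

```python
import numpy as np

# Two qubits = four amplitudes over the basis states |00>, |01>, |10>, |11>.
I = np.eye(2)
X = np.array([[0, 1], [1, 0]])                   # quantum NOT gate
H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)     # Hadamard: creates a superposition
CNOT = np.array([[1, 0, 0, 0],                   # flips the second qubit when the
                 [0, 1, 0, 0],                   # first qubit is 1 -- the quantum
                 [0, 0, 0, 1],                   # counterpart of XOR
                 [0, 0, 1, 0]])

# NOT then CNOT, acting like classical logic on definite states.
state = np.array([1.0, 0.0, 0.0, 0.0])           # |00>
state = np.kron(X, I) @ state                    # NOT on the first qubit: |10>
state = CNOT @ state                             # CNOT flips the second: |11>
print(np.round(state, 3))                        # [0. 0. 0. 1.]

# The same gates also act on superpositions: Hadamard followed by CNOT entangles the pair.
state = np.array([1.0, 0.0, 0.0, 0.0])
state = CNOT @ (np.kron(H, I) @ state)
print(np.round(state, 3))                        # [0.707 0. 0. 0.707] = (|00> + |11>)/sqrt(2)
print(np.abs(state) ** 2)                        # measuring gives 00 or 11, each with p = 0.5
```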

Quantum Contender

Superconducting: Qubits can be made from a loop of a superconducting material, such as aluminum, paired with thin insulating barriers that electrons can tunnel through. There are various ways to construct a qubit in this system. One is to use the direction of current running around the loop to make a “flux qubit.” When the qubit is in a superposition of states, current flows in both directions at the same time. The start-up D-Wave Systems is making 1024-qubit systems using flux-qubit technology. But researchers have generally prioritized device development over system size; the largest systems in the laboratory have incorporated only 5 qubits. These more recent laboratory qubits are known as charge qubits and are often based on total electronic charge.

The stability of superconducting qubits has improved remarkably over the past decade, and they can be entangled with one another with good fidelity through superconducting buses. But the space required is quite large—a qubit can measure in the millimeters when the resonator needed to control it is included. Extremely low temperatures, in the tens of millikelvins, are also needed for optimal operation.

Another, more exotic idea, called the cluster-state model, operates differently. Here, computation is performed by the act of observation alone. You begin by first “entangling” every qubit with its neighbors up front. Entanglement is a quantum-mechanical phenomenon in which two or more particles—electrons, for example—share a quantum state and measuring one particle will influence the behavior of an entangled partner. In the cluster-state approach, the program is actually run by measuring the qubits in a particular order, along particular directions. Some measurements carve out a network of qubits to define the computation, while other measurements drive the information forward through this network. The net result of all these measurements taken together gives the final answer.

For either approach to work, you must find a way to ensure that qubits stay stable long enough for you to perform your computation. By itself, that’s a pretty tall order. Quantum-mechanical states are delicate things, and they can be easily disrupted by small fluctuations in temperature or stray electromagnetic fields. This can lead to significant errors or even quash a calculation in midstream.

On top of all this, if you are to do useful calculations, you must also find a way to scale up your system to hundreds or thousands of qubits. Such scaling wouldn’t have been feasible in the mid-1990s, when the first qubits were made from trapped atoms and ions. Creating even a single qubit was a delicate operation that required elaborate methods and a roomful of equipment at high vacuum. But this has changed in the last few years; now there’s a range of quantum-computing candidates that are proving easier to scale up [see “Quantum Contenders”].

Among these, silicon-based qubits are our favorites. They can be manufactured using conventional semiconductor techniques and promise to be exceptionally stable and compact.

It turns out there are a couple of different ways to make qubits out of silicon. We’ll start with the one that took the early lead: using atoms that have been intentionally placed within silicon.

If this approach sounds familiar, it’s because the semiconductor industry already uses impurities to tune the electronic properties of silicon to make devices such as diodes and transistors. In a process called doping, an atom from a neighboring column of the periodic table is added to silicon, either lending an electron to the surrounding material (acting as a “donor”) or extracting an electron from it (acting as an “acceptor”).

Such dopants alter the overall electronic properties of silicon, but only at temperatures above –220 °C or so (50 degrees above absolute zero). Below that threshold, electrons from donor atoms no longer have enough thermal energy to resist the tug of the positively charged atoms they came from and so return.

This phenomenon, known as carrier freeze-out, describes the point at which most conventional silicon devices stop working. But in 1998, physicist Bruce Kane, now at the University of Maryland, College Park, pointed out that freeze-out could be quite useful for quantum computing. It creates a collection of electrically neutral, relatively isolated atoms that are all fixed in place—a set of naturally stable quantum systems for storing information.

Quantum Contender

Ion Traps: The outermost electron of an ion such as calcium can be used to create a qubit that consists of two states, which can be defined either by the electron’s orbital state or its interaction with the atom’s nucleus. Ion traps were among the earliest quantum-computing systems investigated, beginning in the 1990s. They have since been miniaturized and can be implemented on a chip with electrodes, which are used to suspend ions in midair and move them around. Ion traps have been made that can hold as many as 10 qubits at a time.

Since the ions are made to hover, qubits created in this fashion can be well isolated from stray fields and are thus quite stable. There are some disadvantages to this approach, however. The qubits must be constructed in an ultrahigh vacuum to prevent interactions with other atoms and molecules. And ion qubits must be pushed together to entangle them, which is difficult to do with high precision because of electrical noise.

In this setup, information can be stored in two ways: It can be encoded in the spin state of the donor atom’s nucleus or of its outermost electron. The state of a particle’s spin is very sensitive to changing magnetic fields as well as interactions with nearby particles. Particularly problematic are the spins of other atomic nuclei in the vicinity, which can flip at random, scrambling the state of electron-spin qubits in the material.

But it turns out that these spins are not too much trouble for silicon. Only one of its isotopes—silicon-29—has a nucleus with nonzero spin, and it makes up only 5 percent of the atoms in naturally occurring silicon. As a result, nuclear spin flips are rare, and donor electron spins have a reasonably long lifetime by quantum standards. The spin state of the outer electron of a phosphorus donor, for example, can remain in superposition as long as 0.3 millisecond at 8 kelvins before it’s disrupted.

That’s about the bare minimum for what we’d need for a quantum computer. To compensate for the corruption of a quantum state—and to keep quantum information intact indefinitely—additional long-lived qubits dedicated to identifying and correcting errors must be incorporated for every qubit dedicated to computation. One of the most straightforward ways to do this is to add redundancy, so that each computational qubit actually consists of a group of qubits. Over time, the information in some of these will be corrupted, but the group can be periodically reset to whatever state the majority is in without disturbing this state. If there is enough redundancy and the error rate is below the threshold for “fault tolerance,” the information can be maintained long enough to perform a calculation.

If a qubit lasts for 0.3 ms on average and can be manipulated in 10 nanoseconds using microwave radiation, it means that on average 30,000 gate operations can be performed on it before the qubit state decays. Fault tolerance thresholds vary, but that’s not a very high number. It would mean that a quantum computer would spend nearly all its time correcting the states of qubits and their clones, leaving it little time to run meaningful computations. To reduce the overhead associated with error correction and create a more compact and efficient quantum computer, we must find a way to extend qubit lifetimes.
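Two quick, deliberately simplified illustrations of the numbers and the redundancy idea above (our own sketch; real quantum codes detect errors through parity checks rather than by reading the data qubits directly):

```python
# Gate budget implied by the two timescales quoted above.
coherence_time_s = 0.3e-3      # ~0.3 ms electron-spin lifetime
gate_time_s = 10e-9            # ~10 ns per microwave manipulation
print(int(coherence_time_s / gate_time_s))   # 30000 operations before the state decays

def majority_reset(copies):
    """Reset a redundant group of (classical) bits to its majority value --
    the simplest picture of the periodic correction described above."""
    majority = 1 if sum(copies) > len(copies) / 2 else 0
    return [majority] * len(copies)

print(majority_reset([1, 0, 1]))   # one corrupted copy of a logical 1 -> [1, 1, 1]
```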

One way to do that is to use silicon that doesn’t contain any silicon-29 at all. Such silicon is hard to come by. But about 10 years ago, the Avogadro Project, an international collaboration working on the redefinition of the kilogram, happened to be making some in order to create pristine balls of silicon-28 for their measurements. Using a series of centrifuges in Russia, the team acquired silicon that was some 99.995 percent silicon-28 by number, making it one of the purest materials ever produced. A group at Princeton University obtained some of the leftover material and, in 2012, after some careful experimental work, reported donor electron spin lifetimes of more than a second at 1.8 kelvins—a world record for an electron spin in any material. This really showed silicon’s true potential and established it as a serious contender.

Our group has since shown that the spins of some donor atoms—bismuth in particular—can be tuned with an external magnetic field to certain “sweet spots” that are inherently insensitive to magnetic fluctuations. With bismuth, we found that the electron spin states can last for as long as 3 seconds in enriched silicon-28 at even higher temperatures. Crucially, we found lifetimes as high as 0.1 second in natural silicon, which means we should be able to achieve relatively long qubit lifetimes without having to seek out special batches of isotopically pure material.

These sorts of lifetimes are great for electrons, but they pale in comparison to what can be achieved with atomic nuclei. Recent measurements led by a team at Simon Fraser University have shown that the nuclear spin of phosphorus donor atoms can last as long as 3 minutes in silicon at low temperature. Because the nuclear spin interacts with the environment primarily through its electrons, this lifetime increases to 3 hours if the phosphorus’s outermost electron is removed.

Quantum Contender

Diamond: Atomic defects in diamond have emerged as one of the leading methods for creating qubits in recent years. Such defects are what give diamonds their color (a nitrogen-doped diamond has a yellowish tint). One of the most promising qubits is a nitrogen atom that occupies a place near a vacant site within a diamond lattice. Just as in doped silicon, this defect can be used to make two different kinds of qubit. One can be constructed from the combined spin of two electrons that are attracted to the nitrogen atom. A qubit can also be made using the spin of the nucleus of the nitrogen atom.

Such diamond qubits are attractive because they interact readily with visible light, which should enable long-range communication and entanglement. The systems can stay stable enough for computation up to room temperature. One challenge researchers must tackle is the precise placement of the nitrogen atoms; this will present an obstacle to making the large arrays needed for full-scale general-purpose quantum computers. To date, researchers have demonstrated they can entangle two qubits. This has been done with two defects in the same diamond crystal and with two defects separated by as much as 3 meters.

Nuclear spins tend to keep their quantum states longer than electron spins because they are magnetically weaker, and thus their interaction with the environment is not as strong. But this stability comes at a price, because it also makes them harder to manipulate. As a result, we expect that quantum computers built from donor atoms might use both nuclei and electrons. Easier-to-manipulate electron spins could be used for computation, and more stable nuclear spins could be deployed as memory elements, to store information in a quantum state between calculations.

The record spin lifetimes mentioned so far were based on measuring ensembles of donors all at once. But a major challenge remained: How do you manipulate and measure the state of just one donor qubit at a time, especially in the presence of thousands or millions of others in a small space? Up until just a few years ago, it wasn’t clear how this could be done. But in 2010, after a decade of intense research and development, a team led by Andrea Morello and Andrew Dzurak at the University of New South Wales, in Sydney, showed it’s possible to control and read out the spin state of a single donor atom’s electron. To do this, they placed a phosphorus donor in close proximity to a device called a metal-oxide-semiconductor single-electron transistor (SET), applied a moderate magnetic field, and lowered the temperature. An electron with spin aligned against the magnetic field has more energy than one whose spin aligns with the field, and this extra energy is enough to eject the electron from the donor atom. Because SETs are extremely sensitive to the charge state of the surrounding environment, this ionization of a dopant atom alters the current of the SET. Since then, the work has been extended to the control and readout of single nuclear spin states as well.

SETs could be one of the key building blocks we need to make functional qubits. But there are still some major obstacles to building a practical quantum computer with this approach. At the moment, an SET must operate at very low temperatures—a fraction of a degree above absolute zero—to be sensitive enough to read a qubit. And while we can use a single device to read out one qubit, we don’t yet have a detailed blueprint for scaling up to large arrays that integrate many such devices on a chip.

There is another approach to making silicon-based qubits that could prove easier to scale. This idea, which emerged from work by physicists David DiVincenzo and Daniel Loss, would make qubits from single electrons trapped inside quantum dots.

In a quantum dot, electrons can be confined so tightly that they’re forced to occupy discrete energy levels, just as they would around an atom. As in a frozen-out donor atom, the spin state of a confined electron can be used as the basis for a qubit.

The basic recipe for building such “artificial atoms” calls for creating an abrupt interface between two different materials. With the right choice of materials, electrons can be made to accumulate in the plane of the interface, where there is lower potential energy. To further restrict an electron from wandering around in the plane, metal gates placed on the surface can repel it so it’s driven to a particular spot where it doesn’t have enough energy to escape.

Large uniform arrays of silicon quantum dots should be easier to fabricate than arrays of donor qubits, because the qubits and any devices needed to connect them or read their states could be made using today’s chipmaking processes.

But this approach to building qubits isn’t quite as far along as the silicon donor work. That’s largely because when the idea for quantum-dot qubits was proposed in 1998, gallium arsenide/gallium aluminum arsenide (GaAs/GaAlAs) heterostructures were the material of choice. The electronic structure of GaAs makes it easy to confine an electron: It can be done in a device that’s about 200 nanometers wide, as opposed to 20 nm in silicon. But although GaAs qubits are easier to make, they’re far from ideal. As it happens, all isotopes of gallium and arsenic possess a nuclear spin. As a result, an electron trapped in a GaAs quantum dot must interact with hundreds of thousands of Ga and As nuclear spins. These interactions cause the spin state of the electron to quickly become scrambled.
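
A simple particle-in-a-box estimate illustrates the size scales involved. The effective masses are textbook values and the hard-walled box is a deliberate oversimplification of a real gate-defined dot, so treat the numbers as order-of-magnitude guides only.

    # Ground-state confinement energy E1 = (hbar * pi)^2 / (2 * m_eff * L^2) for a
    # hard-walled "box" of width L. Real dots have softer potentials, so these are
    # only order-of-magnitude estimates.

    hbar = 1.055e-34        # J*s
    m_e = 9.109e-31         # kg, free-electron mass
    meV = 1.602e-22         # J per milli-electron-volt
    pi = 3.14159265

    dots = [
        ("GaAs dot, 200 nm", 0.067 * m_e, 200e-9),  # light GaAs effective mass
        ("Si dot,    20 nm", 0.19 * m_e, 20e-9),    # heavier Si (transverse) mass
    ]

    for label, m_eff, L in dots:
        E1 = (hbar * pi) ** 2 / (2 * m_eff * L ** 2)
        print(f"{label}: level spacing ~ {E1 / meV:6.2f} meV")

    # Both spacings comfortably exceed the thermal energy at dilution-fridge
    # temperatures (~0.009 meV at 100 mK). Because the spacing scales as
    # 1 / (m_eff * L^2), the heavier silicon electron needs a much smaller box to
    # reach a comparable confinement energy, which is part of why silicon dots
    # demand finer lithography than GaAs dots.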

Quantum Contender

Photo: Christie Simmons/University of Wisconsin–Madison

Silicon: There are a few options for constructing qubits with silicon. As with diamond, dopant atoms can be added to the crystal; phosphorus and arsenic are common choices. Either the spin of the dopant atom’s nucleus or that of the electrons in orbit around it can be used to construct a qubit. Similar spin qubits can also be made artificially, by using electrode and semiconductor structures to trap electrons inside quantum dots.

Using silicon that has been purified of all but one isotope has helped boost the stability of qubit systems; the material now holds the record for the longest qubit coherence times. Silicon also has an advantage when it comes to fabrication, because systems can be constructed using the tools and infrastructure already put in place by the microelectronics industry. But the small size of quantum dots and, to a greater extent, donor systems will make large-scale integration challenging. While scalable architectures exist on paper, they have yet to be demonstrated. So far, research has largely been restricted to single-dopant systems.

Illustration: James Provost

Silicon, with only one isotope that carries nuclear spin, promises quantum-dot qubit lifetimes that are more than a hundred times as long as in GaAs, ultimately approaching seconds. But the material faces challenges of its own. If you model a silicon quantum dot on existing MOS transistor technology, you must trap an electron at the interface between silicon and oxide, and those interfaces have a fairly high number of flaws. These create shallow potential wells that electrons can tunnel between, adding noise to the device and trapping electrons where you don’t want them to be trapped. Even with the decades of experience gained from MOS technology development, building MOS-like quantum dots that trap precisely one electron inside has proven to be a difficult task, a feat that was demonstrated only a few years ago.

As a result, much recent success has been achieved with quantum dots that mix silicon with other materials. Silicon-germanium heterostructures, which create quantum wells by sandwiching silicon between alloys of silicon and germanium and have much lower defect densities at the interface than MOS structures, have been among the front-runners. Earlier this year, for example, a team based at the Kavli Institute of Nanoscience Delft, in the Netherlands, reported that they had made silicon-germanium dots capable of retaining quantum states for 40 microseconds. But MOS isn’t out of the running. Just a few months ago, Andrew Dzurak’s group at the University of New South Wales reported preliminary results suggesting that it had overcome issues of defects at the oxide interfaces. This allowed the group to make MOS quantum dots in isotopically pure silicon-28 with qubit lifetimes of more than a millisecond, which should be long enough for error correction to take up the slack.

As quantum-computing researchers working with silicon, we are in a unique position. We have two possible systems—donors and quantum dots—that could potentially be used to make quantum computers.

Which one will win out? Silicon donor systems—both electron and nuclear spins—have the advantage when it comes to spin lifetime. But embedded as they are in a matrix of silicon, donor atoms will be hard to connect, or entangle, in a well-controlled way, which is one of the key capabilities needed to carry out quantum computations. We might be able to place qubits fairly close together, so that the donor electrons overlap or the donor nuclei can interact magnetically. Or we could envision building a “bus” that allows microwave photons to act as couriers. It will be hard to place donor atoms precisely enough for either of these approaches to work well on large scales, although recent work by Michelle Simmons at the University of New South Wales has shown it is possible to use scanning tunneling microscope tips to place dopants on silicon surfaces with atomic precision.

Silicon quantum dots, which are built with small electrodes that span 20 to 40 nm, should be much easier to build uniformly into large arrays. We can take advantage of the same lithographic techniques used in the chip industry to fabricate the devices as well as the electrodes and other components that would be responsible for shuttling electrons around so they can interact with other qubits.

Illustration: Bryan Christie Design

Given these different strengths, it’s not hard to envision a quantum computer that would use both types of qubits. Quantum dots, which would be easier to fabricate and connect, could be used to make the logic side of the machine. Once a part of the computation is completed, the electron could be nudged toward a donor electron sitting nearby to transfer the result to memory in the donor nucleus.

Of course, silicon must also compete with a range of other exciting potential quantum-computing systems. Just as today’s computers use a mix of silicon, magnetic materials, and optical fibers to compute, store, and communicate, it’s quite possible that tomorrow’s quantum computers will use a mix of very different materials.

We still have a long way to go before silicon can be considered to be on an equal footing with other quantum-computing systems. But this isn’t the first time silicon has played catch-up. After all, lead sulfide and germanium were used to make semiconducting devices before high-purity silicon and CMOS technology came along. So far, we have every reason to think that silicon will survive the next big computational leap, from the classical to the quantum age.

This article originally appeared in print as “Silicon’s Second Act.”

About the Authors

Nanoelectronics professor John J.L. Morton and research fellow Cheuk Chi Lo investigate silicon-based quantum computing at University College London. Recent worldwide advances in this area have been staggering, Morton says. “Quantum bits now live longer than the time it takes to make a cup of tea, and you can watch the telltale signal on an oscilloscope of a single quantum bit flipping back and forth,” he says. “It’s an exciting time to be working in silicon.”

https://neurosciencenews.com/sleep-productivity-19019/

Sleep Study’s Eye-Opening Findings

FeaturedNeuroscience·July 29, 2021

Summary: Sleeping longer at night had little impact on work productivity or cardiovascular health. However, taking a short daytime nap helped improve productivity and well-being overall.

Source: MIT

Subjectively, getting more sleep seems to provide big benefits: Many people find it gives them increased energy, emotional control, and an improved sense of well-being. But a new study co-authored by MIT economists complicates this picture, suggesting that more sleep, by itself, isn’t necessarily sufficient to bring about those kinds of appealing improvements.

The study is based on a distinctive field experiment of low-income workers in Chennai, India, where the researchers studied residents at home during their normal everyday routines — and managed to increase participants’ sleep by about half an hour per night, a very substantial gain. And yet, sleeping more at night did not improve people’s work productivity, earnings, financial choices, sense of well-being, or even their blood pressure. The only thing it did, apparently, was to lower the number of hours they worked.

“To our surprise, these night-sleep interventions had no positive effects whatsoever on any of the outcomes we measured,” says Frank Schilbach, an MIT economist and co-author of a new paper detailing the study’s findings.

There is more to the matter: For one thing, the researchers found, short daytime naps do help productivity and well-being. For another thing, participants tended to sleep at night in difficult circumstances, with many interruptions. The findings leave open the possibility that helping people sleep more soundly, rather than just adding to their total amount of low-grade sleep, could be useful.

“People’s sleep quality is so low in these circumstances in Chennai that adding sleep of poor quality may not have the benefits that another half hour of sleep would have if it’s of higher quality,” Schilbach suggests.

The paper, “The Economic Consequences of Increasing Sleep Among the Urban Poor,” is published in the August issue of The Quarterly Journal of Economics. The authors of the paper are Pedro Bessone PhD ’21, a recent graduate from MIT’s Department of Economics; Gautam Rao, an associate professor of economics at Harvard University; Schilbach, who is the Gary Loveman Career Development Associate Professor of Economics at MIT; Heather Schofield, an assistant professor in the Perelman School of Medicine and the Wharton School at the University of Pennsylvania; and Mattie Toma, a PhD candidate in economics at Harvard University.

Sleeping on rickshaws

Schilbach, a development economist, says the genesis of the study came from other research he and his colleagues have done in settings such as Chennai — during which they have observed that low-income people tend to have difficult sleeping circumstances in addition to their other daily challenges. 

“In Chennai, you can see people sleeping on their rickshaws,” says Schilbach, who is also a faculty affiliate at MIT’s Abdul Latif Jameel Poverty Action Lab (J-PAL). “Often, there are four or five people sleeping in the same room where it’s loud and noisy, you see people sleep in between road segments next to a highway. It’s incredibly hot even at night, and there are lots of mosquitos. Essentially, in Chennai, you can find any potential irritant or adverse sleep factor.”

To conduct the study, the researchers equipped Chennai residents with actigraphs, wristwatch-like devices that infer sleep states from body movements, which allowed the team to study people in their homes. Many other sleep studies observe people in lab environments.
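
Actigraphy scoring is essentially a thresholding exercise on per-minute movement counts. The toy sketch below uses an invented threshold, window, and data purely to illustrate the idea; the study's devices and scoring algorithms are more sophisticated.

    # Toy actigraphy scoring: label each one-minute epoch "S" (sleep) or "W" (wake)
    # from wrist-movement counts. Published algorithms (e.g., Cole-Kripke) use
    # weighted windows and validated coefficients; this simplified version just
    # averages neighbouring epochs and applies a made-up threshold.

    def score_sleep(counts, threshold=40, window=2):
        labels = []
        for i in range(len(counts)):
            lo, hi = max(0, i - window), min(len(counts), i + window + 1)
            avg = sum(counts[lo:hi]) / (hi - lo)     # smooth over nearby minutes
            labels.append("S" if avg < threshold else "W")
        return labels

    movement = [5, 3, 0, 0, 2, 150, 90, 4, 1, 0, 0, 0, 60, 70, 80, 2, 0]  # fake data
    labels = score_sleep(movement)
    awakenings = sum(1 for a, b in zip(labels, labels[1:]) if a == "S" and b == "W")
    print("".join(labels), "| awakenings:", awakenings)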

The study examined 452 people over a month. Some people were given encouragement and tips for better sleep; others received financial incentives to sleep more. Some members of both those groups also took daytime naps, to see what effect that had.

The participants in the study were also given data-entry jobs with flexible hours while the experiment was taking place, so the researchers could monitor the effects of sleep on worker output and earnings in a granular way.

Overall, the Chennai study’s participants had been averaging about 5.5 hours of sleep per night before the intervention, and added 27 minutes of sleep per night on average. However, in order to gain those 27 minutes, the participants were in bed an extra 38 minutes per night. That speaks to the challenging sleep circumstances of the participants, who on average woke up 31 times per night.

“A key thing that stands out is that people’s sleep efficiency is low, that is, their sleep is heavily fragmented,” Schilbach says. “They have extremely few periods experiencing what’s thought to be the restorative benefits of deep sleep. … People’s sleep quantity went up due to the interventions, because they spent more time in bed, but their sleep quality was unchanged.”
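
Those reported figures translate into a quick marginal-efficiency calculation; this is just arithmetic on the numbers quoted above, not an analysis taken from the paper itself.

    # Sleep efficiency = time asleep / time in bed. The intervention added 27
    # minutes of sleep at the cost of 38 extra minutes in bed, so the marginal
    # efficiency of the added time in bed was low.

    baseline_sleep_h = 5.5     # reported average sleep before the intervention
    added_sleep_min = 27       # reported average gain in nightly sleep
    added_bed_min = 38         # reported extra time spent in bed

    marginal_efficiency = added_sleep_min / added_bed_min
    print(f"baseline sleep: {baseline_sleep_h:.1f} h/night")
    print(f"marginal efficiency of extra time in bed: {marginal_efficiency:.0%}")
    # ~71%: each additional hour in bed bought only about 43 minutes of
    # (heavily fragmented) sleep.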

That could be why, across a wide range of metrics, people in the study experienced no positive changes after sleeping more. Indeed, as Schilbach notes, “We find one negative effect, which is on hours worked. If you spend more time in bed, then you have less time for other things in your life.”

On the other hand, study participants who were allowed to nap while on the data-entry job did fare better in several measured categories.

“In contrast to the night sleep intervention, we find clear evidence of naps improving a range of outcomes, including their productivity, their cognitive function, and their psychological well-being, as well as some evidence on savings,” Schilbach says. “These two interventions have different effects.”

That said, naps only boosted productivity when compared to workers who took a break instead. Naps did not increase the total income of workers — nappers were more productive per minute worked but spent less time actually working.

“It’s not the case that naps just pay for themselves,” Schilbach says. “People don’t actually stay longer in the office when they nap, presumably because they have other things to do, such as taking care of their families. If people nap for about half an hour, their hours worked falls by almost half an hour, almost a one-to-one ratio, and as a result, people’s earnings in that group are lower.”

Valuing sleep as an end in itself

Schilbach says he hopes that other researchers will dig into some of the further questions the study raises. Further work, for instance, could attempt to change the sleeping circumstances of low-income workers to see if better sleep quality, not just increased sleep quantity, makes a difference.

Schilbach also suggests it may be important to better understand the psychological challenges the poor face when it comes to sleep.


“Being poor is very stressful, and that might interfere with people’s sleep,” he notes. “Addressing how environmental and psychological factors affect sleep quality is something worth examining.”

https://phys.org/news/2021-07-method-magnetizing-material-external-magnetic.html

Researchers propose a method of magnetizing a material without applying an external magnetic field

by FAPESP

The study shows that the phenomenon can be produced by means of adiabatic compression, without any exchange of heat with the environment. Credit: Geek3/Wikimedia Commons – commons.wikimedia.org/wiki/File:VFPt_bar-magnet-forces.svg

Researchers at São Paulo State University (UNESP), Brazil, have proposed a method of magnetizing a material without applying an external magnetic field. In an article published in the journal Scientific Reports, they detail the experimental approach used to achieve this goal.

The study was part of the Ph.D. research pursued by Lucas Squillante under the supervision of Mariano de Souza, a professor at UNESP’s Department of Physics in Rio Claro. Contributions were also made by Isys Mello, another Ph.D. candidate supervised by Souza, and Antonio Seridonio, a professor at UNESP’s Department of Physics and Chemistry in Ilha Solteira. The group was supported by FAPESP.

“Very briefly put, magnetization occurs when a salt is compressed adiabatically, without exchanging heat with the external environment,” Souza said. “Compression raises the temperature of the salt and at the same time rearranges its particles’ spins. As a result, the total entropy of the system remains constant and the system remains magnetized at the end of the process.”

To help understand the phenomenon, it is worth recalling the basics of spin and entropy.

Spin is a quantum property that makes elementary particles (quarks, electrons, photons, etc.), compound particles (protons, neutrons, mesons, etc.) and even atoms and molecules behave like tiny magnets, pointing north or south—up spin and down spin—when subjected to a magnetic field.

“Paramagnetic materials like aluminum, which is a metal, are magnetized only when an external magnetic field is applied. Ferromagnetic materials, including iron, may display finite magnetization even in the absence of an applied magnetic field because they have magnetic domains,” Souza explained.

Entropy is basically a measure of the accessible configurations or states of a system: the greater the number of accessible states, the greater the entropy. Austrian physicist Ludwig Boltzmann (1844-1906), using a statistical approach, associated the entropy of a system, which is a macroscopic quantity, with the number of possible microscopic configurations that constitute its macrostate. “In the case of a paramagnetic material, entropy embodies a distribution of probabilities that describes the number of up spins or down spins in the particles it contains,” Souza said.
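
For the simplest textbook case of N independent spin-1/2 moments (a reminder of the standard result, not a formula quoted from the paper), that counting of spin configurations can be written down explicitly:

    S = k_B \ln W, \qquad
    W = \binom{N}{N_\uparrow} = \frac{N!}{N_\uparrow!\,(N - N_\uparrow)!}

W is largest when the up and down populations are balanced (N_↑ = N/2), i.e., for a disordered, unmagnetized ensemble, and shrinks to W = 1 when every spin points the same way; lower spin entropy therefore corresponds to a more polarized, more magnetized collection of spins.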

In the recently published study, a paramagnetic salt was compressed in a single direction. “Application of uniaxial stress reduces the volume of the salt. Because the process is conducted without any exchange of heat with the environment, compression produces an adiabatic rise in the temperature of the material. A rise in temperature means a rise in entropy. To keep total entropy in the system constant, there must be a component of local reduction in entropy that offsets the rise in temperature. As a result, the spins tend to align, leading to magnetization of the system,” Souza said.

The total entropy of the system remains constant, and adiabatic compression results in magnetization. “Experimentally, adiabatic compression is achieved when the sample is compressed for less time than is required for thermal relaxation—the typical time taken by the system to exchange heat with the environment,” Souza said.
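
Written out, the bookkeeping behind this argument is a constant-entropy condition. The expression below is a schematic summary of the mechanism described in the article, with C_lattice denoting the lattice heat capacity, rather than the detailed model of the paper:

    dS_{\text{total}} = dS_{\text{lattice}} + dS_{\text{spin}} = 0
    \quad\Longrightarrow\quad
    dS_{\text{spin}} = -\,dS_{\text{lattice}} = -\,\frac{C_{\text{lattice}}}{T}\,dT < 0,

since the adiabatic compression raises T. In terms of the Boltzmann expression above, a drop in the spin entropy means the number of accessible spin configurations W falls, the up and down populations become unbalanced, and a net magnetization proportional to N_↑ − N_↓ appears without any external field being applied.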

The researchers also propose that the adiabatic rise in temperature could be used to investigate other interacting systems, such as Bose-Einstein condensates in magnetic insulators, and dipolar spin-ice systems.




More information: Lucas Squillante et al., Elastocaloric-effect-induced adiabatic magnetization in paramagnetic salts due to the mutual interactions, Scientific Reports (2021). DOI: 10.1038/s41598-021-88778-4. Provided by FAPESP.

https://www.bustle.com/wellness/reasons-you-might-be-tired-even-after-sleeping-well


Why Am I Always Tired?

Experts share big and small things that can mess with your sleep quality.

By Jessica Booth and Jay Polish. Updated: July 28, 2021. Originally published: March 25, 2018.

A good night’s sleep is supposed to leave you feeling rejuvenated, refreshed, and wide awake. But what if it doesn’t? If you’ve gotten the recommended amount of sleep, it’s extra frustrating to start to feel worn down and exhausted a few hours into the day. Figuring out why you’re so tired after sleeping well is the first step toward actually feeling like a human being after your alarm clock goes off.

Typically, experts say that adults need seven to nine hours of sleep each night to get energy and stay healthy — but it is possible to get that amount of sleep every single night and still feel sleepy the next day. “Sleep is not just about quantity,” says Michael Breus, Ph.D., a sleep expert and sleep advisor for the wearable Oura. “It’s really more about quality.”

Feeling exhausted for seemingly no reason can often point to a whole bunch of other health issues, whether it’s something mental or physical. There might also be something you’re doing before bed that isn’t letting you sleep as well as you thought — you might think you just had some great shut-eye, but you may not have been as deeply asleep as it seemed. So, what could be behind all of this? Here are 11 reasons you might be tired even after sleeping well.

1. Lack Of Movement Can Decrease Sleep Quality

A lot of people associate physical activity with exhaustion, but that’s not always the case. While an intense sweat session at the gym can help you sleep better, it’s not going to drain you of energy completely. In fact, not incorporating any physical activity in your day will make you even more tired.

“There is no question that regular exercise improves your quality of sleep,” says Dr. Vivek Cherian, M.D., a Baltimore-based internal medicine physician. “Exercising moderately has been shown to increase the amount of deep sleep individuals get.” In other words, taking that spin class at the crack of dawn might be exhausting to get to — but it can make you sleep better long-term.

2. Dehydration Can Hurt Sleep

Being dehydrated can do more than just make you feel light-headed and dizzy — it can also make you feel really, really tired. Being dehydrated messes with your blood volume, which can make your heart less efficient, leading to exhaustion all the time. “Going to bed dehydrated puts you at risk for leg cramps, limb movements, and larger movements during sleep which lead to awakenings,” Breus explains.

3. Depression Can Cause Sleep Issues

One of the most common symptoms of depression is exhaustion. Experiencing depression can leave you feeling tired all the time, no matter how much sleep you get — people often don’t realize they’re depressed until they realize how sleepy they are. “Depression certainly can be associated with sleep problems (both in terms of sleeping long periods of time and excessive daytime sleepiness, or conversely individuals with depression may have difficulty falling asleep),” Dr. Cherian tells Bustle. Either way, he explains, depression can negatively impact a person’s sleep cycle, leaving them more tired than they would expect upon waking from a long slumber.

4. Alcohol Can Interrupt Your Sleep Cycle


If you’ve ever unwound from a long day at work with a glass — or two — of wine, you might have found yourself conking out easier than expected. “Short-term alcohol can have a sedative effect and actually help induce sleep,” Dr. Cherian says. But if a person consistently turns to a glass of the good stuff to get them to sleep, Dr. Cherian explains that it might have the opposite impact. “Individuals who excessively consume alcohol often have difficulty sleeping or insomnia.”

5. Coffee Can Affect Your Sleep Rhythms

If you’re drinking coffee as late as six hours before your bedtime, that’s affecting your sleep — even if you don’t realize it. Coffee is meant to keep you awake and energized, but too much of it too late in the day will backfire. Breus suggests cutting off your coffee consumption past 2 p.m. if your afternoon pick-me-up is picking you up too long into the night.
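
The six-hour figure is easier to picture with caffeine's elimination half-life, commonly cited as roughly five to six hours in healthy adults. The dose and half-life in the sketch below are assumptions for illustration; individual metabolism varies a lot.

    # Roughly how much caffeine is still circulating at bedtime? Simple exponential
    # decay with an assumed 5.5-hour half-life and an assumed 150 mg cup of coffee.

    half_life_h = 5.5
    dose_mg = 150

    for hours_before_bed in (10, 6, 3):
        remaining = dose_mg * 0.5 ** (hours_before_bed / half_life_h)
        print(f"cup {hours_before_bed:2d} h before bed -> ~{remaining:3.0f} mg at lights-out")

    # A 2 p.m. cup before a midnight bedtime (10 h) leaves roughly 40 mg, about as
    # much as a cup of weak tea; a cup 3 h before bed still leaves ~100 mg, close
    # to a full serving of coffee.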

6. Nighttime Phone Use Can Hurt Sleep

Re-runs of The Gilmore Girls might be your fave thing to drift off to, but if you’re doing a lot of staring at a screen at night, it might negatively impact your sleep. Blue screens like the ones on smartphones can trigger a “wake-up” hormone even when you’re about to sleep for the night. Again, you might not realize it’s messing with your rest, but it could be keeping you from getting a deep enough sleep and leaving you tired the next day. “There is mounting evidence that blue light actually suppresses the secretion of melatonin, which is a hormone that influences your circadian rhythm,” Dr. Cherian explains.

It’s not just the blue light you’ve got to be wary of, though. “The real issue with screens at night is the engagement in the activity that really is going in the opposite direction of sleep,” Breus tells Bustle. “If you are trying to get your high score on Candy Crush, you are really not trying to go to bed.”

7. When You Eat Can Impact Sleep

Skipping your breakfast won’t just leave you at risk of being hangry in the morning — it might also prevent you from getting a good sleep that night. According to a 2017 study published in the journal Current Biology, eating each day on a relatively consistent schedule can help your body regulate your circadian rhythms. In other words, eating your Wheaties can help you sleep better each night.

8. Nutritional Deficiencies Can Interrupt Sleep


Having all your nutrients might be as important as curling into a comfy mattress for getting a good night’s sleep. Iron deficiencies can negatively impact sleep quality, according to a 2015 study published in the journal African Health Sciences. Being deficient in vitamin B can also make you extra tired even after sleeping, since that vitamin is responsible for helping convert food into energy. Magnesium deficiencies can also knock a person’s blood glucose levels out of whack and leave you feeling lethargic. Your doctor can test for nutrient deficiencies and prescribe a treatment plan.

9. Anxiety Can Hurt Sleep Quality

Stress and anxiety can go hand in hand in ensuring you’ll feel less energetic and more lethargic, no matter how much sleep you get. “Individuals with anxiety often have difficulty falling asleep (insomnia) and tend to have more sleeping issues when going through stressful situations,” Dr. Cherian explains. Even when someone with anxiety does get a solid amount of shut-eye, being anxious can make sleep more restless, causing you to wake up more and not fall into the deep sleep you need.

10. Hormone Disorders Can Mess With Sleep

When you can’t identify the reason for your exhaustion even after consistent sleep, Dr. Cherian suggests checking in with your doctor. Diabetes, thyroid disorders, and anemia can all cause sleep issues and exhaustion. Anemia can also make people feel weak and short of breath, and is typically caused by an iron deficiency, blood loss, or even something like cancer or kidney failure. Meanwhile, one major sign of both thyroid disease and diabetes is exhaustion.

11. Sleep Disorders Can Cause Sleep Issues

“You can never be certain, but there may be some clues that you may have a sleep disorder if you feel you’re getting a good night’s rest but always feeling tired,” Dr. Cherian explains. He suggests keeping track of when it is that you’re feeling tuckered out — it might be a sleep disorder, but it also might just mean you’re not a morning person.

“It’s not uncommon to wake up feeling disoriented or drowsy,” Dr. Cherian says. “Sleep inertia is the term used to describe this, and it’s actually a normal part of the process of waking up. It typically resolves after a few minutes but also can last up to an hour.” But if your sleep inertia is lasting much longer than normal — if you’re sluggish way longer than expected every morning — Dr. Cherian suggests checking in with your primary care physician.

Experts:

Michael Breus, Ph.D., sleep expert, sleep advisor for Oura

Dr. Vivek Cherian, M.D., Baltimore-based internal medicine physician

Studies Referenced:

Wehrens, S.M.T. (2017) Meal timing regulates the human circadian system. Current Biology, https://www.ncbi.nlm.nih.gov/pmc/articles/PMC5483233/.

Murat, S. (2015) Assessment of subjective sleep quality in iron deficiency anaemia. African Health Sciences, https://www.ncbi.nlm.nih.gov/pmc/articles/PMC4480468/.


https://www.analyticsinsight.net/brain-computer-interface-transforming-mental-handwriting-to-text-on-screen/

BRAIN COMPUTER INTERFACE: TRANSFORMING MENTAL HANDWRITING TO TEXT ON SCREEN

by Madhurjya Chowdhury, July 30, 2021


Next-Gen Brain-Computer Interface Changes the Writing Mechanism

Brain-computer interfaces monitor brain activity, extract information from it, and transform those data into outputs that replace, repair, augment, supplement, or improve human activities.

BCIs may be used to replace lost functions such as speech and movement. They may also restore control of the body, for example by activating the neurons or muscles that move the hand. BCIs have also been used to improve capabilities, such as training individuals to increase the functionality of impaired gripping pathways. They can also enhance function, such as by alerting a drowsy driver. Finally, a BCI may be used to supplement the body’s normal outputs, such as with a third hand.

BCIs employ a variety of ways to monitor brain activity. In the majority of BCIs, electrical signals are monitored using electrodes implanted invasively within or on the surface of the cortex, or placed noninvasively on the surface of the head. Some BCIs instead rely on noninvasive measurements of metabolic activity, such as functional magnetic resonance imaging (fMRI).

Mental Handwriting to Text on Screen

Scientists are looking at a variety of methods enabling people with impairments to communicate with their minds. The newest and quickest reverts to a time-honored method of self-expression: handwriting.

For the first time, researchers have identified the brain activity involved in trying to write letters by hand. Working with a paralyzed volunteer who had sensors implanted in his brain, the researchers used an algorithm to recognize letters as he attempted to write them. The system then displayed the text on a screen in real time.

According to study co-author Krishna Shenoy, a Howard Hughes Medical Institute investigator at Stanford University who co-supervised the work with Jaimie Henderson, a Stanford neurosurgeon, the technology could, with further development, allow people with paralysis to type quickly without using their fingers.

According to Shenoy and his colleagues, the research participant wrote 90 characters per minute while attempting handwriting, more than double the previous record for typing with such a “brain-computer interface.”

According to Jose Carmena, a brain engineer at the University of California, Berkeley, who was not involved in the research, this technique and others like it have the potential to benefit people with a wide range of impairments. “It’s a significant advancement in the field,” he says, even though the findings are preliminary.

According to Carmena, brain-computer connections turn thinking into action. “This document is a wonderful example: the interface decodes the written concept and generates the action.”

Communication That is Fuelled by Thought

When a person’s capacity to move is taken away by an accident or illness, the brain’s neuronal activity for walking, getting a cup of coffee, or uttering a phrase remains. Researchers can tap into this activity to aid people who have lost abilities due to paralysis or amputation.

The need varies depending on the type of impairment. Some people who have injured their hands can still use a PC with speech-recognition software and other programs. For people who have trouble speaking, scientists have developed other techniques to help them communicate.

Shenoy’s team has spent the last several years decoding the brain activity associated with speech, with the aim of reproducing it. They’ve also created a method that lets individuals with implanted sensors move a pointer on a screen using thoughts linked to attempted arm motions. People could type around 40 characters per minute by pointing at and clicking on letters in this fashion, which stood as the previous speed record for writing with a brain-computer interface (BCI).

No one, however, had looked at handwriting. Frank Willett, a neuroscientist in Shenoy’s lab, wondered whether the brain signals elicited by putting pen to paper might be harnessed. “We want to discover new ways for individuals to communicate more quickly,” he says. He was also enticed by the prospect of trying something new.

The researchers worked with a participant in the BrainGate2 clinical study, which is evaluating the safety of BCIs that transmit data directly from the brain to a computer. (Leigh Hochberg, a neurosurgeon and neuroscientist at Massachusetts General Hospital, Brown University, and the Providence VA Medical Centre, is the trial’s director.) Henderson inserted two small sensors into the region of the brain that controls the hand and arm, allowing the individual to operate a robotic arm or a cursor on a computer screen by attempting to move his own paralyzed arm.

The subject, who was 65 years old at the time of the study, was paralyzed from the neck down due to a spinal cord injury. As he imagined writing, a machine-learning system used the signals picked up by the implanted sensors to detect the pattern his brain produced for each letter. Using this technique, the man could copy sentences and answer questions at a rate comparable to that of someone his age typing on a smartphone.

According to Willett, this so-called “Brain-to-Text” BCI is so fast because each letter evokes a distinct activity pattern, making it straightforward for the algorithm to distinguish one letter from another.
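
The decoding idea can be sketched with a toy nearest-template classifier on made-up “neural feature” vectors. The real system used a recurrent neural network trained on signals from the implanted sensor arrays, so the code below is only an illustration of why well-separated per-letter activity patterns make classification straightforward; all names and numbers in it are invented.

    # Toy decoder: assume each letter evokes a characteristic pattern of firing
    # rates across the recorded channels. We generate noisy trials around
    # per-letter templates and decode new trials with a nearest-template rule.

    import numpy as np

    rng = np.random.default_rng(0)
    letters = list("abcde")
    n_channels = 50

    # One random "signature" pattern per letter; their separation is what makes
    # decoding easy in this toy setting.
    templates = {c: rng.normal(0.0, 1.0, n_channels) for c in letters}

    def simulate_trial(letter, noise=0.5):
        return templates[letter] + rng.normal(0.0, noise, n_channels)

    def decode(trial):
        # Choose the letter whose template is closest in Euclidean distance.
        return min(letters, key=lambda c: np.linalg.norm(trial - templates[c]))

    n_trials = 200
    correct = sum(decode(simulate_trial(c)) == c for c in rng.choice(letters, n_trials))
    print(f"decoding accuracy on simulated trials: {correct / n_trials:.0%}")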

A New System has Been Implemented

Shenoy’s team plans to employ attempted handwriting for text input as part of a larger system that incorporates point-and-click navigation similar to that seen on today’s smartphones, as well as attempted voice decoding. He explains, “Having those two or three modes and flipping between them is what we do naturally.”

The team next plans to work with a person who is unable to talk, such as someone with amyotrophic lateral sclerosis, a neurodegenerative disease that causes the loss of movement and speech, according to Shenoy.

Henderson says that the new method could benefit people who are paralyzed by a variety of conditions. He points to Jean-Dominique Bauby, the author of The Diving Bell and the Butterfly, who suffered a brain stem stroke. “He was able to create this emotional and beautiful novel by meticulously picking characters one at a time, utilizing eye movement,” Henderson says. “Try imagining what he’d have done with Frank’s writing interface!”

Conclusion

In patients with chronic paralysis or locked-in syndrome (LIS), BCIs have proven effective for communication. A BCI enables users to convey their intent directly, without the need for a motor periphery. Thanks to the introduction of non-visual BCIs, patients who have lost control of their eye movements through disease progression or injury can now also benefit from the technology. Before the suggested hierarchical approach to assessing cognitive processing can be applied to patients with disorders of consciousness (DOC), normative studies are needed to establish how well ERP or mental-imagery classification predicts BCI usability.