https://elemental.medium.com/cant-sleep-try-quiet-wakefulness-instead-2b106e5b8e3c

Can’t Sleep? Try ‘Quiet Wakefulness’ Instead

Stop trying so hard to nap. Resting could have similar benefits.

Cassie Shortsleeve
Dec 27 · 5 min read

Photo: Luis Alvarez/Getty Images

When professional sports organizations are looking to build a nap room for players, one of the first things that sleep specialist W. Christopher Winter, M.D., tells them is: Don’t call it that.

“We try to get teams to call these rooms something that doesn’t have ‘sleep’ or ‘nap’ in the title — the ‘restoration room’ or the ‘regeneration room,’ for example,” explains Winter, who consults with the MLB, NHL, and NBA.

The reason: to take away the implied it’s-time-to-sleep pressure, where your experience is considered a success if you sleep and a failure if you don’t.

The other reason: It introduces the idea of a powerful resting activity called “quiet wakefulness,” which is gaining traction among sleep doctors and busy-but-health-conscious circles.

What exactly is quiet wakefulness?

In short, it’s simply resting with your eyes closed. It’s compelling, in part, because it completely eliminates the stress surrounding sleep — particularly that I can’t fall asleep right now so my health is going to fall apart feeling that keeps you awake.

Stress and naps are a common yet unfortunate pairing, Dr. Winter explains. Many people can work themselves up so much about falling asleep that they struggle to actually do it.

Of course, that’s normal. “Most people don’t have complete control over their sleep,” Dr. Winter acknowledges. It would be strange, he says, to meet somebody who says, I have never had any trouble sleeping whatsoever. Having occasional sleep problems is to be expected.

But while you might not be able to fully control exactly when you fall asleep, you can control when you rest — and that’s one of quiet wakefulness’ biggest benefits.

The boons of rest are bigger than executing control over your time. The National Sleep Foundation notes that quiet wakefulness can give brain cells, muscles, and organs a break, reducing stress and improving mood, alertness, creativity, and more.

Some studies even suggest slightly slower reaction times after a nap than after a rest period, thanks to the sleep inertia (a.k.a. grogginess after waking up) that sleep itself, but not rest, can cause.

During quiet wakefulness, when the brain is not actively engaged in responding to the outside world, some of the brain electrical activity is similar to what you’d see during sleep, explains Dr. Ritchie Edward Brown, a research health scientist at VA Boston Healthcare System and an associate professor of psychiatry at Harvard Medical School who studies brain physiology and the sleep-wake cycle.

Research also suggests that there may be similar benefits between sleep and rest in terms of how you process information you’ve been exposed to or how you try to find solutions to problems. One Cell Reports study of rats found that during quiet wakefulness, rats replayed and contextualized past events in order to inform their future rat choices. How do scientists coerce a rat into quiet wakefulness, you may ask? They don’t. Quiet wakefulness in rats means they are sitting or lying in one place, grooming themselves, or just looking around, which is something the animals do naturally. Scientists just look at the electrical brain activity when they are in this state compared to when the rats are active and running around.

Another small study out of the University of California, San Diego found that people who napped and those who simply rested performed the same on a visual test where they had to find a “T” image on a screen, suggesting that for some cognitive tasks, the benefits of resting are equal to those of actually sleeping.

Ultimately, though, resting quietly with your eyes closed can leave you feeling surprisingly refreshed, says Dr. Winter. And that can help you seek out more quiet moments. “Once you know that you can feel more rested whether you sleep or not, that feel-good feeling can feed off of itself,” he says.

Sleep still reigns

Quiet wakefulness has its weaknesses. For one, it’s unlikely that quiet wakefulness is close to actual sleep in terms of its true restorative benefits, says Brown. “If this was true, then there wouldn’t be such a strong drive to sleep when we stay awake for a long time and sleep deprivation wouldn’t be so harmful.” Anyone who’s had a baby, worked an overnight shift, or simply pulled an all-nighter knows all too well the emotional and physical tolls that come with little to no actual shuteye.

What’s more, the brain uses about 40% less energy during sleep versus when you’re awake, and levels of wakefulness-promoting neurotransmitters such as histamine and norepinephrine are higher during quiet wakefulness than sleep. “There is greatly enhanced clearance of toxic proteins during sleep compared to wakefulness,” says Brown.

In short, deep stages of sleep are key for helping you process emotions, remember new information, and repair cells (basically everything you need to keep functioning like a high-functioning adult). During these stages, the brain produces slow brain waves called delta waves, which are only seen during sleep, says Dr. Winter. Ensuring that you’re getting the recommended seven to nine hours of sleep a night is key.

But in the midst of our busy schedules and stressed out lives, quiet wakefulness can infuse much-needed moments of calm and provide some health benefits, to boot. If you want to give it a try, consider these two jumping-off points.

1. Learn to meditate deeply

You can take being relaxed and quiet to another level through meditation. Some early studies that monitored people’s electrical brain activity suggested that deep meditators were able to reach a near–sleep-like state while still awake. But the two aren’t exactly equal. During meditation, your brain is likely not creating delta waves but alpha waves — a type of brain wave linked with relaxation, an uptick in creativity, a decrease in depressive symptoms, and, as shown in research on Tibetan Buddhist monks, long-lasting changes in brain function.

Meditation also leads to an increase in beta waves (linked with focus) and gamma waves (linked with processing information from different brain areas). Hone your meditation skills with one of many apps (Headspace or Calm, for example) or by taking an in-person class.

2. Change the way you talk and think about sleep

Just as Dr. Winter advises his sports team clients, you should change the way you talk about sleep.

Instead of putting your child down for a nap, put them down for “quiet time.” Instead of taking a nap yourself, close your eyes, turn the lights out, set an alarm for 20 minutes, and just rest.

Simply enjoying being awake in bed has restorative benefits. “If you truly think that either you’re going to get into bed and fall asleep, or be awake and that’s fine, then either way it’s a win,” Dr. Winter says. “This kind of transforms the act of sleeping in general.”

Plus, dropping the stress surrounding falling asleep can actually help you fall asleep in the first place; it’s usually Dr. Winter’s go-to tip for overcoming sleep issues such as insomnia.

https://singularityhub.com/2019/12/28/these-were-singularity-hubs-top-10-articles-in-2019/

These Were Singularity Hub’s Top 10 Articles in 2019

Most Saturdays we post a curated collection of notable news and awesome articles from the week. But with the year nearing its end, this Saturday and next we’ll curate 2019 as a whole. First, in this post, we’ll take a look at the year’s top articles from Singularity Hub, and next week we’ll post some of our favorite writing from around the web.

The year was a bit of a rollercoaster. We got the Impossible Whopper, an advanced robot dog called Spot, a “word processor” for gene editing, and the first image of a black hole. We also marked the dubious anniversary of the first genetically modified babies, scientists called for a global moratorium on germline engineering, and big tech continued to face a backlash from within and without. Machine learning algorithms beat top players in multiplayer video games, and a former world champion in the game of Go retired, saying AI cannot be defeated. Meanwhile, prominent AI researchers suggested deep learning is fast approaching its limits.

The most popular articles on Singularity Hub looked ahead to the future of work and the end of Moore’s Law (and what’s coming next), surveyed the augmented reality and virtual reality landscape, and covered quick progress in neuroscience, biotech, and medicine.

AI Will Create Millions More Jobs Than It Will Destroy. Here’s How
Byron Reese
“Some fear that as AI improves, it will supplant workers, creating an ever-growing pool of unemployable humans who cannot compete economically with machines. This concern, while understandable, is unfounded. In fact, AI will be the greatest job engine the world has ever seen.”

5 Discoveries That Made 2018 a Huge Year for Neuroscience
Shelly Fan
“2018 was when neuroscience made the impossible possible. …Here are five neuroscience findings from 2018 that still blow our minds as we kick off the new year.” [Note: Be sure to check out this year’s list too—2019 was another fascinating year for brain science.]

Wait, What? The First Human-Monkey Hybrid Embryo Was Just Created in China
Shelly Fan
“The morality and ethics of growing human-animal hybrids are far from clear. …What is clear, however, is that when it comes to human-animal chimeras, lines are being set, pushed, crossed, and crossed again.”

The World’s Most Valuable AI Companies, and What They’re Working On
Peter Rejcek
“…the startups working on many of these AI technologies have seen their proverbial stock rise. More than 30 of these companies are now valued at over a billion dollars, according to data research firm CB Insights, which itself employs algorithms to provide insights into the tech business world.”

5 Breakthroughs Coming Soon in Augmented and Virtual Reality
Peter Diamandis, MD
“After creating the virtual civilization Second Life in 2003, now populated by almost 1 million active users, Philip [Rosedale] went on to co-found High Fidelity, which explores the future of next-generation shared VR. In just the next five years, he predicts five emerging trends will take hold, together disrupting major players and birthing new ones.”

How Three People With HIV Became Virus-Free Without HIV Drugs
Shelly Fan
“Dubbed the ‘Berlin Patient,’ Timothy Ray Brown, an HIV-positive cancer patient, received a total blood stem cell transplant to treat his aggressive blood cancer back in 2008. He came out of the surgery not just free of cancer—but also free of HIV. Now, two new cases suggest Brown isn’t a medical unicorn. …Does this mean a cure for HIV is in sight? Here’s what you need to know.”

The Origin of Consciousness in the Brain Is About to Be Tested
Shelly Fan
“Here’s something you don’t hear every day: two theories of consciousness are about to face off in the scientific fight of the century. …The ‘outlandish’ project is already raising eyebrows…[but] even if [it] can somewhat narrow down divergent theories of consciousness, we’re on our way to cracking one of the most enigmatic properties of the human brain.”

The Age of Solar Energy Abundance Is Coming in Hot
Peter Diamandis, MD
“As the price-performance ratio of solar technologies begins to undercut traditional energy sources, we will soon witness the mass integration of solar cells into everyday infrastructure, meeting energy demands across the globe.”

Moore’s Law Is Dying. This Brain-Inspired Analogue Chip Is a Glimpse of What’s Next
Shelly Fan
“This week, a team from Pennsylvania State University designed a 2D device that operates like neurons. Rather than processing yes or no, the ‘Gaussian synapse’ thrives on probabilities. Similar to the brain, the analogue chip is far more energy-efficient and produces less heat than current silicon chips, making it an ideal candidate for scaling up systems.”

New Progress in the Biggest Challenge With 3D Printed Organs
Edd Gent
“We’re tantalizingly close to growing organs in the lab, but the biggest remaining challenge has been creating the fine networks of blood vessels required to keep them alive. Now researchers have shown that a common food dye could solve the problem.”

https://www.huffingtonpost.ca/entry/sleep-extra-hour_ca_5df7a5f0e4b03aed50f21431

Getting An Extra Hour Of Sleep A Day Can Make You More Emotionally Intelligent

Sticking to a sleep schedule can be invaluable.

 

Today’s habit: Get an extra hour of sleep.

For whenever you’re feeling: Constantly tired; sluggish; but also, maybe you just love sleep and want to do it more.

 

What it is: These are the people who have told me to get more sleep: my doctor, my mom, Instagram “influencers,” bloggers, health experts in the media, and my husband. Before I had a baby, I used to get a lot of it. Now I get a lot less and I’m constantly tired. But that’s parenting for ya.

If I could, I would love to get eight to nine hours of sleep a night, but that’s not possible, right? Yes, I do have to get up earlier to accommodate my busy schedule, but I don’t have to stay up until midnight reading a book, or watching a dozen ASMR videos.

So, in an effort to be healthier, I’m going to try to get an extra hour of sleep at night by going to bed earlier.

 

 


How it can help: The amount of sleep you need in a 24-hour cycle depends on the person (and their age), so the National Sleep Foundation has created a rule-of-thumb chart that shows the average number of hours of sleep you should get, depending on your age. For example, if you’re between the ages of 26 and 64, you should be getting seven to nine hours of sleep a day.

 

Even if you feel like you’re getting a decent amount of sleep, experts say that most teens and adults are sleep deprived (30 per cent of adults get fewer than six hours a night). So, an extra hour of zzz’s can make a huge difference in how you feel, both physically and mentally.

 

“You’re going to feel better, you’ll have more energy, you’ll have better ideas … your mood’s going to be better,” Rachel Salas, an associate professor of neurology who specializes in sleep medicine and sleep disorders at Johns Hopkins University, told BBC News.

 

Getting a good night’s sleep will help you concentrate better, be more productive, have more energy and quicker reactions, lower your risk of heart disease, ward off depression, and become more emotionally intelligent.

How to get started: The National Sleep Foundation has a few helpful tips on how to get a better night’s sleep, including:

  • Stick to a sleep schedule
  • Practice a relaxing nighttime ritual
  • Get regular exercise
  • Turn off your devices (phones, laptops, e-readers, iPads) before you go to bed
  • Make sure the temperature levels are comfortable
  • Don’t drink alcohol or caffeine before bed
  • Sleep on a comfortable mattress and pillow

Personally, because I know that my phone is keeping me up later, I turn it off and put it on my bookshelf across from my bed so I can’t reach for it. This forces me to turn off the lights earlier because now all I can do is read for a few minutes rather than watch YouTube videos for an hour or two.

 

So, if you can pinpoint the one thing that is preventing you from going to bed earlier, like a late dinner, a TV show, the kids’ bedtime routine, or checking work emails, find a way to deal with it earlier so it doesn’t keep you up late.

 

If you have insomnia (trouble falling asleep and/or staying asleep), talk to your doctor about how you can tackle it. Maybe anxiety is keeping you awake, and if that’s the case, learning relaxation and meditation techniques is a good start to help calm your mind when you’re trying to fall asleep.

 

How it makes us feel: Getting eight hours of sleep a day as opposed to my usual six hours makes me feel like a brand new person. I feel like I can tackle anything, whether it’s my baby’s tantrums, a work deadline, or making dinner.

 

And generally, I’m in a much better mood and feel more content during the day.


https://www.scmp.com/lifestyle/gadgets/article/3043757/start-takes-three-years-send-my-smart-glasses-however-much-i-them

Start-up takes three years to send my smart glasses. However much I like them – and I do – that’s not a good look

  • Our consumer tech editor was among the 10,410 people who crowdfunded San Francisco start-up Vue’s smart glasses in 2016. He’s finally received his pair
  • The prescription is spot on despite the wait, and the glasses really are smart. Yet for all Vue’s transparency about delays, the experience is a disappointment
A pair of Vue Smart glasses. Customers who backed start-up Vue on crowdfunding platform Kickstarter in 2016 are finally receiving their orders. The glasses live up to their billing, but the charging system is poorly designed. Photo: Antony Dickson

They finally arrived. The pair of smart glasses I bought from Vue, a San Francisco-based tech start-up, back in December 2016 landed on my desk at the end of last month – almost three years after I placed the order. Had it not been for the occasional “project updates” I received through email, I would have completely forgotten about this US$204 purchase.

The Kickstarter campaign for the project raised US$2.2 million from 10,410 backers, but the production of these glasses in Shenzhen, southern China was plagued by technical issues.

A June 2017 update I received said: “Progress is great, but we also want to make sure to give backers insight into the true complexity of manufacturing a product, and that includes talking about headaches.” Of which there were plenty, evidently. By that point, I knew I would not be getting my Vue Smart Glasses in July 2017 as pledged.

But at least I eventually got mine. Looking through some of the 5,000-plus comments on Vue’s website, it looks like some of the backers are still waiting for theirs.

A pair of Vue Smart glasses and charging case. Photo: Antony Dickson

So, has it been worth the wait?

I’m very impressed with the quality of the protective case, which also serves as the charging dock for the glasses. It feels very sturdy and snaps shut firmly when closed. Though slightly on the bulky side, the dark green case is fine to carry around. On its right side is a USB (Type-C) port, which lights up when plugged into a power source.

The glasses, too, are of decent build. The black frame is solid with tight hinges, but since the arms are not connected to the hinges by screws, they may loosen over time. There are no nose pads, but the arms press onto both sides of the head quite tightly, and this pressure stops them sliding down your nose – even if your nose is of the flatter kind like mine.

The frame is quite thick. Whether it looks stylish is a matter of opinion. Some may say the glasses look artsy, others nerdy.

The charging connection on a pair of Vue Smart glasses. Photo: Antony Dickson

They are charged inside the protective case through a tiny dual-pin connector, so they need to be folded in a certain way to ensure the right temple tip is touching the pins. Proper connection is indicated by a pulsing white LED light. Since the pins are so small, more often than not a connection is not made. This is just poor design.

What impresses me most is that the lenses exactly match my prescription – I am extremely myopic – even with the three-year delay. I am able to see clearly with the glasses and can wear them for a long period of time without getting a headache (which was a primary concern).

But I didn’t spend US$204 for just a normal pair of glasses. Vue glasses are supposed to be smart – meaning I can use them to answer phone calls, listen to music, check how many steps I have walked and how many calories I have burned. Pairing them to my mobile phone was straightforward, the Vue app is easy to use and everything works pretty much as advertised.

Vue Smart glasses do everything Vue said they would do, but they may not be “the future of mainstream wearables” some envisaged in 2016. Photo: Antony Dickson

Using the bone conduction audio technology – sound is transmitted directly to the inner ear through vibration on the bones of the skull – I am able to listen to music and take calls when I have the glasses on. But I need to be in a very quiet environment to actually hear anything. While the sound quality is good, the volume is not.

Having said that, if the sound was any louder that tingling sensation on the back of my ears from the vibration would probably feel like a bad itch.

Vue Smart Glasses were announced around the same time Snap released its trendsetting Spectacles and were considered by the tech media to be “the future of mainstream wearables”.

But that was 2016.

Despite Vue’s transparency throughout the campaign, my first experience backing a start-up project has been a real disappointment. Technology – and myopia – can advance so much in just a few months, let alone three years.

https://newatlas.com/quantum-computing/quantum-teleportation-computer-chips/

Information teleported between two computer chips for the first time

Researchers have managed to quantum teleport information between two computer chips for the first time

Scientists at the University of Bristol and the Technical University of Denmark have achieved quantum teleportation between two computer chips for the first time. The team managed to transfer quantum information from one chip to another without any physical or electronic connection between them, a feat that opens the door for quantum computers and the quantum internet.

This kind of teleportation is made possible by a phenomenon called quantum entanglement, where two particles become so entwined with each other that measuring one instantly determines the outcome of a measurement on the other, no matter how much space separates the two of them. Paired with an ordinary communication channel, that correlation lets a quantum state be transferred from one particle to the other.

Hypothetically, there’s no limit to the distance over which entanglement can operate – and that raises some strange implications that puzzled even Einstein himself, who dubbed it “spooky action at a distance.” It can look as if information breaks the cosmic speed limit, but it doesn’t: completing a teleportation always requires a classical signal, which can travel no faster than light.

Harnessing this phenomenon could clearly be beneficial, and the new study helps bring that closer to reality. The team generated pairs of entangled photons on the chips, then made a quantum measurement of one. Once the measurement result is communicated to the other chip, the corresponding state appears on the partner photon there.

“We were able to demonstrate a high-quality entanglement link across two chips in the lab, where photons on either chip share a single quantum state,” says Dan Llewellyn, co-author of the study. “Each chip was then fully programmed to perform a range of demonstrations which utilize the entanglement. The flagship demonstration was a two-chip teleportation experiment, whereby the individual quantum state of a particle is transmitted across the two chips after a quantum measurement is performed. This measurement utilizes the strange behavior of quantum physics, which simultaneously collapses the entanglement link and transfers the particle state to another particle already on the receiver chip.”
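Llewellyn’s description corresponds to the textbook single-qubit teleportation protocol, which is small enough to simulate directly. Below is a minimal statevector sketch of that protocol – our own illustration, not the photonic-chip experiment; all names and numbers are invented for the example:

```python
import numpy as np

# Toy statevector model of single-qubit teleportation. Qubit 0 holds the
# state to send; qubits 1 and 2 form the entangled pair, with qubit 2
# playing the role of the receiver chip.

I = np.eye(2)
H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
X = np.array([[0, 1], [1, 0]])
Z = np.diag([1.0, -1.0])

def on(qubit, gate):
    """Lift a 1-qubit gate to the 3-qubit space (qubit 0 is the MSB)."""
    ops = [I, I, I]
    ops[qubit] = gate
    return np.kron(np.kron(ops[0], ops[1]), ops[2])

def cnot(control, target):
    """Permutation matrix flipping `target` when `control` is 1."""
    U = np.zeros((8, 8))
    for i in range(8):
        bits = [(i >> (2 - q)) & 1 for q in range(3)]
        if bits[control]:
            bits[target] ^= 1
        U[(bits[0] << 2) | (bits[1] << 1) | bits[2], i] = 1
    return U

def teleport(alpha, beta, rng=np.random.default_rng()):
    """Teleport alpha|0> + beta|1> from qubit 0 to qubit 2."""
    state = np.zeros(8, dtype=complex)
    state[0b000], state[0b100] = alpha, beta

    # Prepare the Bell pair on qubits 1 and 2.
    state = cnot(1, 2) @ on(1, H) @ state
    # Sender's Bell-measurement basis change: CNOT q0->q1, then H on q0.
    state = on(0, H) @ cnot(0, 1) @ state

    # Measure qubits 0 and 1 and collapse the state accordingly.
    probs = np.abs(state) ** 2
    outcome = rng.choice(8, p=probs / probs.sum())
    m0, m1 = (outcome >> 2) & 1, (outcome >> 1) & 1
    keep = [((i >> 2) & 1) == m0 and ((i >> 1) & 1) == m1 for i in range(8)]
    state = np.where(keep, state, 0)
    state /= np.linalg.norm(state)

    # The crucial classical step: the two measured bits are sent to the
    # receiver, which applies X and/or Z corrections. Without these bits,
    # qubit 2 is left in a random state -- no faster-than-light signaling.
    if m1:
        state = on(2, X) @ state
    if m0:
        state = on(2, Z) @ state

    # Read off qubit 2 (qubits 0 and 1 are now fixed at m0, m1).
    base = (m0 << 2) | (m1 << 1)
    return state[base], state[base | 1]

a, b = teleport(0.6, 0.8)
print(abs(a), abs(b))  # qubit 2 carries the original 0.6/0.8 amplitudes
```

The photonic chips replace these explicit gates with interferometers and single-photon detectors, but the logic is the same: entangle, measure, send two classical bits, correct.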

The team reported a teleportation success rate of 91 percent, and managed to perform some other functions that will be important for quantum computing. That includes entanglement swapping (where states can be passed, via a mediator, between particles that have never directly interacted), and entangling as many as four photons together.

Information has been teleported over much longer distances before – first across a room, then 25 km (15.5 mi), then 100 km (62 mi), and eventually over 1,200 km (746 mi) via satellite. It’s also been done between different parts of a single computer chip before, but teleporting between two different chips is a major breakthrough for quantum computing.

The research was published in the journal Nature Physics.

Source: University of Bristol

https://www.tomshardware.com/features/top-raspberry-pi-projects-2019

Top 10 Raspberry Pi Projects of 2019

Raspberry Pi Project With LCD Screen

(Image credit: Shutterstock)

This year has been an incredible year for the Raspberry Pi community. Not only did we get the Raspberry Pi 4, but we also saw tons of new projects, ideas and innovations from the worldwide Pi community.

Many creators are pushing the boundaries of what we can build in hopes of creating a better, smarter, or even more whimsical future. Today we’re looking at the best of the best. These are our picks for the top Raspberry Pi projects of 2019, in no particular order.

If you want a taste of what the community has been busy tinkering with this year, this is the list for you. And don’t worry—we listed the source code on projects where it’s available so you can create them yourself.

Telepresence Hand for Hazardous Areas

Telepresence Hand

(Image credit: Andrew Loeliger)

Engineering student Andrew Loeliger watched Pacific Rim and did what the rest of us only dream of doing: he created a robotic hand that can be controlled remotely using a Raspberry Pi.

The robotic hand is operated by a user who wears a glove controller. Moving the glove will cause the robotic hand to move. A Raspberry Pi Zero transmits input information from the glove to a servo driver board on the robotic hand. Individual finger movement is controlled by a pulley system constructed of fishing line attached to a servo motor. The entire project cost about £300, Loeliger told us.

Loeliger is working to add more features, including haptic feedback in the glove that would allow the user to feel what the robotic hand is touching. He hopes the concept can be used to help first responders in hazardous situations. We could see a number of use cases, including helping disabled folks, enabling scientists to handle dangerous materials or shaking hands with someone in a remote meeting (don’t squeeze too hard).

Cheeseborg – Raspberry Pi Grilled Cheese Maker

(Image credit: Taylor Tabb)

This project was created by a group of friends and mechanical engineering students at Carnegie Mellon University—Taylor Tabb, Mitchell Riek, and Evan Hill. Together, they created a Raspberry Pi-powered grilled cheese making machine known as the Cheeseborg.

We reached out to the team and received confirmation from Taylor—the grilled cheese sandwiches made by the Cheeseborg are definitely tasty. They even spent 3 weeks researching what people like most in a grilled cheese sandwich.

The machine is voice activated, using Google Assistant on a Raspberry Pi to trigger the grilled cheese making process. The Cheeseborg handles everything from assembly to cooking before delivering the sandwich into a pocket on the side of the machine. Read more about the project and follow its progress on Taylor Tabb’s website.

Raspberry Pi-Powered Tweet Plotter

(Image credit: iooner)

This project draws up some serious creativity and brings old technology to a new level by integrating IoT features. Using an old Roland DXY-990 plotter, Liege Hackerspace founders @drskullster (Jonathan Berger) and @iooner created a Pi-controlled plotter that can write tweets in real time. You can catch a video of the Pi-powered tweet plotter in action on Reddit.

Iooner uses the Raspberry Pi to relay tweets to the plotter using HPGL code. You can adjust what parameters are used to select tweets in the source code—add a specific hashtag or even a list of hashtags.

Iooner was nice enough to share the source code for the creation. If you want to get in on this project yourself, everything you need to get started is available on github.
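For a sense of what drives the plotter, here’s a rough, hypothetical sketch (not iooner’s actual code) of turning a tweet’s text into HPGL, the plotter language the Roland DXY-990 speaks. The coordinates, wrap width, and function name are all invented for illustration:

```python
import textwrap

# Hypothetical sketch: convert a tweet's text into HPGL commands for a
# pen plotter such as the Roland DXY-990. All coordinates are made up.

ETX = "\x03"  # HPGL label text is terminated by the ETX character

def tweet_to_hpgl(text, chars_per_line=40, line_height=300):
    """Build an HPGL string that letters the tweet line by line."""
    commands = ["IN;", "SP1;"]              # initialize, select pen 1
    y = 7000                                # start near the top of the page
    for line in textwrap.wrap(text, chars_per_line):
        commands.append(f"PU100,{y};")      # pen up, move to line start
        commands.append(f"LB{line}{ETX};")  # LB = label (draw the text)
        y -= line_height
    commands.append("SP0;")                 # stow the pen
    return "".join(commands)

print(tweet_to_hpgl("Hello from the tweet plotter!"))
```

On the Pi, the resulting string would be written out to the plotter’s serial port (for example with pyserial), and the tweets themselves fetched with any Twitter API client; the real parameters live in iooner’s source on GitHub.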

Raspberry Pi 3D Scanner

Raspberry Pi 3D Scanner

(Image credit: Thomas Megel / OpenScan)

It’s amazing what a Raspberry Pi can do with a few attachments. Creator Thomas Megel uses more than a few extra parts for his project—an open-source 3D scanner called OpenScan.

The 3D scanner uses a Raspberry Pi to rotate and scan an object. It then creates a 3D model of the scanned item using photogrammetry. Thomas uses an RPi camera but suggests trying other options like a smartphone or DSLR camera.

He’s made the OpenScan project easily accessible to the Pi community. If you want to create your own Pi powered 3D scanner, check out the kit on his website. You can follow OpenScan on Instagram and Facebook.

Remote-Controlled Underwater Pi Drone

Raspberry Pi Underwater Drone

(Image credit: Ievgenii Tkachenko)

Android developer Ievgenii Tkachenko is taking Pi projects to new depths by sinking his Raspberry Pi deep underwater. If you grew up with RC cars, you may have pictured what it would be like to pilot an RC submarine—this is essentially what Ievgenii has created with his underwater Pi drone. It features motion control, lights and a camera to complete the experience.

Ievgenii’s underwater drone is propelled by four motors. To send control signals to them, he designed an Android app to interface with the Raspberry Pi. You can use a touchscreen to control the input or even a gamepad.

You can follow Ievgenii’s progress and see more demonstration videos of the project on his official YouTube channel.

Chord Assist: Raspberry Pi Guitar Assistant

Chord Assist

(Image credit: Joe Birch)

The Chord Assist project was designed by Joe Birch. It uses a Raspberry Pi connected to a guitar to display chord input.

It features a refreshable braille display and can vocalize chord information—making it easier for sight-impaired persons to play, learn and tune the guitar. The project also benefits hearing-impaired artists who can read the chord output information on the LCD screen.

Chord Assist uses Google Assistant to operate. It also requires a microphone, speaker and series of displays for data readout. You can read more about the project and follow its progress on the official Chord Assist website.

Raspberry Pi Hologram Pyramid

Raspberry Pi Hologram Pyramid

(Image credit: Dan Aldred)

It’s 2019—of course you can make holograms at home. At least, that’s what school teacher Dan Aldred did when he created his Pi Hologram Machine. It all started after he watched his students do something similar with an old CD case and a cell phone.

It features four acrylic panels (though he recommends using glass) arranged in a pyramid shape. Hologram videos display on a screen just above the pyramid, creating a 3D image inside. He uses a Raspberry Pi A+ to control the whole operation.

If you’d like to see how it works, Dan uploaded the project scripts to GitHub. Check it out and maybe even make it yourself.

Oracle’s Pi Supercomputer

Oracle Supercomputer Cluster

(Image credit: Serve The Home)

Some projects are just super—like Oracle’s Raspberry Pi Supercomputer project from Oracle’s OpenWorld convention in September. This Raspberry Pi cluster was built by a team from the company. When asked why they chose to use so many Pis instead of a virtualized Arm server, they simply stated, “…a big cluster is cool.”

It features 1,060 Raspberry Pis in total, mounted in racks that house 21 Raspberry Pi 3B+ boards each. The system is operational, running Oracle Autonomous Linux. This project definitely isn’t built for function, but the novelty is plenty exciting.

Plynth Record Reader and Player

Plynth Raspberry Pi Record Scanner

(Image credit: Plynth.com)

If you don’t feel like searching for your music by typing, why not try record recognition? This maker, known on Reddit as sp_cecamp, created a Pi-controlled record scanner that can play music for you when you simply place a record in front of it.

The Raspberry Pi uses a camera to take a photo of the album cover and checks it against a database. Once it has identified the album, it plays it using whichever streaming service you choose to connect.
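Plynth’s matching pipeline isn’t described in detail here, but one common way to match a photographed cover against a database is a perceptual hash compared by Hamming distance. A minimal sketch with hypothetical names (real systems would use a more robust hash and a larger thumbnail):

```python
def average_hash(pixels):
    """64-bit average hash of an 8x8 grayscale thumbnail, given as a flat
    list of 64 brightness values: one bit per pixel, above/below the mean."""
    mean = sum(pixels) / len(pixels)
    return sum(1 << i for i, p in enumerate(pixels) if p > mean)

def hamming(a, b):
    """Number of differing bits between two hashes."""
    return bin(a ^ b).count("1")

def match_album(photo_hash, library, max_distance=10):
    """Return the closest album title, or None if no cover is similar enough.
    `library` is a list of (cover_hash, title) pairs."""
    ref_hash, title = min(library, key=lambda item: hamming(photo_hash, item[0]))
    return title if hamming(photo_hash, ref_hash) <= max_distance else None
```

A small Hamming-distance threshold tolerates lighting and camera noise while still rejecting covers that simply aren’t in the library.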

Sp_cecamp labeled the project Plynth. You can read more about it and follow its progress on the official Plynth website.

Pi-controlled Genie Boom

(Image credit: iooner)

Words almost can’t describe the ingenuity of the Pi community: when makers see a problem, they fix it with Pi. This maker, known on Reddit as Ccundiff12, owns a Genie Boom. When it gets stuck, it takes two people to move it—one to drive a tractor and one to drive the Genie Boom.

He took this as an opportunity to control the Genie Boom with a Raspberry Pi—and yes, a gamepad controller will work to steer it.

You can watch the Pi-powered Genie Boom project in action on Reddit.



https://medicalxpress.com/news/2019-12-mri-intelligence-children.html

Can MRI predict intelligence levels in children?

Credit: Skolkovo Institute of Science and Technology

A group of researchers from the Skoltech Center for Computational and Data-Intensive Science and Engineering (CDISE) took 4th place in the international MRI-based adolescent intelligence prediction competition. For the first time, the Skoltech scientists used ensemble methods based on deep 3-D networks for this challenging prediction task. The results of their study were published in the proceedings volume Adolescent Brain Cognitive Development Neurocognitive Prediction.

In 2013, the US National Institutes of Health (NIH) launched the first grand-scale study of its kind in adolescent brain research, Adolescent Brain Cognitive Development (ABCD, abcdstudy.org/), to see if and how teenagers’ hobbies and habits affect their further brain development.

Magnetic Resonance Imaging (MRI) is a common technique used to obtain images of human internal organs and tissues. Scientists wondered whether a person's intelligence level can be predicted from an MRI brain image. The NIH database contains over 11,000 structural and functional MRI images of children aged 9-10.

NIH scientists launched an international data analysis competition, making the enormous NIH database available to a broad community for the first time. The participants were given the task of building a predictive model based on brain images. As part of the competition, the Skoltech team applied neural networks to MRI image processing. To do this, they built a network architecture that enables several mathematical models to be applied to the same data in order to increase prediction accuracy, and used a novel ensemble method to analyze the MRI data.
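The article doesn't include the team's code, but the core ensembling idea is simple: train several models on the same data and combine their predictions. A minimal sketch, with plain functions standing in for the actual 3-D CNN regressors:

```python
def ensemble_predict(models, scan):
    """Average the predictions of several independently trained regressors.
    Each `model` maps an input scan to a single predicted score."""
    predictions = [model(scan) for model in models]
    return sum(predictions) / len(predictions)

def weighted_ensemble(models, weights, scan):
    """Weighted variant: weights might reflect each model's validation accuracy."""
    total = sum(weights)
    return sum(w * m(scan) for m, w in zip(models, weights)) / total
```

Averaging cancels out the uncorrelated errors of the individual models, which is why an ensemble usually beats its best single member.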

In their recent study, the Skoltech researchers focused on predicting the intelligence level, or the so-called “fluid intelligence,” which characterizes the biological abilities of the nervous system and has little to do with acquired knowledge or skills. Importantly, they predicted a fluid intelligence score defined to be independent of age, gender, brain size and the MRI scanner used.

“Our team develops deep learning methods for computer vision tasks in MRI data analysis, amongst other things. In this study, we applied ensembles of classifiers to 3-D images of super precision: with this approach, one can classify an image as it is, without first reducing its dimension and, therefore, without losing valuable information,” explains CDISE Ph.D. student Ekaterina Kondratyeva.

The results of the study helped find the correlation between a child’s “fluid intelligence” and brain anatomy. Although the prediction accuracy is less than perfect, the models produced during this competition will help shed light on various aspects of the cognitive, social, emotional and physical development of adolescents. This line of research will definitely continue to expand.

The Skoltech team was invited to present their new method at one of the world’s most prestigious medical imaging conferences, MICCAI 2019, in Shenzhen, China.


Explore further

Artificial intelligence boosts MRI detection of ADHD


More information: Marina Pominova et al, Ensemble of 3D CNN Regressors with Data Fusion for Fluid Intelligence Prediction, Adolescent Brain Cognitive Development Neurocognitive Prediction (2019). DOI: 10.1007/978-3-030-31901-4_19

https://www.techradar.com/uk/news/7-wearables-to-get-excited-for-in-2020-apple-watch-6-to-new-smart-glasses

7 wearables to get excited for in 2020: Apple Watch 6 to new smart glasses

https://www.motor1.com/news/390084/mustang-mach-e-delivery-timeline/amp/

https://phys.org/news/2019-12-matriex-imaging-simultaneously-neurons-action.html

MATRIEX imaging: Simultaneously seeing neurons in action in multiple regions of the brain

Design and implementation of MATRIEX imaging: (a) Experimental diagram of the MATRIEX imaging system. The two round 3D objects in the lower-left corner are the top and bottom views of the mouse head chamber used for in vivo imaging. (Ti:Sa): Ti:Sapphire ultrafast pulsing laser; PC: Pockels cell; BE: beam expander; SM1 and SM2: x–y scanning mirrors; SL: scan lens; TL: tube lens; DM: dichroic mirror; CL: collection lens; PMT: photomultiplier tube; DO: dry objective; MOs: miniaturized objectives. (b) Photograph showing an oblique overview of the actual MATRIEX imaging system. (c) The photograph in the upper image shows a zoomed in view of the three MOs attached to the manipulating bars over the head chamber; the lower photograph was taken directly above the MOs with a smartphone camera. All MOs used in this figure are of the same model: ‘standard version.’ (d, e) Illustrations of the two-stage magnification and multiaxis coupling. The square images are actual two-photon images taken of 20-μm beads. Each red circle indicates one FOV. The model of DO used in panels (d-f) is the Olympus MPlan ×4/0.1, and all MOs in this figure are of the same customized model. (f) Illustration showing the absence of inter-FOV crosstalk under adjacent MOs. The images were taken on a uniform fluorescent plate. The red circles indicate the areas of analysis used to compare the image contrast between two conditions; the left-side condition shows the fluorescent plate under both MOs, and the right-side condition shows the fluorescence plate under only one MO. (g) Testing the optical resolution of the compound assembly with 0.51-μm beads. Curves: Gaussian fittings of raw data points. The on-axis or off-axis fluorescence intensity profiles were measured when the axis of the MO was aligned with the axis of the DO or apart from the axis of the DO (2 mm for the DO of ×4 or ×5, 3 mm for the DO of ×2.5, and 4 mm for the DO of ×2), respectively. 
Credit: Light: Science & Applications, doi: 10.1038/s41377-019-0219-x

Two-photon laser scanning microscopy is commonly applied to study neuronal activity at cellular and subcellular resolution in mammalian brains, but such studies have so far been confined to a single functional region of the brain. In a recent report, Mengke Yang and colleagues at the Brain Research Instrument Innovation Center, Institute of Neuroscience, Center for Systems Neuroscience and Optical System Advanced Manufacturing Technology in China, Germany and the U.K. developed a new technique named the multiarea two-photon real-time in vivo explorer (MATRIEX). The method allows the user to target multiple functional brain regions, each with a field of view (FOV) approximately 200 µm in diameter, and to perform two-photon Ca2+ imaging with single-cell resolution simultaneously across all regions.

Yang et al. conducted real-time functional imaging of single-neuron activities in the primary visual cortex, primary motor cortex and hippocampal CA1 region during anesthetized and awake states in mice. The MATRIEX technique can uniquely configure multiple microscopic FOVs using a single laser scanning device. As a result, it can be implemented as an add-on optical module within existing conventional single-beam-scanning two-photon microscopes without any other modifications. MATRIEX can be applied to explore multiarea neuronal activity in vivo, probing brain-wide neural circuit function with single-cell resolution.

Two-photon laser microscopy originated in the 1990s and became popular among neuroscientists interested in studying neural structure and function in vivo. A major advantage of two-photon and three-photon imaging of living brains is the optical resolution achieved across densely labelled, strongly scattering brain tissue, in which optically sectioned image pixels can be scanned and acquired with minimal crosstalk. The same design, however, carries a significant drawback: it prevents the simultaneous viewing of two objects separated by more than a certain distance. Researchers had previously devised many strategies to extend these limits, but the methods were difficult to implement in neuroscience research labs. Nevertheless, demand in neuroscience to investigate brain-wide neuronal function with single-cell resolution in vivo remains increasingly high.

LEFT: Experimental diagram of the MATRIEX imaging system. The two round 3D objects in the lower-left corner are the top and bottom views of the mouse head chamber used for in vivo imaging. (Ti:Sa): Ti:Sapphire ultrafast pulsing laser; PC: Pockels cell; BE: beam expander; SM1 and SM2: x–y scanning mirrors; SL: scan lens; TL: tube lens; DM: dichroic mirror; CL: collection lens; PMT: photomultiplier tube; DO: dry objective; MOs: miniaturized objectives. RIGHT: Illustrations of the two-stage magnification and multiaxis coupling. The square images are actual two-photon images taken of 20-μm beads. Each red circle indicates one FOV. Credit: Light: Science & Applications, doi: 10.1038/s41377-019-0219-x

In a straightforward approach, scientists could place two microscopes above the same animal brain to image the cortex and cerebellum simultaneously, but such efforts lead to substantial increases in complexity and cost. The high expectations for performance and feasibility therefore pose a challenging engineering question: how can a single imaging system simultaneously obtain live microscopic images from multiple brain regions in vivo? To address this question, Yang et al. introduced a new method that combines two-stage magnification and multiaxis optical coupling.

They realized the method using a low-magnification dry objective (DO) with multiple water-immersed, miniaturized objectives (MOs) beneath it. The scientists placed each MO at the desired target position and depth in the brain tissue. The team used the new compound objective assembly just like the original water-immersed microscope objective, without modifications to the image scanning and acquisition subsystem.

TOP: Configuring the MOs with different parameters to target object planes at different depths to then be conjugated on the same image plane. Each gray cylinder represents one lens with a pitch value, front working distance (L1), back working distance (L2) and length (Z). BOTTOM: Demonstration of MATRIEX imaging: structural imaging in multiple brain areas in vivo. a Left image: a full-frame image including two FOVs in the frontal association cortex (FrA) and the cerebellum. The red and yellow circles indicate two FOVs that are digitally enlarged and shown in the upper-right and lower-right images. A GAD67-GFP transgenic mouse (with the interneurons labeled brain-wide) was used. Two MOs (‘standard version’) were placed at the same depth under a DO (Mitutoyo ×2/0.055). b Example configuration of three FOVs in the cortex of a Thy1-GFP transgenic mouse (with layer 5 cortical neurons specifically labeled and with tuft dendrites visible near the cortical surface). Three MOs (‘standard version’) were placed at the same depth under a DO (Olympus ×4/0.1). Credit: Light: Science & Applications, doi: 10.1038/s41377-019-0219-x

The research team first assembled the MATRIEX compound objective: inside a two-photon laser scanning microscope equipped with a conventional single-beam raster scanning device, they replaced the standard water-immersion microscope objective with a customized compound objective assembly. The assembly contained multiple MOs (miniaturized objectives) inserted through multiple craniotomies, with a 3-D-printed plastic chamber glued to the skull of the mouse to roughly align the MOs and allow their lateral positions and depths to be adjusted. Yang et al. then precisely manipulated the individual MOs so that the objects under all MOs could be viewed simultaneously in the same image plane.

They implemented the MATRIEX method using two principles: two-stage magnification and multiaxis coupling. Demonstrating two-stage magnification, 20 µm beads appeared as tiny blurry dots through the dry objective (DO) alone, but as crisp, round circles through the compound assembly. For multiaxis coupling, the scientists coupled a single DO with multiple MOs on the same image plane. Using a simple raster scan in a single rectangular frame, the research team acquired a rectangular image containing multiple circular FOVs, where each FOV corresponds to one MO, with minimal inter-FOV pixel crosstalk.
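Because each MO appears as a circular FOV inside one rectangular raster frame, post-processing starts by deciding which FOV each pixel belongs to. A toy sketch of that bookkeeping (not the authors' analysis code; names are hypothetical):

```python
def fov_mask(width, height, centers, radius):
    """For each pixel of a raster frame, record the index of the circular
    FOV it falls inside; -1 means the pixel lies outside every FOV."""
    mask = [[-1] * width for _ in range(height)]
    for idx, (cx, cy) in enumerate(centers):
        for y in range(height):
            for x in range(width):
                # point-in-circle test against this FOV's centre
                if (x - cx) ** 2 + (y - cy) ** 2 <= radius ** 2:
                    mask[y][x] = idx
    return mask
```

With the mask in hand, each FOV's pixels can be cropped out and analyzed as an independent image, which is what makes the single-scanner trick practical.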

Demonstration of MATRIEX imaging: simultaneously acquiring live neuronal activity patterns in V1, M1, and hippocampal CA1 in mice in the anesthetized state or awake state. The neurons were labeled by a genetically encoded fluorescent Ca2+ indicator, GCaMP6f (a) Illustration showing the positioning of three MOs over the V1, M1 and hippocampal CA1 regions in a model mouse brain. (b) A camera photograph taken through the microscope ocular lens under white light bright-field illumination, in which three FOVs are readily visible. The upper region is V1, the lower-left region is CA1, and the lower-right region is M1. (c) A two-photon image, which is an average of 100 frames, acquired by simple full-frame raster scanning with a two-photon microscope. The solid white boxes show the three parts of the image that are enlarged in panel (d). (d) Digitally enlarged individual FOVs showing neurons in V1, M1, and CA1, from top to bottom. Scale bar: 40 μm. (e) Time-lapse Ca2+ signal traces of five example cells from each region, with each labeled by the cell index. Recordings of the same cell in the same animal in the anesthetized state (left side) and in the awake state (right side) are shown. (f) Left: traces showing individual Ca2+ signal events (split from each onset time and overlaid) from randomly selected example cells. Middle: Ca2+ signal traces of each of the neuropil zones that are directly adjacent to each of the example cells. Right: three box plots comparing the neuronal Ca2+ signal event amplitude to the neuron’s adjacent neuropil Ca2+ signal amplitude; paired Wilcoxon rank sum test, ***P < 0.001. (g) Log-normal fitting of the distribution histograms of the spontaneous Ca2+ event amplitude for data pooled from all animals. The red bars and fitted curve show the distribution of data recorded in the awake state, and the blue bars and fitted curve show the distribution of data recorded in the anesthetized state. 
(h) Pairwise neuronal activity correlation (Pearson correlation coefficients) for data pooled from all animals. The red bars show the distribution of data recorded in the awake state, and the blue bars show the distribution of data recorded in the anesthetized state. Credit: Light: Science & Applications, doi: 10.1038/s41377-019-0219-x
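Panel (h) above reports pairwise Pearson correlation coefficients between neurons. As a plain-Python illustration of that statistic (a sketch, not the authors' analysis code):

```python
def pearson(a, b):
    """Pearson correlation coefficient between two activity traces."""
    n = len(a)
    mean_a, mean_b = sum(a) / n, sum(b) / n
    cov = sum((x - mean_a) * (y - mean_b) for x, y in zip(a, b))
    std_a = sum((x - mean_a) ** 2 for x in a) ** 0.5
    std_b = sum((y - mean_b) ** 2 for y in b) ** 0.5
    return cov / (std_a * std_b)

def pairwise_correlations(traces):
    """Coefficients for every pair of neurons in a recorded population."""
    return [pearson(traces[i], traces[j])
            for i in range(len(traces))
            for j in range(i + 1, len(traces))]
```

Pooling these coefficients across animals gives the correlation histograms compared between the awake and anesthetized states.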

The scientists credited the magnification of the numerical aperture (NA) for the better resolution achieved with the compound assembly. The associated lenses were also flexible and custom-designed for mass production at low cost. The main feature of MATRIEX is its capacity to image multiple objects simultaneously at large depth intervals. To demonstrate this, Yang et al. designed different MOs with diverse parameters, placing each at a specific depth so that the corresponding object planes were conjugated on the same image plane. In practice, the research team compensated for minor mismatches between the desired and actual object depths by adjusting the MOs individually along their z axes.

Typically, the maximum lateral size of the target zone under the DO (dry objective) is limited by the maximum size of the scanning field. For example, using a DO with 2x magnification and a target zone 12 mm in diameter, scientists can image across an entire adult mouse brain. In this study, Yang et al. simultaneously imaged the frontal association cortex and the cerebellum of the mouse. In practice, a 4x air objective was better suited for achieving the resolution needed to observe fine dendrite structures.

MATRIEX imaging: Simultaneously seeing neurons in action in multiple regions of the brain
Simultaneous calcium imaging in the V1, M1 and CA1 regions using MATRIEX during anesthetized and awake states in mice. Credit: Light: Science & Applications, doi: 10.1038/s41377-019-0219-x

As proof of principle, the research team used MATRIEX to perform simultaneous two-photon Ca2+ imaging of fluorescently labelled neurons in the primary visual cortex (V1), primary motor cortex (M1) and hippocampal CA1 region of mice. In this three-MO configuration, the scientists placed two MOs, suited for the V1 and M1 regions, directly above the cortex, and inserted a third MO into the hippocampal CA1 region after surgically removing the overlying cortical tissue. The team designed the lenses so that the object planes corresponding to V1, M1 and CA1 were conjugated on the same image plane. Using a two-photon microscope equipped with a 12 kHz resonant scanner, the scientists scanned the full image to observe the three FOVs, then digitally enlarged each section to resolve single neurons. They also noted that the laser power is distributed among the multiple FOVs.

While Yang et al. could have obtained some of these results using conventional single-FOV imaging within a single brain region, the MATRIEX technique provided data beyond what single-FOV imaging can offer. Taken together, the results revealed a highly inhomogeneous distribution and transformation of spontaneous activity patterns from the anesthetized state to the awake state in mice, spanning a brain-wide circuit level at single-cell resolution.

In this way, Mengke Yang and co-workers developed the MATRIEX technique based on the principles of two-stage magnification and multiaxis optical coupling. They simultaneously conducted two-photon Ca2+ imaging of neuronal population activity at different depths in diverse regions (V1, M1 and CA1) of anesthetized and awake mice with single-cell resolution. Importantly, any conventional two-photon microscope can be transformed into a MATRIEX microscope while preserving all of its original functionality; the key to the transformation is the design of the compound objective assembly. Researchers can use different, carefully designed MOs to suit diverse brain regions with 100 percent compatibility between the MATRIEX technique and conventional microscopy. The research team expects the MATRIEX technique to substantially advance the study of three-dimensional, brain-wide neural circuit dynamics at single-cell resolution.


Explore further

Bringing faster 3-D imaging to biomedical research


More information: Mengke Yang et al. MATRIEX imaging: multiarea two-photon real-time in vivo explorer, Light: Science & Applications (2019). DOI: 10.1038/s41377-019-0219-x

Tianyu Wang et al. Three-photon imaging of mouse brain structure and function through the intact skull, Nature Methods (2018). DOI: 10.1038/s41592-018-0115-y

Rongwen Lu et al. Video-rate volumetric functional imaging of the brain at synaptic resolution, Nature Neuroscience (2017). DOI: 10.1038/nn.4516