https://www.zmescience.com/science/rydberg-polaron-atom/

Researchers create gigantic atom filled with 100 other atoms

Like a bag of chips, atoms are mostly empty space. However, a highly exotic state of matter that was recently created by a team from the Vienna University of Technology, Harvard University, and Rice University in Texas flips that notion on its head — dubbed a Rydberg polaron, this giant atom is filled with other atoms.

Big atom schematic.

The excited electron (blue) orbits its nucleus (red) and encloses other atoms of the Bose-Einstein condensate (green).
Image credits TU Wien.

Atoms are made up of big bits, called protons and neutrons, which coalesce at the core to form the nucleus, and tiny bits called electrons, which orbit the core at great speeds. What holds them together is electromagnetism: protons carry a positive charge and electrons a negative one, so they attract each other. Neutrons don't carry a charge, so nobody much minds them.

Between the nucleus and the electrons, there’s a wide, thick helping of nothing. Literally. As far as we know, there is only empty space between the two, which means atoms are pretty much empty space. However, an international team of researchers wants to change that.

The biggest of the small

They started out from two different, and very extreme, fields of atomic physics: Bose-Einstein condensates and Rydberg atoms. A Bose-Einstein condensate is a state of matter created from ultracold bosons chilled to near absolute zero, or 0 kelvin (which is -273.15° Celsius, or -459.67° Fahrenheit). At that temperature, thermal motion all but ceases, so the atoms settle into a single collective quantum state and certain aspects of quantum physics become directly observable. Rydberg atoms are also mostly empty space, but with a twist. One of their electrons is highly excited (read: pumped full of energy) and orbits the nucleus at a very great distance:

“The average distance between the electron and its nucleus can be as large as several hundred nanometres – that is more than a thousand times the radius of a hydrogen atom,” said Professor Joachim Burgdörfer of Vienna University, co-author of the paper, in a statement.
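Burgdörfer's "several hundred nanometres" follows from the hydrogen-like scaling of Rydberg states, where the orbital radius grows roughly as the square of the principal quantum number n. The quick sketch below illustrates that scaling only; it is not the paper's strontium calculation:

```python
# Hydrogen-like scaling of a Rydberg atom's size: the orbital radius
# grows roughly as n^2 * a0, where n is the principal quantum number
# and a0 is the Bohr radius. An illustration of the scaling only.
BOHR_RADIUS_NM = 0.0529  # Bohr radius, in nanometres

def rydberg_radius_nm(n: int) -> float:
    """Approximate orbital radius of the excited electron, in nanometres."""
    return n**2 * BOHR_RADIUS_NM

for n in (10, 50, 100):
    print(f"n = {n:3d}: radius ~ {rydberg_radius_nm(n):7.1f} nm")
```

At n around 100, the radius already reaches several hundred nanometres, matching the quote above.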

The team put the two together by supercooling strontium and then using a laser to transfer energy to one atom and transform it into a Rydberg atom. Because the excited electron orbited at a much longer radius than in a typical atom, and because everything was clumped in so tightly, it ended up orbiting not just its nucleus, but as many as 170 other strontium atoms that were stuck between the two.
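A back-of-the-envelope check makes the "as many as 170" figure plausible: multiply a condensate density by the volume enclosed by the electron's orbit. Both numbers below are order-of-magnitude assumptions for illustration, not values from the paper:

```python
import math

# Back-of-the-envelope count of condensate atoms enclosed by the
# Rydberg electron's orbit. Both values are assumed, order-of-magnitude
# figures for illustration, not data from the paper.
orbit_radius_m = 350e-9          # assumed orbital radius, ~350 nm
condensate_density_m3 = 1e21     # assumed condensate density, atoms per m^3

orbit_volume_m3 = (4 / 3) * math.pi * orbit_radius_m**3
enclosed_atoms = condensate_density_m3 * orbit_volume_m3
print(f"~{enclosed_atoms:.0f} atoms inside the orbit")
```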

“The atoms do not carry any electric charge, therefore they only exert a minimal force on the electron,” said co-author Shuhei Yoshida.

Even these tiny tugs can steer the electron very slightly, however, without pushing it out of its orbit. This interaction lowers the total energy of the system, and low-energy arrangements are exactly what chemistry likes to keep — so a bond forms between the Rydberg atom and the atoms caught in its orbit.

“It is a highly unusual situation,” Yoshida explained. “Normally, we are dealing with charged nuclei, binding electrons around them. Here, we have an electron, binding neutral atoms.”

This exotic state of matter, dubbed a Rydberg polaron, is only stable while everything stays close to absolute zero. If things get too heated, the particles start moving faster and the bond shatters.

“For us, this new, weakly bound state of matter is an exciting new possibility of investigating the physics of ultracold atoms,” said Burgdörfer. “That way one can probe the properties of a Bose-Einstein condensate on very small scales with very high precision.”

The paper “Creation of Rydberg Polarons in a Bose Gas” has been published in the journal Physical Review Letters.

https://www.raspberrypi.org/blog/futurelearn-scratch-to-python/

TRANSITION FROM SCRATCH TO PYTHON WITH FUTURELEARN

With the launch of our first new free online course of 2018 — Scratch to Python: Moving from Block- to Text-based Programming — two weeks away, I thought this would be a great opportunity to introduce you to the ins and outs of the course content so you know what to expect.

Take the plunge into text-based programming

The idea for this course arose from our conversations with educators who had set up a Code Club in their schools. Most people start a club by teaching Scratch, a block-based programming language, because it allows learners to drag and drop blocks of pre-written code into a window to create a program. The blocks automatically snap together, making it easy to build fun and educational projects that don’t require much troubleshooting. You can do almost anything a beginner could wish for with Scratch, even physical computing to control LEDs, buzzers, buttons, motors, and more!

Scratch to Python FutureLearn Raspberry Pi

However, on our face-to-face training programme Picademy, educators told us that they were finding it hard to engage children who had outgrown Scratch and needed a new challenge. It was easy for me to imagine: a young learner, who once felt confident about programming using Scratch, is now confused by the alien, seemingly awkward interface of Python. What used to take them minutes in Scratch now takes them hours to code, and they start to lose interest — not a good result, I’m sure you’ll agree. I wanted to help educators to navigate this period in their learners’ development, and so I’ve written a course that shows you how to take the programming and thinking skills you and your learners have developed in Scratch, and apply them to Python.


Who is the course for?

Educators from all backgrounds who are working with secondary school-aged learners. It will also be interesting to anyone who has spent time working with Scratch and wants to understand how programming concepts translate between different languages.

“It was great fun, and I thought that the ideas and resources would be great to use with Year 7 classes.”
Sue Grey, Classroom Teacher

What is covered?

After showing you the similarities and differences of Scratch and Python, and how the skills learned using one can be applied to the other, we will look at turning more complex Scratch scripts into Python programs. Through creating a Mad Libs game and developing a username generator, you will see how programs can be simplified in a text-based language. We will give you our top tips for debugging Python code, and you’ll have the chance to share your ideas for introducing more complex programs to your students.
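To give a flavour of the Mad Libs exercise mentioned above, here is a minimal sketch (not the course's actual code) showing how a text-based language handles the fill-in-the-blanks idea with a single formatted string:

```python
# A minimal Mad Libs sketch: player-supplied words are dropped into
# placeholders in a story template using an f-string.
def mad_libs(adjective: str, noun: str, verb: str) -> str:
    return f"The {adjective} {noun} decided to {verb} all the way home."

print(mad_libs("sparkly", "robot", "moonwalk"))
```

In a classroom version, the three words would come from `input()` prompts rather than being hard-coded.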


After that, we will look at different data types in Python and write a script to calculate how old you are in dog years. Finally, you’ll dive deeper into the possibilities of Python by installing and using external Python libraries to perform some amazing tasks.
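The dog-years script can be sketched like this, using the common simplified rule of 7 dog years per human year (the course's exact formula may differ); it also touches the numeric and string data types the course covers:

```python
# Sketch of the "age in dog years" exercise, using the simplified
# rule of 7 dog years per human year.
def dog_years(human_years: float) -> float:
    return human_years * 7

age = 12
print(f"{age} human years is {dog_years(age)} dog years")
```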

By the end of the course, you’ll be able to:

  • Transfer programming and thinking skills from Scratch to Python
  • Use fundamental Python programming skills
  • Identify errors in your Python code based on error messages, and debug your scripts
  • Produce tools to support students’ transition from block-based to text-based programming
  • Understand the power of text-based programming and what you can create with it

Where can I sign up?

The free four-week course starts on 12 March 2018, and you can sign up now on FutureLearn. While you’re there, be sure to check out our other free courses, such as Prepare to Run a Code Club, Teaching Physical Computing with a Raspberry Pi and Python, and our second new course, Build a Makerspace for Young People — more information on it will follow in tomorrow’s blog post.

https://news.ubc.ca/2018/02/26/pros-cons-of-six-online-aids-for-coping-with-sleep-loss-in-move-to-daylight-savings-time/

Pros, cons of six online aids for coping with sleep loss

Lynda Eccott, a senior instructor in the faculty of pharmaceutical sciences at UBC, was quoted in a Globe and Mail article about six DIY sleep aids that can help people cope with the change to daylight saving time.

Eccott said studies on valerian, a popular herbal sleep aid, showed only small positive effects.

https://news.ubc.ca/2018/02/26/tune-in-tune-out-psychologists-say-music-helps-athletes-perform/

Tune in, tune out: psychologists say music helps athletes perform

UBC research was featured in a CBC story on the effect of music on athletic performance.

A recent study from UBC’s Okanagan campus found that participants who did multiple short, intense bouts of exercise didn’t feel like they worked any harder when listening to music.

“Music can be a way of improving your ability to physically engage in exercise, but it can also… allow you to have better enjoyment of the exercise,” said PhD candidate Matthew Stork, one of the researchers behind the study.

https://www.forbes.com/sites/brookecrothers/2018/02/25/chevy-bolt-2018-vs-tesla-model-3-best-small-ev-and-most-popular-ev-cheat-sheet/#7e73bc8b5578

Chevy Bolt (2018) Vs. Tesla Model 3: ‘Best Small’ EV And Most Popular EV (Cheat Sheet)

Credit: Tesla

Single center screen punctuates the Model 3’s minimalist interior.

What follows is a brief back-of-the-envelope crib sheet for the two highest-profile, mass-market, long-range EVs.

The Chevy Bolt just won Consumer Reports’ coveted “best compact green car” award and a place on the publication’s “Top Picks of 2018: Best Cars of the Year.”

The Tesla Model 3, on the other hand, appears to be the more popular of the two, based on the number of reservations and the religious following it commands.

That said, Tesla has yet to hit the 10,000-cars-made mark, while Chevy had delivered over 23,000 Bolts by the end of 2017. Chevy delivered a further 1,177 Bolts in January 2018, so that number may jump by a couple of thousand by the end of February.

The vehicles address two different consumers: the Model 3 is a sporty, somewhat pricey sedan while the Bolt is marketed as a more practical crossover-like hatchback.

The Model 3 starts at $35,000 (though many buyers are expected to opt for add-ons that kick the price up into the $40,000- and $50,000-plus range). The rear-drive (single motor) Long Range Battery variant, being delivered now, adds the 310-mile-range battery pack, a top speed of 140 mph, and a 5.1-second 0-60 time*. (Enhanced Autopilot, a premium upgraded interior, and other options are also available.)

*See Edmunds Tesla Model 3 Track Test (published on Feb 18, 2018).

Credit: Brooke Crothers

Chevy Bolt Premier.

The Bolt is listed at about $37,500 for the LT model and roughly $44,000 for the Premier. The latter adds low-speed forward automatic braking, forward collision alert, a “rear-camera mirror,” upgraded wheels, and leather seats, among other things.

Last summer and fall (2017), there were aggressive discounts on the Bolt. But that changed in 2018. While incentives still exist, you’re not going to get the kind of killer deals offered last summer (especially on leases). Of course, that could change at any time in the future and there may be regional exceptions (deals) that pop up.

In addition to the federal incentives, a number of states will cut you a check if you lease or buy a Bolt or Model 3. California, for example, will cut you a check for $2,500.
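Putting those incentives together gives a rough effective price. The $2,500 figure is California's rebate quoted above; the $7,500 US federal EV tax credit, for which both cars qualified in early 2018, is assumed:

```python
# Back-of-the-envelope effective prices after incentives.
FEDERAL_CREDIT = 7500   # US federal EV tax credit (2018)
CA_REBATE = 2500        # California rebate quoted in the article

def effective_price(list_price: int) -> int:
    return list_price - FEDERAL_CREDIT - CA_REBATE

for name, price in [("Model 3 (base)", 35000), ("Bolt LT", 37495)]:
    print(f"{name}: ${effective_price(price):,}")
```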


Tesla Model 3 vs. Chevy Bolt

|                      | Model 3                               | Bolt                                                    |
|----------------------|---------------------------------------|---------------------------------------------------------|
| Range                | 220 miles                             | 238 miles                                               |
| Charging             | Tesla’s vast Supercharging network    | Spotty DC fast-charging stations (2)                    |
| Autonomous driving   | Necessary hardware                    | Not currently available (3)                             |
| Buying experience    | Good                                  | Fair (4)                                                |
| Safety               | N/A                                   | IIHS Top Safety Pick (5)                                |
| Performance          | Zero to 60 mph in under 6 seconds     | Zero to 60 mph in under 6.5 seconds; top speed 91 mph   |
| Pass./cargo space    | 5 adults                              | 5 passengers                                            |
| Availability         | Now; 2018 in volume (1)               | Now                                                     |
| Price                | $35,000                               | $37,495, but incentives lower the price                 |

Table notes:

(1) This week, Model 3 reservation holders who have not previously been Tesla owners are, for the first time, getting invitations to configure their vehicles.

(2) In Los Angeles (where I live) and surrounding areas, DC fast-charging from vendors like EVgo is a mixed bag: the fast-charge experience is smooth and pretty much works as advertised but fast-charging stations can be few and far between — even in greater Los Angeles. And note that Chevy says “up to 90 miles of range in about 30 minutes of charge” for DC fast charge but real-world charging can be slower. Bottom line is, the Chevy Bolt-compatible fast charge network pales in comparison to the reach and convenience of Tesla’s Supercharging network. The countervailing argument is that most Chevy Bolt owners will charge at home.

(3) The Chevy Volt plug-in hybrid does, however, offer a degree of autonomous driving on its Premier version in the form of adaptive cruise control. The Bolt does not have ACC.

(4) Fair to middling: I’ve experienced never-say-die hard sells for the Bolt, and many of the Chevy dealer salespeople I’ve met are woefully unschooled in EVs.

(5) Named a Top Safety Pick by the Insurance Institute for Highway Safety; see the IIHS listing for the 2018 Chevy Bolt.

http://www.infosurhoy.com/cocoon/saii/xhtml/en_GB/science/ai-computer-recreates-the-face-youre-thinking-about/

AI computer recreates the face you’re thinking about

Psychologists have created a creepy machine that can peer into your mind’s eye with incredible accuracy.

Their AI studies electrical signals in the brain to recreate faces being looked at by volunteers.

It could provide a means of communication for people who are unable to talk, and could aid the development of prosthetics controlled by thoughts.

The finding also opens the door to strange future scenarios, such as those portrayed in the series ‘Black Mirror’, where anyone can record and playback their memories.

Test subjects were hooked up to electroencephalography (EEG) equipment by neuroscientists at the University of Toronto Scarborough.

This recorded their brain activity as they were shown images of faces.

This information was then used to digitally recreate the image, using a specially designed piece of software.

The breakthrough relies on neural networks, computer systems which simulate the way the brain works in order to learn.

These networks can be trained to recognise patterns in information – including speech, text data, or visual images – and are the basis for a large number of the developments in artificial intelligence (AI) over recent years.

The team’s AI was first trained to recognise patterns in pictures of faces, by studying a huge database of images.

Once it was able to recognise the characteristics that make up a human face, it was then trained to associate them with specific EEG brain activity patterns.

By matching the brain activity it observed in the test subjects with this information, the AI could then reproduce what they were seeing.
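As a toy illustration of the matching step (not the study's actual algorithm), one can picture a nearest-neighbour lookup: an observed EEG feature vector is compared against stored patterns, and the closest match is taken as the face being viewed. All vectors and labels below are made up:

```python
# Toy nearest-neighbour matching: compare an observed feature vector
# against stored patterns and return the label of the closest one,
# measured by Euclidean distance. Illustrative only.
def closest_pattern(observed, stored):
    """Return the label of the stored pattern nearest to `observed`."""
    def distance(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5
    return min(stored, key=lambda label: distance(observed, stored[label]))

patterns = {"face_A": [0.1, 0.9, 0.3], "face_B": [0.8, 0.2, 0.5]}
print(closest_pattern([0.2, 0.8, 0.3], patterns))
```

The real system goes much further, reconstructing novel face images rather than merely picking from stored examples.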

The result was extremely accurate reproductions of the faces being observed by the volunteers.

Speaking about the results Adrian Nestor, co-author of the study, said: ‘What’s really exciting is that we’re not reconstructing squares and triangles but actual images of a person’s face, and that involves a lot of fine-grained visual detail.

‘The fact we can reconstruct what someone experiences visually based on their brain activity opens up a lot of possibilities.

‘It unveils the subjective content of our mind and it provides a way to access, explore and share the content of our perception, memory and imagination.’

The method was pioneered by Professor Nestor, who has successfully reconstructed facial images from functional magnetic resonance imaging (fMRI) data in the past.

This is the first time EEG, which is more common, portable, and inexpensive by comparison, has been used.

EEG also has greater temporal resolution, meaning it can measure in detail how a perception develops, down to the millisecond.

Researchers estimate that it takes our brain about 170 milliseconds (0.17 seconds) to form a good representation of a face we see.

The EEG technique was developed by Dan Nemrodov, a postdoctoral fellow at Professor Nestor’s lab.

He said: ‘When we see something, our brain creates a mental percept, which is essentially a mental impression of that thing.

‘We were able to capture this percept using EEG to get a direct illustration of what’s happening in the brain during this process.

‘fMRI captures activity at the time scale of seconds, but EEG captures activity at the millisecond scale.

‘So we can see with very fine detail how the percept of a face develops in our brain using EEG.’

The full findings of the study were published in the journal eNeuro.

Previous breakthroughs in this area have relied on fMRI scans, which monitor changes in blood flow in the brain, rather than electrical activity.

In January, 2018, Japanese scientists revealed a similar device which made use of fMRI to recreate objects being looked at, and even thought about.

Researchers from the Kamitani Lab at Kyoto University, led by Professor Yukiyasu Kamitani, used a neural network to create images based on information taken from fMRI scans, which detect changes in blood flow as a proxy for brain activity.

Using this data, the machine was able to reconstruct owls, aircraft, stained-glass windows and red postboxes after three volunteers stared at the pictures.

It also produced pictures of objects including squares, crosses, goldfish, swans, leopards and bowling balls that the participants imagined.

Although the accuracy varied from person to person, the breakthrough opens a ‘unique window into our internal world’, according to the Kyoto team.

The technique could theoretically be used to create footage of daydreams, memories and other mental images.

It could also help patients in permanent vegetative states to communicate with their loved ones.

Writing in a paper published in the online preprint repository bioRxiv, its authors said: ‘Here, we present a novel image reconstruction method, in which the pixel values of an image are optimized to make its Deep Neural Network features similar to those decoded from human brain activity at multiple layers.

‘We found that the generated images resembled the stimulus images (both natural images and artificial shapes) and the subjective visual content during imagery.

‘While our model was solely trained with natural images, our method successfully generalized the reconstruction to artificial shapes, indicating that our model indeed ‘reconstructs’ or ‘generates’ images from brain activity, not simply matches to exemplars.’

The Kyoto team’s deep neural network was trained using 50 natural images and the corresponding fMRI results from volunteers who were looking at them.

This recreated the images viewed by the volunteers.

They then used a second type of AI called a deep generative network to check that they looked like real images, refining them to make them more recognisable.

Professor Kamitani previously hit the headlines after his fMRI ‘decoder’ was able to identify objects seen or imagined by volunteers with a high degree of accuracy.

The researchers built on the idea that a set of hierarchically-processed features can be used to determine an object category, such as ‘turtle’ or ‘leopard.’

Such category names allow computers to recognise the objects in an image, the researchers explained in a paper published by Nature Communications.

Subjects were shown natural images from the online image database ImageNet, spanning 150 categories.

Then, the trained decoders were used to predict the visual features of objects from the brain scans, even for objects that were not used in the training.

When a subject was shown the same image, the researchers found that their brain activity patterns could be translated into patterns of simulated neurons in the neural network.

This could then be used to predict the objects.

 

http://www.tunisiesoir.com/science/spanish-scientists-turn-light-upside-down-2311-2018/

Researchers developed ‘hyperbolic metasurface’

Spanish scientists turn light upside down

Researchers in Spain have developed a new material, a so-called hyperbolic metasurface, that inverts light waves.

Scientists from CIC nanoGUNE (San Sebastian, Spain) and collaborators have reported in Science the development of a so-called hyperbolic metasurface on which light propagates with completely reshaped wavefronts. This scientific achievement, a step toward more precise control and monitoring of light, is highly interesting for miniaturizing optical devices for sensing and signal processing.

Optical waves propagating away from a point source typically exhibit circular (convex) wavefronts. “Like waves on a water surface when a stone is dropped,” says Peining Li, EU Marie Sklodowska-Curie fellow at nanoGUNE and first author of the paper. The reason for this circular propagation is that the medium through which light travels is typically homogeneous and isotropic, i.e., uniform in all directions.

Researchers had theoretically predicted that specifically structured surfaces can turn the wavefronts of light upside-down when it propagates along them. “On such surfaces, called hyperbolic metasurfaces, the waves emitted from a point source propagate only in certain directions, and with open (concave) wavefronts,” explains Javier Alfaro, Ph.D. student at nanoGUNE and co-author of the paper. These unusual waves are called hyperbolic surface polaritons. Because they propagate only in certain directions, and with wavelengths that are much smaller than that of light in free space or standard waveguides, they could help to miniaturize optical devices for sensing and signal processing.
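The article quotes no specific wavelengths, but the compression can be sketched with assumed, order-of-magnitude numbers: polaritons in boron nitride are known to squeeze mid-infrared light down to a small fraction of its free-space wavelength. Both values below are illustrative assumptions, not data from the paper:

```python
# Illustrative wavelength compression for a surface polariton.
# Both input values are assumed orders of magnitude, not figures
# from the Science paper.
free_space_wavelength_um = 6.5   # mid-infrared light, assumed
compression_factor = 25          # assumed polariton compression

polariton_wavelength_nm = free_space_wavelength_um * 1000 / compression_factor
print(f"~{polariton_wavelength_nm:.0f} nm polariton wavelength")
```

Squeezing a roughly 6.5-micron wave into a few hundred nanometres is what lets such devices shrink far below the usual diffraction limit.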

Now, the scientists have developed such a metasurface for infrared light. It is based on boron nitride, a graphene-like 2-D material, which was selected because of its ability to manipulate infrared light on extremely small length scales. This has applications in miniaturized chemical sensors or for heat management in nanoscale optoelectronic devices. The researchers directly observed the concave wavefronts with a special optical microscope.

Hyperbolic metasurfaces are challenging to fabricate, because an extremely precise structuring on the nanometer scale is required. Irene Dolado, Ph.D. student at nanoGUNE, and Saül Vélez, former postdoctoral researcher at nanoGUNE (now at ETH Zürich) mastered this challenge using electron beam lithography and etching of thin flakes of high-quality boron nitride provided by Kansas State University. “After several optimization steps, we achieved the required precision and obtained grating structures with gap sizes as small as 25 nm,” Dolado says. “The same fabrication methods can also be applied to other materials, which could pave the way to realize artificial metasurface structures with custom-made optical properties,” adds Saül Vélez.

To see how the waves propagate along the metasurface, the researchers used a state-of-the-art infrared nanoimaging technique pioneered by the nanooptics group at nanoGUNE. They first placed an infrared gold nanorod onto the metasurface. “It plays the role of a stone dropped into water,” says Peining Li. The nanorod concentrates incident infrared light into a tiny spot, which launches waves that then propagate along the metasurface. With the help of a so-called scattering-type scanning near-field microscope (s-SNOM), the researchers imaged the waves. “It was amazing to see the images. They indeed showed the concave curvature of the wavefronts that were propagating away from the gold nanorod, exactly as predicted by theory,” says Rainer Hillenbrand, Ikerbasque Professor at nanoGUNE, who led the work.

The results suggest that nanostructured 2-D materials could become a novel platform for hyperbolic metasurface devices and circuits, and further demonstrate how near-field microscopy can be applied to unveil exotic optical phenomena in anisotropic materials and to verify new metasurface design principles.