https://phys.org/news/2022-01-nano-architected-material-refracts-important-photonic.html

JANUARY 28, 2022

Nano-architected material refracts light backward; an important step toward creating photonic circuits

by California Institute of Technology

Scanning Electron Microscopy (SEM) image of the nanoscale lattice. Credit: California Institute of Technology

A newly created nano-architected material exhibits a property that previously was just theoretically possible: it can refract light backward, regardless of the angle at which the light strikes the material.

This property is known as negative refraction, and it means that the refractive index—the ratio of the speed of light in a vacuum to its speed inside the material—is negative across a portion of the electromagnetic spectrum at all angles.

Refraction is a common property in materials; think of the way a straw in a glass of water appears shifted to the side, or the way lenses in eyeglasses focus light. But negative refraction does not just involve shifting light a few degrees to one side. Rather, the light is sent at an angle completely opposite from the one at which it entered the material. This has not been observed in nature but, beginning in the 1960s, was theorized to occur in so-called artificially periodic materials—that is, materials constructed to have a specific structural pattern. Only now have fabrication processes caught up with theory to make negative refraction a reality.

“Negative refraction is crucial to the future of nanophotonics, which seeks to understand and manipulate the behavior of light when it interacts with materials or solid structures at the smallest possible scales,” says Julia R. Greer, Caltech’s Ruben F. and Donna Mettler Professor of Materials Science, Mechanics and Medical Engineering, and one of the senior authors of a paper describing the new material. The paper was published in Nano Letters on October 21.

The new material achieves its unusual property through a combination of organization at the nano- and microscale and the addition of a thin film of germanium through a time- and labor-intensive process. Greer is a pioneer in the creation of such nano-architected materials, or materials whose structure is designed and organized at a nanometer scale and that consequently exhibit unusual, often surprising properties—for example, exceptionally lightweight ceramics that spring back to their original shape, like a sponge, after being compressed.

Under an electron microscope, the new material’s structure resembles a lattice of hollow cubes. Each cube is so tiny that the width of the beams making up the cube’s structure is 100 times smaller than the width of a human hair. The lattice was constructed using a polymer material, which is relatively easy to work with in 3D printing, and then coated with germanium.

“The combination of the structure and the coating give the lattice this unusual property,” says Ryan Ng (MS ’16, Ph.D. ’20), corresponding author of the Nano Letters paper. Ng conducted this research while a graduate student in Greer’s lab and is now a postdoctoral researcher at the Catalan Institute of Nanoscience and Nanotechnology in Spain. The research team zeroed in on the cube-lattice structure and material as the right combination through a painstaking computer modeling process (and the knowledge that germanium is a high-index material).

To get the polymer coated evenly at that scale with a metal required the research team to develop a wholly new method. In the end, Ng, Greer, and their colleagues used a sputtering technique in which a disk of germanium was bombarded with high-energy ions that blasted germanium atoms off of the disk and onto the surface of the polymer lattice. “It isn’t easy to get an even coating,” Ng says. “It took a long time and a lot of effort to optimize this process.”

The technology has potential applications for telecommunications, medical imaging, radar camouflaging, and computing.

In 1965, Caltech alumnus Gordon Moore (Ph.D. ’54), a life member of the Caltech Board of Trustees, predicted that integrated circuits would get twice as complicated and half as expensive every two years. However, because of the fundamental limits on power dissipation and transistor density allowed by current silicon semiconductors, the scaling predicted by Moore’s Law should soon end. “We’re reaching the end of our ability to follow Moore’s Law; making electronic transistors as small as they can go,” Ng says. The current work is a step toward demonstrating optical properties that would be required to enable 3D photonic circuits. Because light moves much more quickly than electrons, 3D photonic circuits, in theory, would be much faster than traditional ones.

The Nano Letters paper is titled “Dispersion Mapping in 3-Dimensional Core–Shell Photonic Crystal Lattices Capable of Negative Refraction in the Mid-Infrared.”




More information: Victoria F. Chernow et al, Dispersion Mapping in 3-Dimensional Core–Shell Photonic Crystal Lattices Capable of Negative Refraction in the Mid-Infrared, Nano Letters (2021). DOI: 10.1021/acs.nanolett.1c02851

Provided by California Institute of Technology

https://phys.org/news/2022-01-revolutionizing-satellite-power-laser.html


JANUARY 28, 2022

Revolutionizing satellite power using laser beaming

by University of Surrey

Wireless power beaming will provide auxiliary power to increase the baseline efficiency of small satellites in lower Earth orbit. Credit: Space Power

The University of Surrey and Space Power are tackling the problem of powering satellites in Low Earth Orbit (LEO) during their eclipse period when they cannot see the sun. By collaborating on a space infrastructure project, the joint team will develop new technology which uses lasers to beam solar power from satellites under solar illumination to small satellites orbiting closer to Earth during eclipse. The wireless, laser-based power beaming prototype will be the first developed outside of governmental organizations and is aiming for commercialisation by 2025.

Wireless power beaming is a critical and disruptive technology for space infrastructure and will provide auxiliary power to increase the baseline efficiency of small satellites in LEO. The technical side of the project will use the highly specialized laser laboratories and optical systems developed at the University of Surrey’s Department of Physics and Advanced Technology Institute, which are world leaders in the development and implementation of laser and photovoltaic-based technologies. The first Space Power product will be designed as a plug-and-play system for satellite manufacturers to include in their offering to their LEO constellation customers.

Without new power technologies like this, which will enable small satellites to function all the time, more satellites are needed, with the resultant costs, launch emissions and contribution to space debris. As humanity finds more ambitious and useful tasks for small satellites, the problem grows.

The project is part of the £7.4 million national SPRINT (SPace Research and Innovation Network for Technology) program. SPRINT provides unprecedented access to university space expertise and facilities and helps businesses through the commercial exploitation of space data and technologies. The power beaming prototype work follows on from an initial feasibility study by Space Power and the University of Surrey on laser transmission funded through the SME Innovation Voucher scheme. Now, the team will investigate and verify the efficiency benefits of laser-based power beaming, develop the new technology, and obtain data to enable them to design a prototype for small satellites in space.

Professor Stephen Sweeney, Professor of Physics at the University of Surrey, said that “the University of Surrey has a long track record in photonics and space research and brings unique expertise in both high power lasers and photovoltaics technologies. We have many years of experience in optical wireless power and are delighted to work with Space Power to help develop such technologies for space-based applications.”

Keval Dattani, Director of Space Power said that “the SPRINT project is an important development from our feasibility study with the University of Surrey that enables us to approach customers with confidence and demonstrate the improved efficiencies available by using auxiliary power systems. By focusing on light optics and power beaming, we are looking to increase small satellite operating efficiencies by a factor of between 2X-5X.”

“We have seen the benefits of powering satellites by laser which enables smaller satellites, simpler systems and fewer resources—whilst performing more work to help us understand our planet better. For us, this is a neat solution with long term benefits, not least for lunar outposts and asteroid mining but back here on earth too.”




Provided by University of Surrey

https://www.mindbodygreen.com/articles/how-to-fall-back-asleep-after-2-m-wake-up

Woke Up In The Middle Of The Night? This Is The Fastest Way To Fall Back Asleep

By Jamie Schneider, mbg Associate Beauty & Wellness Editor

January 28, 2022

Look, we’ve all been there: You wake up in the middle of the night, and no matter what you do, you cannot fall back asleep. You’ve counted enough sheep to fill up a football field, and, still, you’re tossing and turning under the covers. 

Rather than lying in stillness—or worse, grabbing your phone and scrolling through social media—take a breath and follow this advice from behavioral sleep doctor Shelby Harris, PsyD, DBSM. On the mindbodygreen podcast, she discusses how to (quickly!) fall back asleep after a late night wake-up and what you can do to sleep through the night. 

How to fall back asleep after waking up in the middle of the night. 

First step: Don’t look at the time, especially on your phone. “I really argue that the clock is just going to make it worse for many people,” says Harris. Not only can that blue light exposure keep you awake longer, but depending on how late it is, you might start to feel frustrated about not getting enough sleep—which may only make you feel more wired.

Rather, she recommends actually getting out of bed: “If you start getting frustrated or your brain’s getting active and you’re not falling back asleep, get up, go sit somewhere, and do something calm and relaxing,” she says. (Like reading, for example.) Don’t bring your phone, as the blue light isn’t doing you any favors—but you can turn on a dim light while you engage in that quiet activity. 

Here’s the thing, though: The activity itself might not make you sleepy. “It’s just really meant to pass the time,” says Harris. You may think that the point of the activity is to lull you back to sleep, but that’s actually a misconception. “The point of getting out of bed is so that you’re not teaching yourself that the bed is a place to toss and turn,” explains Harris. Read: The more you lie in bed trying to force yourself to feel sleepy, the more your mind may associate your bed with that lack of rest. “The bed becomes more about that than actual sleep,” Harris adds. “So [sitting] on the couch and reading is great, but don’t try to force the sleepiness to happen. You’re just using it as a placeholder, and then get back in bed only when you’re sleepy again.”

How to prevent those wake-ups. 

If you do wake up in the middle of the night, actually getting out of bed might pay off in the long run—but Harris has a couple of strategies to prevent those 2 a.m. wake-ups in the first place. One of those strategies? Going to bed later. Yes, really: “It’s weird, but if you have trouble falling asleep or even with early morning wakings, I’ll have you go to bed later,” she says.

It’s the same logic as above: If you don’t feel sleepy, lying in bed and trying to force it can actually make matters worse. “I’d rather you go to bed when you’re really sleepy so that you feel more confident in your ability to fall asleep,” she notes. Not only will you likely be able to fall asleep faster, but chances are you’ll have fewer wake-ups, too. And if you need a little extra support to make your eyes feel heavy, experts have a few favorite natural sleep aids and bedtime routines to try as you wind down.*

The takeaway.

Waking up in the middle of the night can be frustrating, but don’t put too much pressure on yourself to fall asleep instantly—that may only exacerbate the issue. As Harris tells all of her clients who struggle with sleep: “If I don’t sleep well tonight, I’ll sleep well tomorrow. And if not tomorrow, definitely by the third day.” 

https://spectrum.ieee.org/packetized-power-grid

HOW TO PREVENT BLACKOUTS BY PACKETIZING THE POWER GRID

The rules of the Internet can also balance electricity supply and demand

MADS ALMASSALKHI, JEFF FROLIK, AND PAUL HINES


BAD THINGS HAPPEN when demand outstrips supply. We learned that lesson too well at the start of the pandemic, when demand for toilet paper, disinfecting wipes, masks, and ventilators outstripped the available supply. Today, chip shortages continue to disrupt the consumer electronics, automobile, and other sectors. Clearly, balancing the supply and demand of goods is critical for a stable, normal, functional society.

That need for balance is true of electric power grids, too. We got a heartrending reminder of this fact in February 2021, when Texas experienced an unprecedented and deadly winter freeze. Spiking demand for electric heat collided with supply problems created by frozen natural-gas equipment and below-average wind-power production. The resulting imbalance left more than 2 million households without power for days, caused at least 210 deaths, and led to economic losses of up to US $130 billion.

Similar mismatches in supply and demand contributed to massive cascading blackouts in August 2003 in the northeastern United States and Canada, in July 2012 in India, and in March 2019 in Venezuela.

The situation is unlikely to get better anytime soon, for three reasons. First, as countries everywhere move to decarbonize, the electrification of transportation, heating, and other sectors will cause electricity demand to soar. Second, conventional coal and nuclear plants are being retired for economic and policy reasons, removing stable sources from the grid. And third, while wind and solar-photovoltaic systems are great for the climate and are the fastest-growing sources of electric generation, the variability of their output begets new challenges for balancing the grid.

So how can grid operators keep supply and demand balanced, even as they shut down old, dirty power plants, ramp up variable generation, and add new electric loads? There are a few possibilities. One is to do a modernized version of what we have done in the past: Build giant, centralized infrastructure. That would mean installing vast amounts of energy storage, such as grid-scale batteries and pumped-hydro facilities, to hold the excess renewable power being generated, and interconnecting that storage with high-voltage transmission lines, so that supply can meet demand across the grid. China is a leader in this approach, but it’s incredibly expensive and requires an enormous amount of political will.

Packetized energy management (PEM) allows the power grid to flexibly handle a varying supply of renewable energy. In a simulation, the aggregated load from 1,000 electric water heaters [solid orange line] almost exactly matches renewable energy supply [dashed gold line] after packetized control is switched on [vertical dotted line]. The brown line shows the uncoordinated load from 1,000 electric water heaters for reference. The black-and-white images show patterns of off and on for a set of 10 water heaters controlled by conventional thermostats and by PEM-enabled thermostats.

We think there’s a better way. Instead of drastically scaling up power-grid infrastructure, our work at the University of Vermont has focused on how to coordinate demand in real time to match the increasingly variable supply. Our technology takes two ideas that make the Internet fundamentally scalable—packetization and randomization—and uses them to create a system that can coordinate distributed energy. Those two data-communication concepts allow millions of users and billions of devices to connect to the Internet without any centralized scheduling or control. The same basic ideas could work on the electrical grid, too. Using low-bandwidth connectivity and small controllers running simple algorithms, millions of electrical devices could be used to balance the flow of electricity in the local grid. Here’s how.

Electricity demand on the grid comes from billions of electrical loads. These can be grouped into two broad categories: commercial and industrial loads, and residential loads. Of the two, residential loads are far more dispersed. In the United States alone, there are over 120 million households, which collectively account for about 40 percent of annual electricity consumption. But residential customers generally don’t think about optimizing their own electricity loads as they go about their day. For simplicity’s sake, let’s call these residential loads “devices,” which can range from lights and televisions to water heaters and air conditioners.

The latter devices, along with electric-vehicle chargers and pool pumps, are not only large electric loads (that is, greater than a 1-kilowatt rating), but they’re also flexible. Unlike lighting or a TV, which you want to go on the instant you throw the switch, a flexible device can defer consumption and operate whenever—as long as there’s hot water for your shower, your pool is clean, your EV has enough charge, and the indoor temperature is comfortable.

Collectively, there is a lot of flexibility in residential electricity loads that could be used to help balance variable supply. For example, if every household in California and New York had just one device that could consume power flexibly, at any time, the power grid would have the equivalent of around 15 gigawatts of additional capacity, which is more than 10 times the amount currently available from utility-scale battery storage in these states.

Here’s what flexibility means when it comes to operating, say, a residential electric water heater. While heating water, a typical unit draws about 4.5 kilowatts. Over the course of a normal day, the appliance is on about a tenth of the time, using about 10.8 kilowatt-hours. To the homeowner, the daily cost of operating the water heater is less than US $2 (assuming a rate of about 15¢ per kWh). But to the utility, the cost of electricity is highly variable, from a nominal 4¢ per kWh to over $100 per kWh during annual peak periods. Sometimes, the cost is even negative: When there is too much power available from wind or solar plants, grid operators effectively pay utilities to consume the excess.
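The figures above check out with a little arithmetic. Here is a quick sketch using the numbers quoted in the text (the 4.5-kilowatt draw, one-tenth duty cycle, and 15¢/kWh rate are the article's figures, not independent data):

```python
# Back-of-the-envelope check of the water-heater numbers quoted above.
power_kw = 4.5       # draw while the heater is on
duty_cycle = 0.10    # on about a tenth of the day
rate_per_kwh = 0.15  # the article's assumed residential rate, in dollars

hours_on = 24 * duty_cycle             # ~2.4 hours per day
daily_kwh = power_kw * hours_on        # ~10.8 kWh, matching the text
daily_cost = daily_kwh * rate_per_kwh  # ~$1.62, i.e. "less than $2"
print(round(daily_kwh, 1), round(daily_cost, 2))
```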

Electricity supply and demand can sometimes diverge in dramatic ways. Packetization and randomization of flexible electricity loads allow demand to match the available supply. Credit: University of Vermont

To reduce demand during peak periods, utilities have long offered demand-response programs that allow them to turn off customers’ water heaters, air conditioners, and other loads on a fixed schedule—say, 4 p.m. to 9 p.m. during the summer, when usage is historically high. If all we want to do is reduce load at such times, that approach works reasonably well.

However, if our objective is to balance the grid in real time, as renewable generation ebbs and flows unpredictably with the wind and sun, then operating devices according to a fixed schedule that’s based on past behavior won’t suffice. We need a more responsive approach, one that goes beyond just reducing peak demand and provides additional benefits that improve grid reliability, such as price responsiveness, renewable smoothing, and frequency regulation.

How can grid operators coordinate many distributed, flexible kilowatt-scale devices, each with its own specific needs and requirements, to deliver an aggregate gigawatt-scale grid resource that is responsive to a highly variable supply? In pondering this question, we found inspiration in another domain: digital communication systems.

Digital systems represent your voice, an email, or a video clip as a sequence of bits. When this data is sent across a channel, it’s broken into packets. Then each packet is independently routed through the network to the intended destination. Once all of the packets have arrived, the data is reconstructed into its original form.

How is this analogous to our problem? Millions of people and billions of devices use the Internet every day. Users have their individual devices, needs, and usage patterns—which we can think of as demand—while the network itself has dynamics associated with its bandwidth—its supply, in other words. Yet, demand and supply on the Internet are matched in real time without any centralized scheduler. Likewise, billions of electrical devices, each with its own dynamics, are connecting to the power grid, whose supply is becoming, as we noted, increasingly variable.

Recognizing this similarity, we developed a technology called packetized energy management (PEM) to coordinate the energy usage of flexible devices. Coauthor Hines has a longstanding interest in power-system reliability and had been researching how transmission-line failures can lead to cascading outages and systemic blackouts. Meanwhile, Frolik, whose background is in communication systems, had been working on algorithms to dynamically coordinate data communications from wireless sensors in a way that used very little energy. Through a chance discussion, we realized our intersecting interests and began working to see how these algorithms might be applied to the problem of EV charging.

A Packetized Grid for the Developing World

Packetized energy management (PEM) is a way to balance the power grid and make it more reliable while also maximizing the use of renewable energy and avoiding the installation of massive amounts of energy storage or other expensive infrastructure.

In industrialized parts of the world, PEM assumes that an electrical device such as a water heater or electric-vehicle charger will determine its own need for energy, then use this need to determine if it should request an energy packet from a cloud-based coordinator. When a packet of energy is requested, the coordinator responds based on grid or market conditions by either accepting or denying the request. This approach clearly requires an active, bidirectional communication link, which can be readily accomplished through a homeowner’s Wi-Fi network, a cellular link, or more advanced communication solutions, such as LoRa (long-range, low-power) technologies.

In many parts of the world, though, such bidirectional links are unavailable, and yet coordinating electricity loads is still critical, to prevent power outages caused by unmanaged overloading. Take crop irrigation in developing countries. Ideally, you’d want farms in these areas to use greener, grid-connected electric pumps rather than diesel ones. But the power grid may not always be robust enough to handle these large and highly distributed loads, which can exceed 10 kilowatts.

How could these loads be coordinated using the PEM concept? Over the course of perhaps several days, a pump typically runs for a certain number of hours rather than continuously. Even if bidirectional communication is unavailable, we can still implement load coordination by having the pump’s controller randomly “listen” for a broadcast signal from a coordinator. The coordinator would be monitoring grid conditions and possibly the water flow in irrigation canals and then broadcasting over low-frequency AM bands a traffic-light–like signal indicating a simple “yes” or “no”—“yes” indicates it’s okay to turn on the pump and consume electricity, “no” means do not turn on, do not consume electricity. The pump controller would wake up at its own regular random interval, determine its need for energy, and listen to the broadcast. If the broadcast is a “yes,” the pump operates for an appropriate duration. If a “no” is received, the pump does not turn on, and the process repeats at the next wake-up interval.
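The pump-side logic just described can be sketched in a few lines. This is an illustrative sketch only, not the authors' implementation: `receive_broadcast` and `run_pump` are hypothetical callbacks standing in for the AM-band receiver and the pump relay, and the wake-up and run intervals are assumed values.

```python
import random
import time

def pump_controller(need_hours, receive_broadcast, run_pump, mean_wake_s=600):
    """One-way PEM pump logic: wake at randomized intervals, listen for the
    coordinator's yes/no broadcast, and consume power only on a "yes"."""
    while need_hours > 0:
        # Randomized wake-ups keep the pumps from acting in lockstep.
        time.sleep(random.uniform(0.5, 1.5) * mean_wake_s)
        if receive_broadcast() == "yes":
            run_duration = min(1.0, need_hours)  # run up to an hour at a time
            run_pump(run_duration)
            need_hours -= run_duration
        # On "no": do nothing and retry at the next wake-up interval.
```

On a real controller the loop would be driven by a hardware timer, but the structure is the same: no reply channel is ever needed, only the broadcast.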

If all the pumps in a particular region operate using PEM rules, then the available electricity and water resources would be shared fairly. Furthermore, the randomization in electricity usage would prevent every pump from turning on or off simultaneously, as would occur using traditional demand-response broadcasts.

By coordinating these large but flexible loads, PEM would also help ensure the availability of electricity for lighting and other nonflexible loads, thus improving the quality of life for people in these regions.

— M.A., J.F. & P.H.

Shortly thereafter, Almassalkhi joined our department and recognized that what we were working on had greater potential. In 2015, he wrote a winning proposal to ARPA-E’s NODES program—that’s the U.S. Department of Energy’s Advanced Research Projects Agency–Energy’s Network Optimized Distributed Energy Systems program. The funding allowed us to further develop the PEM approach.

Let’s return to the electric water heater. Under conventional operation, the water heater is controlled by its thermostat. The unit turns on when the water temperature hits a lower limit and operates continuously (at 4.5 kW) for 20 to 30 minutes, until the water temperature reaches an upper limit. The pair of black-and-white graphs at the bottom of “Matching Electricity Demand to Supply” shows the on and off patterns of 10 heaters—black for off and white for on.

Under PEM, each load operates independently and according to simple rules. Instead of heating only when the water temperature reaches its lower limit, a water heater will periodically request to consume a “packet” of energy, where a packet is defined as consuming power for just a short period of time—say, 5 minutes. The coordinator (in our case, a cloud-based platform) approves or denies such packet requests based on a target signal that reflects grid conditions, such as the availability of renewable energy, the price of electricity, and so on. The top graph in “Matching Electricity Demand to Supply” shows how PEM consumption closely follows a target signal based on the supply of renewable energy.

To ensure that devices with a greater need for energy are more likely to have their requests approved, each device adjusts the rate of its requests based on its needs. When the water is less hot, a water heater requests more often. When the water is hotter, it requests less often. The system thus dynamically prioritizes devices in a fully decentralized way, as the probabilities of making packet requests are proportional to the devices’ need for energy. The PEM coordinator can then focus on managing incoming packet requests to actively shape the total load from many packetized devices, without the need to centrally optimize the behavior of each device. From the customer’s perspective, nothing about the water heater has changed, as these requests occur entirely in the background.
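A minimal sketch of this device-driven prioritization follows. The linear map from water temperature to request probability, the temperature band, the probability cap, and the 4.5-kW packet size are illustrative assumptions, not the PEM specification:

```python
def request_probability(temp_c, t_min=43.0, t_max=54.0, p_max=0.8):
    """Map water temperature to a packet-request probability:
    colder water -> more frequent requests (assumed linear map)."""
    need = (t_max - temp_c) / (t_max - t_min)  # 0 when hot, 1 at the lower limit
    return max(0.0, min(p_max, p_max * need))

class Coordinator:
    """Toy cloud coordinator: grant packet requests while aggregate
    consumption stays under a target set by renewable supply."""
    def __init__(self, target_kw, packet_kw=4.5):
        self.target_kw = target_kw
        self.active_kw = 0.0
        self.packet_kw = packet_kw

    def handle_request(self):
        if self.active_kw + self.packet_kw <= self.target_kw:
            self.active_kw += self.packet_kw
            return True   # packet granted: heat for ~5 minutes
        return False      # denied: the device simply retries later
```

Note that the coordinator never inspects device state; prioritization emerges from needier devices asking more often.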

These same concepts can be applied to a wide range of energy-hungry devices. For example, an EV charger or a residential battery system can compare the battery’s current state of charge to its desired value—equivalent to its need for energy—translate this into a request probability, and then send a request to the PEM coordinator, which either accepts or denies the request based on real-time grid or market conditions. Depending on those conditions, it might take somewhat longer for a battery to fully charge, but the customer shouldn’t be inconvenienced.

In this way, flexible energy devices communicate using the common, simple language of energy-packet requests. As a result, the coordinator is agnostic to the type of device making the request. This device-agnostic coordination is similar to net neutrality in data communications. In general, the Internet doesn’t care if your packet carries voice, video, or text data. Similarly, PEM doesn’t care if the device requesting a packet is a water heater, a pool pump, or an EV charger, so it can readily coordinate a heterogeneous mix of kilowatt-scale devices.

This controller connects to a residential electric water heater and uses simple algorithms to request “packets” of energy from a cloud-based coordinator to maintain a suitable temperature. Credit: Packetized Energy Technologies

Right now, bottom-up, device-driven technologies like PEM are not widely deployed. Instead, most of today’s demand-response technologies take a top-down approach, in which the coordinator broadcasts a control signal to all devices, telling them what to do. But if every device is told to do the same thing at the same time, things can go wrong very quickly, as the power consumption of the devices becomes synchronized. Imagine the effect of millions of air conditioners, water heaters, and EV chargers turning on (or off) at once. That would represent gigawatt spikes—as if a large nuclear power plant were turning on or off with the flip of a switch. A spike that large could cause the grid to become unstable, which could trigger a cascading blackout. That’s why most utilities today split devices into groups to limit spikes to the order of tens of megawatts. However, actively managing these different groups beyond a few annual peak events is a challenge for top-down approaches.

But if each device works to meet its own unique need for energy, then packet requests (and resulting power use) are inherently randomized, and as a result, synchronization becomes much less of a concern.

The top-down approach also makes it difficult to take into account customer preferences for hot water, charged cars, and cool homes on hot days. If we are going to coordinate energy devices to make the grid work better, we need to make sure that we do it in a way that is essentially unnoticeable and automatic for the consumer.

Now, consider how PEM accounts for an individual customer’s preferences in the case of the water heater. If the water temperature drops below its lower limit and the heater isn’t already consuming a packet of energy, it can temporarily “opt out” of the PEM scheme and turn on until the temperature recovers. The water heater will inform the PEM coordinator of this change in its operating mode, and the coordinator will simply update its accounting of the aggregate demand. The impact of this single load on the total is small, but for the customer, having the guarantee of hot water when needed builds trust and ensures ongoing participation.
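A minimal sketch of that device-side logic might look like the following. The temperature thresholds and the probabilistic request rule are illustrative assumptions; the article does not specify Packetized Energy's actual setpoints or algorithm:

```python
import random

# Hypothetical thresholds in degrees Fahrenheit (illustrative values only).
T_OPT_OUT, T_LOW, T_HIGH = 110.0, 115.0, 125.0

def next_action(temp_f, consuming_packet):
    """One control cycle of a packetized water heater (sketch)."""
    if temp_f < T_OPT_OUT and not consuming_packet:
        return "opt_out"            # guarantee hot water: heat until recovered
    if temp_f < T_HIGH:
        # Request probability grows as the water cools, which keeps requests
        # across a large fleet randomized rather than synchronized.
        p_request = min(1.0, (T_HIGH - temp_f) / (T_HIGH - T_LOW))
        if random.random() < p_request:
            return "request_packet"
    return "idle"
```

The opt-out branch is what preserves the customer guarantee: comfort always wins over coordination, and the coordinator is simply informed after the fact.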

PEM’s device-driven approach also makes things easier for the coordinator because it doesn’t need to centrally monitor or model each device to develop an optimized schedule. The coordinator only needs to monitor grid and market conditions, reply to the live stream of incoming packet requests, and keep a record of the “opted out” devices—the coordinator manages just three sets of numbers, in other words.
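To make that bookkeeping concrete, here is a toy coordinator in the same spirit. The packet size, the accept/reject rule, and the omission of packet expiry are all simplifying assumptions for illustration, not the Packetized Energy implementation:

```python
# Toy device-agnostic coordinator (illustrative sketch). It tracks only
# aggregate quantities: the grid/market target, live packet requests,
# and the load from opted-out devices.
class Coordinator:
    def __init__(self, packet_kw=4.5):
        self.packet_kw = packet_kw      # assumed per-device packet power
        self.active_packets = 0         # packets currently being consumed
        self.opted_out_kw = 0.0         # load from devices that opted out

    def handle_request(self, target_kw):
        """Accept a packet only if it keeps aggregate load under target.
        (Packet expiry, which would decrement active_packets, is omitted.)"""
        projected = (self.active_packets + 1) * self.packet_kw + self.opted_out_kw
        if projected <= target_kw:
            self.active_packets += 1
            return True                 # device may turn on for one packet
        return False                    # device backs off and retries later
```

Note that nothing in `handle_request` depends on what kind of device is asking—the device-agnostic property described earlier.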

To increase the impact of our work, we decided to commercialize PEM in parallel with our research and founded Packetized Energy in 2016. The company has deployed its cloud-based energy coordination platform in several utility-sponsored pilot projects in the United States and Canada. These projects each started by retrofitting existing electric water heaters with a smart thermostat that we designed, developed, and had UL-certified. We have also demonstrated PEM with EV chargers, residential batteries, and thermostats. Our first customer was our hometown Vermont utility, Burlington Electric Department. In 2018, BED began the nation’s first 100 percent renewable-powered water heater program, which has now expanded to include EV chargers.

On 6 December 2021, over the course of more than 2 hours, the load of 208 residential water heaters was rapidly coordinated between 25 and 80 kilowatts—from about half the baseline load to about twice that load.

Our projects have yielded some promising results. “A Real-Time Demo of Load Coordination” shows how PEM coordinated the load from 208 residential water heaters in Vermont and South Carolina over a typical 2-hour period. The heaters [orange line] followed a rapidly changing target [black line] that ranged from about half the nominal load to about twice that load [red line].

As systems scale to thousands of packetized devices, the asynchronous packet requests will appear as a continuous signal. Our simulations show that at this scale, any gaps between the target and the actual load will disappear. The aggregate load is at least as responsive as the reaction times of a modern natural-gas power plant—and you don’t have the expense of building, operating, and maintaining the physical plant.

Falling costs for sensors and microcontrollers are leading to the rapid growth of the Internet of Things. Combined with smart home technology, IoT makes it possible to imagine a world in which all energy devices—loads, energy storage, and generators—are actively coordinated to keep the grid stable and take full advantage of renewable energy. But challenges do lie ahead.

First, there are few standards today to guide manufacturers interested in device-level coordination and no real incentives for them to adopt any particular approach. This has resulted in a proliferation of proprietary technologies that address the same fundamental problem. Here, again, we can draw inspiration from the Internet: Proprietary solutions are unlikely to scale up to the point of addressing the energy problems at hand. New initiatives driven by industry such as EcoPort (formerly CTA 2045) and Matter (formerly Connected Home over IP) hold promise for secure, low-latency communications with devices made by different manufacturers. IEEE technical committees, working groups, and task forces are also playing supporting roles, such as the IEEE Power and Energy Society’s Smart Buildings, Loads, and Customer Systems technical committee. We hope that in the future these efforts will seamlessly support the device-driven “packetization” concepts described here, and not just serve traditional top-down communication and control architectures.

What’s also needed are incentives for electricity customers to shift their energy usage. Right now, the daily cost of electricity for a residential water heater is about the same, regardless of when the heater turns on. There’s no financial benefit to the homeowner to run the water heater when renewable energy supply is high or the wholesale electricity price is low. Regulators, utilities, and others will need to rethink and redesign incentives and flexible-demand programs to ensure that the contributions and rewards are fair and equitable across all customers. They will also need to educate consumers about how the program works.

There is plenty of precedent for solving such technical and policy challenges. A public system that is fair, responsive, accessible, reliable, resilient, and scalable sounds a lot like the Internet. Packetized energy management, with its core design modeled on the Internet’s data communications, would deliver those same important benefits. As we transition to a new kind of grid, based on distributed and renewable generation, we’ll need new technology and new paradigms. Fortunately, we have a time-tested model that is showing us the way. 

This article appears in the February 2022 print issue as “Packetizing the Power Grid.”

https://techxplore.com/news/2022-01-team-algorithm.html


JANUARY 25, 2022

Team develops new algorithm to calculate the best shapes for things to come

by University of Michigan

algorithm
Credit: CC0 Public Domain

Maximizing the performance and efficiency of structures—everything from bridges to computer components—can be achieved by design with a new algorithm developed by researchers at the University of Michigan and Northeastern University.

It’s an advancement likely to benefit a host of industries where costly and time-consuming trial-and-error testing is necessary to determine the optimal design. As an example, look at the current U.S. infrastructure challenge—a looming $2.5 trillion backlog that will need to be addressed with taxpayer dollars.

Planners searching for the best way to design a new bridge need to answer a string of key questions. How many pillars are needed? What diameter do those pillars need to be? What should the radius of the bridge’s arch be? The new algorithm can determine the combination that gives the highest load capacity with lowest cost.

The team tested their algorithm in four optimization scenarios: Designing structures to maximize their stiffness for carrying a given load; designing the shape of fluid channels to minimize pressure loss; creating shapes for heat transfer enhancement; and minimizing the material of complex trusses for load bearing. The new algorithm reduced the computational time needed to reach the best solution by roughly 100 to 100,000 times over traditional approaches. In addition, it outperformed all other state-of-the-art algorithms.

“It’s a tool with the potential to influence many industries—clean energy, aviation, electric vehicles, energy efficient buildings,” said Wei Lu, U-M professor of mechanical engineering and corresponding author of the study in Nature Communications.

The new algorithm plays in a space called topology optimization—how best to distribute materials within a given space to get the desired results.

At times, the optimal shape can be counterintuitive. The design on the left, produced by the new algorithm, represents the best design among the three for copper structures in wax that remove heat from the copper pipe. The gradient-optimized design in the middle and the approximated design on the right would be slower at drawing heat out of the pipes. Credit: Changyu Deng, Wei Lu Research Group

“If you really want to design something rationally, you’re talking about a large number of calculations, and doing those can be difficult with time and cost considerations,” Lu said. “Our algorithm can reduce the calculations and facilitate the optimization process.”

Heat sinks—components that transfer heat from a computer’s central processing unit to the outside air—represent another shape that can be optimized via U-M’s algorithm. Traditional heat sinks have been designed with multiple parallel fins along surfaces. Topology optimization shows a more efficient heat transfer is achieved when those fins are shaped like trees.

“With optimal shapes in mind, engineers find a good trade-off between cooling speed and manufacturing cost,” said Changyu Deng, U-M graduate student research assistant in mechanical engineering.

The new algorithm uses what are known as “nongradient” optimization methods, an advanced approach to optimization problems. Optimization can be imagined as turning all of the variables in a design into a mountainous landscape in which the best design is in the bottom of the lowest valley. Gradient-based optimization slides downhill until it reaches a valley—while easy and fast, it won’t necessarily find the lowest valley.

Nongradient methods jump from mountain to mountain, providing a better sense of the terrain. While versatile, they take a lot of computing resources and need to be accelerated to be feasible for applications. The new algorithm does this by first approximating the system by machine learning and then using a self-directed learning scheme to leap from mountain to mountain in search of that lowest valley.
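The flavor of that two-step loop can be sketched in a toy example. This is a simplified stand-in for the paper's self-directed online learning scheme—it uses a crude nearest-neighbor surrogate, plain random search, and an invented test objective, purely to illustrate the "cheap model, expensive evaluations" division of labor:

```python
import math
import random

def expensive_objective(x):
    """Stand-in for a costly simulation with many local minima."""
    return math.sin(5 * x) + 0.1 * (x - 2) ** 2

def surrogate(samples):
    """Crude nearest-neighbor model of the sampled landscape."""
    def predict(x):
        return min(samples, key=lambda s: abs(s[0] - x))[1]
    return predict

def optimize(n_init=20, n_rounds=5, seed=0):
    rng = random.Random(seed)
    samples = [(x, expensive_objective(x))
               for x in (rng.uniform(-3, 7) for _ in range(n_init))]
    for _ in range(n_rounds):
        model = surrogate(samples)
        # "Jump from mountain to mountain" cheaply on the surrogate...
        candidates = [rng.uniform(-3, 7) for _ in range(200)]
        best = min(candidates, key=model)
        # ...and spend an expensive evaluation only on the promising point.
        samples.append((best, expensive_objective(best)))
    return min(samples, key=lambda s: s[1])
```

The savings come from the ratio: hundreds of surrogate queries cost about as much as one call to the real objective, so nongradient exploration becomes affordable.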

“This research can dramatically accelerate non-gradient optimizers for topology optimization to make nongradient methods feasible. In this way, more complicated problems can be tackled,” Deng said.

Beyond infrastructure and cost issues, the algorithm can be utilized for any shape optimization projects where maximizing performance is the goal. Future applications include optimizing battery electrode morphology, vehicle frames and shells, structures of buildings, and even more complex optimization problems outside of topology optimization.



More information: Changyu Deng et al, Self-directed online machine learning for topology optimization, Nature Communications (2022). DOI: 10.1038/s41467-021-27713-7

Journal information: Nature Communications

Provided by University of Michigan

https://phys.org/news/2022-01-team-microscope-image-microbes-soil.html


JANUARY 27, 2022

Team develops microscope to image microbes in soil and plants at micrometer scale

by Anne M Stark, Lawrence Livermore National Laboratory

Capturing microbes in soil and plants
LLNL researchers used multiple imaging modes to generate contrast and chemical information for soil microorganisms in roots, minerals and plants like switchgrass, shown here. Credit: USDA

Lawrence Livermore National Laboratory (LLNL) scientists have developed a custom microscope to image microbes in soil and plants at the micrometer scale.

Live imaging of microbes in soil would help scientists understand how soil microbial processes occur on the scale of micrometers, where microbial cells interact with minerals, organic matter, plant roots and other microorganisms. Because the soil environment is both heterogeneous and dynamic, these interactions may vary substantially within a small area and over short timescales.

Imaging biogeochemical interactions in complex microbial systems, such as those at the soil-root interface, is crucial to studies of climate, agriculture and environmental health but complicated by the three-dimensional (3D) collocation of materials with a wide range of optical properties.

Microaggregates (<250 μm), formed from the microbial decomposition of soil organic matter, are inhabited by distinct microbial communities, potentially leading to highly divergent metabolisms and functions within the volume of soil that is typically sampled for molecular, genomic or physiochemical analyses. In addition to this microbial heterogeneity, microorganisms also respond rapidly to changes in subsurface temperature, moisture, nutrient availability, signaling molecules and other conditions.

Researchers have pursued a large range of imaging techniques in efforts to understand the spatial and temporal aspects of these processes, but the combined characteristics of soil and microbes, including physical properties and length scales, continue to make monitoring and characterizing microbe-plant-soil interactions over time a significant challenge.

In recent years, some of the most impressive advances in imaging in soil have been achieved with X-ray computed tomography and magnetic resonance imaging. These modes are notable because they are capable of imaging deep into soil, and therefore they can provide unprecedented insight into plant root architecture, soil structure and even water movement. They have even been used for live imaging.

But these same methods are unable to image microbes such as individual hyphae and bacteria because of contrast or resolution limitations. The LLNL researchers turned to optical methods—imaging with light in the ultraviolet, visible and infrared spectrum—that allowed them to image microbes in soil and plants.

“We wanted to image in the optical range because it is convenient, gentle and fast, but we knew that we needed to take a new approach to generating contrast to be able to image microbes in natural matrices,” said LLNL chemist Peter Weber, the project lead.

The team developed a label-free multiphoton nonlinear optics approach using multiple imaging modes to generate contrast and chemical information for soil microorganisms in roots and minerals.

“Multiphoton microscopy has multiple advantages over single-photon methods, like standard fluorescence imaging and Raman,” said LLNL physicist Janghyuk Lee, the lead author. “The No. 1 advantage is that it provides high signal with low risk of sample damage.”

The approach the LLNL team developed enables a strong signal for general microbe, plant and mineral imaging; high contrast, label-free chemical imaging that can target diagnostic biomolecules and minerals; very strong signals from specific minerals and some biomolecules; and higher information content, deeper penetration, less scattering, and less photodamage compared to confocal microscopy. The research appears in the journal Environmental Science and Technology.

Using this instrument, the team imaged symbiotic arbuscular mycorrhizal fungi structures within unstained plant roots in 3D to 60 μm depth. High-quality imaging was possible at up to 30 μm depth in a clay particle matrix and at 15 μm in a complex soil preparation.

“Our next step is to integrate adaptive optics into this system and attempt to image deeper,” said LLNL physicist Ted Laurence, a senior author on the study.

The technique allowed the researchers to identify previously unknown lipid droplets in the symbiotic fungus, Serendipita bescii. They also visualized unstained putative bacteria associated with the roots of Brachypodium distachyon in a soil microcosm.

“Our results show that this multimodal approach holds significant promise for rhizosphere and soil science research,” said LLNL physicist Sonny Ly, a senior author on the study. “We are particularly excited about the microscope’s potential for chemical imaging, such as identifying lipids in the fungus.”

Other LLNL scientists include Rachel Hestrin, Erin Nuccio, Jennifer Pett-Ridge, Keith Morrison, Christina Ramon and Ty Samo.



More information: Janghyuk Lee et al, Label-Free Multiphoton Imaging of Microbes in Root, Mineral, and Soil Matrices with Time-Gated Coherent Raman and Fluorescence Lifetime Imaging, Environmental Science & Technology (2022). DOI: 10.1021/acs.est.1c05818

Journal information: Environmental Science & Technology

Provided by Lawrence Livermore National Laboratory

https://fortune.com/2022/01/29/neuralink-elon-musk-brain-implant-startup-high-pressure-workplace/


Neuralink former employees say Elon Musk applies relentless pressure and instills a culture of blame

BY JEREMY KAHN AND JONATHAN VANIAN

January 29, 2022 7:00 AM PST

Elon Musk has always said that Neuralink, the company he created in 2016 to build brain-computer interfaces, would do amazing things: Eventually, he says, it aims to allow humans to interact seamlessly with advanced artificial intelligence through thought alone. Along the way, it would help to cure people with spinal cord injuries and brain disorders ranging from Parkinson’s to schizophrenia.

Now the company is approaching a key test: a human clinical trial of its brain-computer interface (BCI). In December, Musk told a conference audience that “we hope to have this in our first humans” in 2022. In January, the company posted a job listing for a clinical trial director, an indication that it may be on track to meet Musk’s suggested timeline.

But even as it approaches this milestone, the company has been plagued by internal turmoil, including the loss of key members of the company’s founding team, according to a half-dozen former employees interviewed by Fortune—in no small part because of the pressure-cooker culture Musk has created.

Most of these former employees requested anonymity, concerned about violating nondisclosure agreements and the possibility of drawing Musk’s ire. Musk and Neuralink did not respond to multiple requests for comment for this story.

Musk has put the startup under unrelenting pressure to meet unrealistic timelines, these former employees say. “There was this top-down dissatisfaction with the pace of progress even though we were moving at unprecedented speeds,” says one former member of Neuralink’s technical staff, who worked at the company in 2019. “Still Elon was not satisfied.” Multiple staffers say company policy, dictated by Musk, forbade employees from faulting outside suppliers or vendors for a delay; the person who managed that relationship had to take responsibility for missed deadlines, even those outside their control.

Employees were constantly anxious about angering Musk by not meeting his ambitious schedules, former employees said. “Everyone in that whole empire is just driven by fear,” another former employee says, referring to Musk’s businesses, including Neuralink. This culture of blame and fear, former employees said, contributed to a high rate of turnover. Of the eight scientists Musk brought in to help establish Neuralink with him, only two, Dongjin Seo and Paul Merolla, remain at the company.

Read the Fortune feature. Inside Neuralink, Elon Musk’s secretive startup: A culture of blame, impossible deadlines, and a missing CEO.

The pressure could be particularly problematic because of the multiple tough scientific and engineering challenges Neuralink was tackling. Tim Hanson, a scientist at the Janelia Research Campus of the Howard Hughes Medical Institute in Ashburn, Va., was part of Neuralink’s founding team, working on its surgical robot as well as animal studies using brain-computer interfaces. “There’s this mismatch,” he says, between the speed at which engineering obstacles can be solved and the more deliberative pace of fundamental science. “Basic science is basically slow,” says Hanson, who left the company in 2018.

Engineers sometimes had to make decisions about issues such as electrode design before relevant data was available from scientific teams working on animal research. Animal research can take months and years; the engineers were under pressure to act in days and weeks. There were also delays caused by Neuralink’s need to fabricate custom-designed computer chips, one former employee said. Musk, meanwhile, wanted to move into human implantation as fast as possible.

Tensions of this sort aren’t atypical for startups working at the cutting edge of a new technology, and similar pressure from Musk has been chronicled at his other companies, including SpaceX and Tesla. But the issues certainly add to doubts about whether Neuralink will be able to live up to the hype Musk has created with his breezy pronouncements about what its technology will be able to do.

https://www.theregister.com/2022/01/28/quantum_technique_scheduling_chats_with/


Not Azure thing: Using MS’s Quantum to schedule chats with spacecraft on the DSN

Oh boy

Richard Speed
Fri 28 Jan 2022 // 18:13 UTC


While Microsoft’s Azure Quantum continues to hover between vapourware and hardware – a state of quantum if you will – NASA boffins have been putting tech inspired by the research to work in spacecraft communications.

As for those that think the Moon landings never happened, just wait until you hear about scalable quantum computing.

In this instance, the fabled hardware is not being put to use by engineers and scientists at NASA’s Jet Propulsion Laboratory. Instead, the team is looking for ways of optimising communication with missions via the very finite resource of the Deep Space Network (DSN).

The DSN is a network of large radio antennas spread around the Earth and is used to keep in touch with spacecraft as far away as the Voyagers as well as new kid on the block, the James Webb Space Telescope.

The problem is one of scheduling: all the missions need to communicate and, according to Microsoft, this results in “several hundred weekly requests when each spacecraft is visible to the antenna.”

The optimisation problem is not an unusual one, and one faced by many industries. Sure, one can throw algorithms utilising methods such as Monte Carlo simulation at the problem, but the possibilities afforded by quantum techniques are intriguing (even if one must still turn to traditional hardware in the absence of much in the way of scalable real-world quantum chippery).

“Quantum computers,” said Microsoft in a lengthy set of case studies [PDF], “can naturally represent random distributions as quantum states, and therefore have the potential to provide better solutions than today’s classical optimisation algorithms.”

As for JPL, when the Microsoft team started, it took a run-time of two hours to produce a schedule. Using Azure Quantum brought the time down to 16 minutes, and a custom solution brought things down further to approximately two minutes. Being able to spit out multiple candidate schedules makes for more agile mission planning.
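Microsoft's write-up doesn't detail the scheduling formulation, but a stripped-down version of the problem looks like this: each mission request has a visibility window and a required contact duration, and one antenna can serve one spacecraft at a time. A classic greedy heuristic—serve the request whose window closes soonest—produces a feasible baseline schedule of the kind the optimisers try to beat (mission names and times below are invented for illustration):

```python
# Toy single-antenna DSN scheduler (illustrative only, not JPL's formulation).
def greedy_schedule(requests):
    """requests: list of (mission, window_start, window_end, duration).
    Returns (mission, contact_start, contact_end) tuples for requests that fit,
    serving windows that close soonest first."""
    schedule, antenna_free = [], 0.0
    for mission, start, end, dur in sorted(requests, key=lambda r: r[2]):
        begin = max(antenna_free, start)
        if begin + dur <= end:          # fits inside the visibility window
            schedule.append((mission, begin, begin + dur))
            antenna_free = begin + dur
    return schedule
```

A real formulation would juggle many antennas, priorities, and several hundred weekly requests at once—the combinatorial blow-up that makes quantum-inspired solvers attractive.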

While Azure Quantum remains resolutely in preview and scalable quantum computers have yet to trouble reality, it’s heartening to see some of the results of all that research being used to solve problems both on and off world.

https://www.cp24.com/news/what-s-known-about-stealth-version-of-omicron-1.5757272?cache=%2Ffeed%2F%3FclipId%3D68597

What’s known about ‘stealth’ version of Omicron?

Laura Ungar, The Associated Press
Published Thursday, January 27, 2022 3:38PM EST
Last Updated Thursday, January 27, 2022 3:38PM EST

Scientists and health officials around the world are keeping their eyes on a descendant of the omicron variant that has been found in at least 40 countries, including the United States and Canada.

This version of the coronavirus, which scientists call BA.2, is widely considered stealthier than the original version of omicron because particular genetic traits make it somewhat harder to detect. Some scientists worry it could also be more contagious.

But they say there’s a lot they still don’t know about it, including whether it evades vaccines better or causes more severe disease.

WHERE HAS IT SPREAD?

Since mid-November, more than three dozen countries have uploaded nearly 15,000 genetic sequences of BA.2 to GISAID, a global platform for sharing coronavirus data. As of Tuesday morning, 96 of those sequenced cases came from the U.S.

“Thus far, we haven’t seen it start to gain ground” in the U.S., said Dr. Wesley Long, a pathologist at Houston Methodist in Texas, which has identified three cases of BA.2.

The mutant appears much more common in Asia and Europe. In Denmark, it made up 45% of all COVID-19 cases in mid-January, up from 20% two weeks earlier, according to Statens Serum Institut, which falls under the Danish Ministry of Health.

WHAT’S KNOWN ABOUT THIS VERSION OF THE VIRUS?

BA.2 has lots of mutations. About 20 of them in the spike protein that studs the outside of the virus are shared with the original omicron. But it also has additional genetic changes not seen in the initial version.

It’s unclear how significant those mutations are, especially in a population that has encountered the original omicron, said Dr. Jeremy Luban, a virologist at the University of Massachusetts Medical School.

For now, the original version, known as BA.1, and BA.2 are considered subsets of omicron. But global health leaders could give it its own Greek letter name if it is deemed a globally significant “variant of concern.”

The quick spread of BA.2 in some places raises concerns it could take off.

“We have some indications that it just may be as contagious or perhaps slightly more contagious than (original) omicron since it’s able to compete with it in some areas,” Long said. “But we don’t necessarily know why that is.”

An initial analysis by scientists in Denmark shows no differences in hospitalizations for BA.2 compared with the original omicron. Scientists there are still looking into this version’s infectiousness and how well current vaccines work against it. It’s also unclear how well treatments will work against it.

Doctors also don’t yet know for sure if someone who’s already had COVID-19 caused by omicron can be sickened again by BA.2. But they’re hopeful, especially that a prior omicron infection might lessen the severity of disease if someone later contracts BA.2.

The two versions of omicron have enough in common that it’s possible that infection with the original mutant “will give you cross-protection against BA.2,” said Dr. Daniel Kuritzkes, an infectious diseases expert at Brigham and Women’s Hospital.

Scientists will be conducting tests to see if antibodies from an infection with the original omicron “are able to neutralize BA.2 in the laboratory and then extrapolate from there,” he said.

HOW CONCERNED ARE HEALTH AGENCIES?

The World Health Organization classifies omicron overall as a variant of concern, its most serious designation of a coronavirus mutant, but it doesn’t single out BA.2 with a designation of its own. Given its rise in some countries, however, the agency says investigations of BA.2 “should be prioritized.”

The UK Health Security Agency, meanwhile, has designated BA.2 a “variant under investigation,” citing the rising numbers found in the U.K. and internationally. Still, the original version of omicron remains dominant in the U.K.

WHY IS IT HARDER TO DETECT?

The original version of omicron had specific genetic features that allowed health officials to rapidly differentiate it from delta using a certain PCR test because of what’s known as “S gene target failure.”

BA.2 doesn’t have this same genetic quirk. So on the test, Long said, BA.2 looks like delta.

“It’s not that the test doesn’t detect it; it’s just that it doesn’t look like omicron,” he said. “Don’t get the impression that `stealth omicron’ means we can’t detect it. All of our PCR tests can still detect it.”

WHAT SHOULD YOU DO TO PROTECT YOURSELF?

Doctors advise the same precautions they have all along: Get vaccinated and follow public health guidance about wearing masks, avoiding crowds and staying home when you’re sick.

“The vaccines are still providing good defense against severe disease, hospitalization and death,” Long said. “Even if you’ve had COVID-19 before – you’ve had a natural infection – the protection from the vaccine is still stronger, longer lasting and actually … does well for people who’ve been previously infected.”

The latest version is another reminder that the pandemic hasn’t ended.

“We all wish that it was over,” Long said, ”but until we get the world vaccinated, we’re going to be at risk of having new variants emerge.”

The Associated Press Health and Science Department receives support from the Howard Hughes Medical Institute’s Department of Science Education. The AP is solely responsible for all content.