https://www.wired.com/story/how-underground-fiber-optics-spy-on-humans-moving-above/


How Underground Fiber Optics Spy on Humans Moving Above

Vibrations from cars and pedestrians create unique signals in cables. Now scientists have used the trick to show how Covid-19 brought life to a halt.

PHOTOGRAPH: LAWRENCE MANNING/GETTY IMAGES

WHEN LAST SPRING’S lockdown quieted the Penn State campus and surrounding town of State College, a jury-rigged instrument was “listening.” A team of researchers from the university had tapped into an underground telecom fiber optic cable, which runs two and a half miles across campus, and turned it into a kind of scientific surveillance device.

By shining a laser through the fiber optics, the scientists could detect vibrations from above ground thanks to the way the cable ever so slightly deformed. As a car rolled across the subterranean cable or a person walked by, the ground would transmit their unique seismic signature. So without visually surveilling the surface, the scientists could paint a detailed portrait of how a once-bustling community ground to a halt, and slowly came back to life as the lockdown eased.

They could tell, for instance, that foot traffic on campus almost disappeared in April following the onset of lockdown, and stayed gone through June. But after initially declining, vehicle traffic began picking up. “You can see people walking is still very minimal compared to the normal days, but the vehicle traffic actually is back to almost normal,” says Penn State seismologist Tieyuan Zhu, lead author on a new paper describing the work in the journal The Seismic Record. “This fiber optic cable actually can distinguish such a subtle signal.”

More specifically, it’s the frequency of the signal that tells the sources apart. A human footstep generates vibrations with frequencies between 1 and 5 hertz, while car traffic is more like 40 or 50 hertz. Vibrations from construction machinery jump up past 100 hertz.
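
Those bands suggest a simple way to label sources. Purely as an illustration (a minimal Python sketch, not the team’s actual pipeline), one could bin a vibration trace by its dominant frequency, using the band edges quoted above:

    import numpy as np

    def classify_vibration(trace, sample_rate_hz):
        """Label a ground-vibration trace by its dominant frequency band."""
        spectrum = np.abs(np.fft.rfft(trace))            # magnitude spectrum
        freqs = np.fft.rfftfreq(len(trace), d=1.0 / sample_rate_hz)
        dominant = freqs[np.argmax(spectrum[1:]) + 1]    # skip the DC bin
        if dominant <= 5:                                # footsteps: ~1-5 Hz
            return "pedestrian"
        if dominant <= 60:                               # road traffic: ~40-50 Hz
            return "vehicle"
        if dominant >= 100:                              # machinery: >100 Hz
            return "construction"
        return "unclassified"

    # Synthetic 45 Hz "car" signal: two seconds sampled at 500 Hz
    t = np.linspace(0, 2, 1000, endpoint=False)
    print(classify_vibration(np.sin(2 * np.pi * 45 * t), 500))  # -> vehicle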

Fiber optic cables work by perfectly trapping pulses of light and transporting them vast distances as signals. But when a car or person passes overhead, the vibrations introduce a disturbance, or imperfection: a tiny amount of that light scatters back to the source. Because the speed of light is a known quantity, the Penn State researchers could shine a laser through a single fiber optic strand and measure vibrations at different lengths of the cable by calculating the time it took the scattered light to travel. The technique is known in geoscience as distributed acoustic sensing, or DAS.
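
As a back-of-the-envelope sketch of that time-of-flight arithmetic (the refractive index below is a typical value for silica fiber, an assumption rather than a figure from the study):

    C_VACUUM = 299_792_458.0            # speed of light in vacuum, m/s
    GROUP_INDEX = 1.468                 # typical for silica fiber (assumption)
    v_fiber = C_VACUUM / GROUP_INDEX    # ~204,000 km/s inside the glass

    def scatter_position_m(round_trip_s):
        """Distance along the fiber where the light scattered back."""
        return v_fiber * round_trip_s / 2.0  # halved: the pulse goes out and back

    # An echo arriving 20 microseconds after the laser pulse left
    print(f"{scatter_position_m(20e-6):.0f} m down the cable")  # ~2,042 m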

A traditional seismograph, which registers shaking with the physical movement of its internal parts, only measures activity at one location on Earth. But using this technique, the scientists could sample over 2,000 spots along the 2.5 miles of cable—one every 6.5 feet—giving them a superfine resolution of activity above ground. They did this between March 2020, when lockdown set in, and June 2020, when businesses in State College had begun reopening.

Just from those vibrational signals, DAS could show that on the western side of campus, where a new parking garage was under development, there was no industrial activity in April as construction halted. In June, the researchers not only detected the vibrations from the restarted machinery, but could actually pick out the construction vehicles, which hummed along at a lower frequency. Still, they noted, by this time pedestrian activity on campus had barely recovered, even though some pandemic restrictions had eased.

DAS could be a powerful tool to track people’s movement: Instead of sifting through cell phone location data, researchers could tap into fiber optic cables to track the passage of pedestrians and cars. But the technology can’t exactly identify a car or person. “You can say if it’s a car, or if it’s a truck, or it’s a bike. But you cannot say, ‘Oh, this is a Nissan Sentra, 2019,’” says Stanford University geophysicist Ariel Lellouch, who uses DAS and, while not involved in this study, peer-reviewed it. “Anonymity of DAS is one of the biggest benefits, actually.”

Even if you wanted to track a person as they traveled through a city, they’d have to be continuously walking along the cable you’re monitoring. As soon as they’d veer off-course, you’d lose their seismic signal. “Roughly speaking, if you have a fiber and someone is walking along that fiber—let’s say it’s in the desert—and that’s the only person that’s walking, yes, you can track,” says Lellouch. “But you cannot attribute it to a specific person.” Basically, if you want to track an individual at a distance, you’d be way better off with binoculars or their cell data.

Lately, the use of DAS is booming across the sciences, thanks to “dark fiber.” As the internet grew in the 1990s, telecom companies began laying down a whole lot of fiber optic cable. The cable itself is relatively cheap compared to the labor it takes to dig the holes to lay it, so, in anticipation of the web boom, companies planted more than they needed. Today, much of that fiber is still unused, or “dark,” available for scientists to rent out for experiments.

Its availability depends on the location, though. “So maybe downtown New York, between the stock exchange and New Jersey, there’s a lot of contention for that fiber,” says Rice University geophysicist Jonathan Ajo-Franklin, who wasn’t involved in this new paper but is an associate editor at the journal publishing it. But, he adds, “going across rural Nevada on a long-haul route, maybe there’s extra that you can make use of.”

Unlike traditional seismometers, this cable is inexpensive and doesn’t require a source of power. With DAS, you just need an “interrogator” device that fires the laser and receives the data coming through the fibers. “So it’s really a great opportunity if you want to acquire this closely spaced data to make measurements of earthquakes or surface waves or urban mobility,” Ajo-Franklin says. For instance, Ajo-Franklin once used a 17-mile stretch of dark fiber near Sacramento to record 7 months of earthquakes, large and small.

Civil engineers are already using DAS to study soil deformation, and biologists are even using offshore fiber optic cables to listen in on whales. (Sound propagates as a vibration, after all.) “It’s just really exploding in terms of the applications,” says Ajo-Franklin. “People are embedding fibers in glaciers and dragging them behind boats in the free water column to make temperature measurements. It’s really kind of an amazing set of technologies.”

So the next time you’re out for a stroll, stop to appreciate the science that may be humming along under your feet. Or, if you’re feeling puckish, jump up and down really hard.

https://www.livescience.com/deleted-covid-19-gene-sequences-found.html


Scientist recovers coronavirus gene sequences secretly deleted last year in Wuhan

By Jeanna Bryner – Live Science Editor-in-Chief 4 days ago

He finds 13 sequences from some of the earliest cases in Wuhan.

The SARS-CoV-2 virus invades human cells by attaching to ACE2 receptors on the surfaces of those cells. (Image credit: Shutterstock)

Finding the origin story for SARS-CoV-2, the coronavirus responsible for nearly 3.9 million deaths worldwide, has been largely hampered by a lack of access to information from China, where cases first popped up.

Now, a researcher in Seattle has dug up deleted files from Google Cloud that reveal 13 partial genetic sequences from some of the earliest cases of COVID-19 in Wuhan, Carl Zimmer reported for The New York Times.

The sequences don’t tip the scales toward or away from one of the many theories about how SARS-CoV-2 came to be — they do not suggest the virus leaked from a high-security lab in Wuhan, nor do they suggest a natural spillover event. But they do firm up the idea that the novel coronavirus was circulating earlier than the first major outbreak at a seafood market.

Related: 14 coronavirus myths busted by science

In order to determine exactly how and where the virus originated, scientists need to find the so-called progenitor virus, the one from which all other strains descended. Until now, the earliest sequences have been primarily those sampled from cases at the Huanan Seafood Market in Wuhan, which was initially thought to be where the novel coronavirus first emerged at the end of December 2019. However, cases from early December and as far back as November 2019 had no ties to the market, indicating quite early in the pandemic that the virus emerged somewhere else.

There was one nagging issue with those first genetic sequences. Those from cases found at the market include three mutations that are missing in virus samples from cases that popped up weeks later outside of the market. The viruses missing those three mutations matched more closely with the coronaviruses found in horseshoe bats. Scientists are relatively certain that the novel coronavirus somehow emerged from bats, so it’s logical to assume the progenitor would also be missing those mutations. 

And now, Jesse Bloom of the Howard Hughes Medical Institute in Seattle has found that the deleted sequences — likely some of the earliest samples — were also devoid of those mutations. (Bloom is the lead author of a letter published in May in the journal Science urging an unbiased investigation into the origins of the coronavirus, Live Science reported.)

“They’re three steps more similar to the bat coronaviruses than the viruses from the Huanan fish market,” Bloom told The New York Times. This new data hints that the virus was circulating in Wuhan well before it showed up at the seafood market, Bloom said.
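
“Steps” here means single-nucleotide differences between aligned sequences. A toy sketch of that counting, using made-up fragments rather than real SARS-CoV-2 data:

    def mutation_distance(seq_a, seq_b):
        """Count positions where two aligned sequences differ (Hamming distance)."""
        assert len(seq_a) == len(seq_b), "sequences must be aligned to equal length"
        return sum(1 for a, b in zip(seq_a, seq_b) if a != b)

    # Hypothetical aligned fragments, purely illustrative:
    bat_like    = "ACGTTACGGA"
    early_wuhan = "ACGTTACGGA"  # lacks the three market mutations
    market      = "ACCTTACTGT"  # carries them

    print(mutation_distance(bat_like, early_wuhan))  # 0
    print(mutation_distance(bat_like, market))       # 3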

“This fact suggests that the market sequences, which are the primary focus of the genomic epidemiology in the joint WHO-China report … are not representative of the viruses that were circulating in Wuhan in late December of 2019 and early January of 2020,” Bloom wrote in his paper uploaded June 22 to the preprint database bioRxiv.

According to Zimmer, about a year ago, 241 genetic sequences from coronavirus patients went missing from an online database called the Sequence Read Archive, which is maintained by the National Institutes of Health (NIH).

Bloom noticed the missing sequences when he came across a spreadsheet in a study published in May 2020 in the journal PeerJ in which the authors list 241 genetic sequences of SARS-CoV-2 through the end of March 2020; the sequences were part of a Wuhan University project called PRJNA612766 and were supposedly uploaded to the Sequence Read Archive. He searched the archive database for the sequences and got the message “No items found,” Bloom wrote in the bioRxiv paper, which has not been peer-reviewed.
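
Readers can reproduce that lookup themselves. A minimal sketch using Biopython’s Entrez module, with the accession named above (NCBI asks for a contact email, and results may of course change over time):

    from Bio import Entrez

    Entrez.email = "you@example.com"  # NCBI requires a contact address

    # Search the Sequence Read Archive for the Wuhan University project accession
    handle = Entrez.esearch(db="sra", term="PRJNA612766")
    result = Entrez.read(handle)
    handle.close()

    print(result["Count"])   # "0" would reproduce Bloom's "No items found"
    print(result["IdList"])  # IDs of any SRA records still attached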

Related: 11 (sometimes) deadly diseases that hopped across species

His sleuthing revealed that the deleted sequences had been collected by Aisu Fu and Renmin Hospital of Wuhan University, and a preprint of the research published from those sequences (referred to as Wang et al. 2020) suggested they came from nose swab samples from outpatients with suspected COVID-19 early in the epidemic.

Bloom couldn’t find any explanation for why the sequences had been deleted, and his emails of inquiry to both corresponding authors received no response.

“There is no plausible scientific reason for the deletion: the sequences are perfectly concordant with the samples described in Wang et al. (2020a,b),” Bloom wrote in bioRxiv. “There are no corrections to the paper, the paper states human subjects approval was obtained, and the sequencing shows no evidence of plasmid or sample-to-sample contamination. It therefore seems likely the sequences were deleted to obscure their existence.”

Bloom notes several limitations to his study, primarily that the sequences are only partial and include no information to give a clear date or place of collection — information crucial to tracing the virus back to its origin.

Regardless, Bloom thinks that looking more deeply at archived data from the NIH and other organizations — and piecing together the sequences — could help to paint a clearer picture of both the origin and early spread of SARS-CoV-2, all without needing on-the-ground studies in China. 

Read more about the deleted sequences at The New York Times.

https://scitechdaily.com/new-and-improved-crispr-3-0-system-for-highly-efficient-gene-activation-in-plants/

New and Improved CRISPR 3.0 System for Highly Efficient Gene Activation in Plants

TOPICS: Agriculture, CRISPR, DNA, Food Science, Genetics, Nutrition, Plant Science, University of Maryland

By UNIVERSITY OF MARYLAND JUNE 27, 2021

CRISPR illustration. Credit: National Institutes of Health

Multiplexed gene activation system allows for four to six times the activation capacity of current CRISPR technology, with simultaneous activation of up to seven genes.

In a study in Nature Plants, Yiping Qi, associate professor of Plant Science at the University of Maryland (UMD), introduces a new and improved CRISPR 3.0 system in plants, focusing on gene activation instead of traditional gene editing. This third generation CRISPR system focuses on multiplexed gene activation, meaning that it can boost the function of multiple genes simultaneously. According to the researchers, this system boasts four to six times the activation capacity of current state-of-the-art CRISPR technology, demonstrating high accuracy and efficiency in up to seven genes at once. While CRISPR is more often known for its gene editing capabilities that can knock out genes that are undesirable, activating genes to gain functionality is essential to creating better plants and crops for the future.

“While my lab has produced systems for simultaneous gene editing [multiplexed editing] before, editing is mostly about generating loss of function to improve the crop,” explains Qi. “But if you think about it, that strategy is finite, because there aren’t endless genes that you can turn off and actually still gain something valuable. Logically, it is a very limited way to engineer and breed better traits, whereas the plant may have already evolved to have different pathways, defense mechanisms, and traits that just need a boost. Through activation, you can really uplift pathways or enhance existing capacity, even achieve a novel function. Instead of shutting things down, you can take advantage of the functionality already there in the genome and enhance what you know is useful.”

In his new paper, Qi and his team validated the CRISPR 3.0 system in rice, tomatoes, and Arabidopsis (the most popular model plant species, commonly known as rockcress). The team showed that many kinds of genes can be activated simultaneously, including genes that trigger earlier flowering to speed up the breeding process. But this is just one of the many advantages of multiplexed activation, says Qi.

“Having a much more streamlined process for multiplexed activation can provide significant breakthroughs. For example, we look forward to using this technology to screen the genome more effectively and efficiently for genes that can help in the fight against climate change and global hunger. We can design, tailor, and track gene activation with this new system on a larger scale to screen for genes of importance, and that will be very enabling for discovery and translational science in plants.”

While CRISPR is usually thought of as “molecular scissors” that cut DNA, this activation system uses a deactivated CRISPR-Cas9 that can only bind. Without the ability to cut, the system instead recruits activation proteins to specific genes of interest by binding to particular segments of DNA. Qi also tested his SpRY variant of CRISPR-Cas9, which greatly broadens the scope of what can be targeted for activation, as well as a deactivated form of his recent CRISPR-Cas12b system, to show versatility across CRISPR systems. This demonstrates the great potential of expanding multiplexed activation, which could change the way genome engineering works.
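
Conceptually, and purely as an illustration rather than the paper’s software, multiplexed activation amounts to dispatching one non-cutting Cas protein to several promoter addresses at once, each guide RNA supplying an address:

    from dataclasses import dataclass

    @dataclass
    class GuideRNA:
        target_gene: str
        promoter_site: str  # DNA address the deactivated Cas9 binds (illustrative)

    def multiplex_activate(guides, max_targets=7):
        """Toy model of CRISPR activation: dCas9 binds each guide's site and
        recruits activator proteins, boosting expression without cutting DNA.
        The paper reports up to seven genes activated simultaneously."""
        if len(guides) > max_targets:
            raise ValueError("demonstrated capacity is about 7 simultaneous targets")
        return {g.target_gene: "activated (dCas9 bound, activators recruited)"
                for g in guides}

    # FT and SOC1 are real flowering-time genes; the sites are placeholders
    guides = [GuideRNA("FT", "ATTGC..."), GuideRNA("SOC1", "GGCAT...")]
    print(multiplex_activate(guides))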

“People always talk about how individuals have potential if you can nurture and promote their natural talents,” says Qi. “This technology is exciting to me because we are promoting the same thing in plants – how can you promote their potential to help plants do more with their natural capabilities? That is what multiplexed gene activation can do, and it gives us so many new opportunities for crop breeding and enhancement.”

Reference: “CRISPR–Act3.0 for highly efficient multiplexed gene activation in plants” by Changtian Pan, Xincheng Wu, Kasey Markel, Aimee A. Malzahn, Neil Kundagrami, Simon Sretenovic, Yingxiao Zhang, Yanhao Cheng, Patrick M. Shih and Yiping Qi, 24 June 2021, Nature Plants.
DOI: 10.1038/s41477-021-00953-7

This work is funded by the National Science Foundation, Award #1758745 and #2029889.


https://www.cnet.com/news/telegram-gets-group-video-calls-animations/

Telegram gets group video calls, animations

The encrypted-messaging app serves up some new features.

Edward Moyer · June 26, 2021 4:53 p.m. PT

You look marvelous. A group video call, Telegram-style, on a tablet. (Telegram; screenshot by CNET)

Encrypted-messaging app Telegram added several new features this week, including group video calls. “Voice chats in any group can now seamlessly turn into group video calls — just tap the camera icon to switch your video on,” the company said in a blog post Friday.

You can tap a video to have it go full screen, and you can pin a video to keep it in the key position as other people hop onto the call. For now, group video calls top out at 30 people, but that limit will increase soon, Telegram said. You can share your screen as well. The feature is available on all devices, including desktops and tablets.

Telegram also added animated backgrounds and message animations, the company said in another post. “Meet animated backgrounds for chats — first time in a messaging app!” it said. “These multi-color gradient wallpapers are generated algorithmically and move beautifully every time you send a message.”

You can get at them through Settings. In iOS, it’s Appearance / Chat Background. In Android, it’s Chat Settings / Change Chat Background. You can also create your own backgrounds by fiddling with colors, patterns and other settings.

The message animations include emojis that blink, drool, stick out a tongue, and lots more. For details, check out Telegram’s post about group video calls and its post about animated backgrounds and message animations.

Telegram saw a surge in new users earlier this year after Tesla CEO and Twitter enthusiast Elon Musk sent tweets urging followers to drop Facebook-owned WhatsApp over recent privacy policy changes.

Read more: Signal, WhatsApp and Telegram: All the major security differences between messaging apps

https://kitchener.ctvnews.ca/technology-helping-to-map-out-kitchener-s-trail-and-park-system-1.5487021

Technology helping to map out Kitchener’s trail and park system

Jessica Smith, CTV News Kitchener Videographer

@jessicasmithctv · Published Saturday, June 26, 2021 4:31 p.m. EDT · Last updated Saturday, June 26, 2021 7:17 p.m. EDT

Video caption: Kitchener is teaming up with a local university student to put the city’s trail system on Google Street View.


KITCHENER — Kitchener wants to add its trails and parks to Google Street View, and it’s enlisting the help of a University of Guelph student to make it happen.

You may have seen a bright orange tricycle maneuvering around the city.

The trike, and its driver, are part of a new initiative to map out 125 km of trails and parks, giving people the option to explore them virtually before they head out in real life.

“There’s people always looking for imagery, and street view is obviously a very popular option for people to peruse around, and maybe see where they enter a trail,” says Courtney Zinn, the innovation lab director for the City of Kitchener. “Right now they can’t sort of navigate down the trail, so we’re hoping to make that available.”

The man making that happen is Zhelong Chen.

The University of Guelph student has been pedaling around the city almost daily, covering between 10 and 20 km at a time.

“I have a 360 camera that is attached to our helmet, and wherever I go I just take a 360 image of the trail system,” he says.

Faces and license plates are later blurred.
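
The article doesn’t say what software the city uses for that step, but the idea can be sketched with OpenCV’s bundled face detector; the file names are hypothetical, and a license-plate cascade would slot in the same way:

    import cv2

    # OpenCV ships a Haar-cascade frontal-face detector; this is a stand-in
    # for whatever pipeline the city actually uses
    cascade = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

    img = cv2.imread("trail_photo.jpg")  # hypothetical 360-capture frame
    gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)

    for (x, y, w, h) in cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5):
        # Blur each detected face region in place
        img[y:y+h, x:x+w] = cv2.GaussianBlur(img[y:y+h, x:x+w], (51, 51), 0)

    cv2.imwrite("trail_photo_blurred.jpg", img)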

“It’s to really allow people to get out and explore city parks and trails,” says Zinn. “To be able to visit them virtually and maybe discover a new place.”

She says the city is starting with multi-use trails that are maintained over the winter months, before moving onto other trails later this summer.

Signs have been installed along the routes to let people know filming is taking place.

Chen says the best part of pedaling around is the positive reactions he’s received.

“Most of them are really interested in the bike. It’s an orange-painted bike and it’s got an electric motor, so a lot of people are coming up to me. They just say to me: ‘Wow, this is a cool bike.’ And having a 30-second quick talk.”

The trail mapping initiative is set to wrap up at the end of August.

The routes will be available on Google Street View or on the city’s website.

https://www.livescience.com/how-fast-does-earth-move.html

How fast does the Earth move?

By JoAnna Wendel – Live Science Contributor about 13 hours ago

Earth races around the sun and spins on its axis.

Earth's curvature seen from space

Earth is hurtling through space. (Image credit: Adastra via Getty Images)

Earth is constantly moving. As it zooms around the sun, Earth also spins on its axis, like a basketball on the tip of a player’s finger. 

So, how fast is Earth moving? In other words, how fast is it rotating on its axis and how fast is it orbiting the sun? To go even further, how fast is the solar system orbiting the Milky Way galaxy?

Now that your head is spinning just like Earth, let’s start with the planet itself. Earth turns on its own axis about once every 24 hours (or, to be precise, every 23 hours, 56 minutes and 4 seconds). Earth measures 24,898 miles (40,070 kilometers) in circumference, so when you divide distance by time, that means the planet is spinning 1,037 mph (1,670 km/h).
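
That division is easy to check yourself. A quick sketch using the figures above (the article’s 1,037 mph comes from dividing by a flat 24 hours; the sidereal day gives closer to 1,040 mph):

    circumference_mi = 24_898                 # Earth's circumference, as quoted
    sidereal_day_hr = 23 + 56/60 + 4/3600     # 23 h 56 min 4 s

    print(f"{circumference_mi / 24:.0f} mph")               # ~1,037 mph
    print(f"{circumference_mi / sidereal_day_hr:.0f} mph")  # ~1,040 mph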

Related: What if Earth started spinning backward?

Meanwhile, Earth orbits the sun at about 67,000 mph (110,000 km/h), according to Ask an Astronomer, a blog run by astronomers at Cornell University in Ithaca, New York. Scientists know that by taking the distance Earth travels around the sun and dividing it by the length of time Earth takes to complete one orbit (about 365 days). 

Ask an Astronomer explains the math: To calculate Earth’s distance around the sun, all scientists need to do is to determine the circumference of a circle. We know that the Earth is, on average, about 93 million miles (149.6 million km) away from the sun, and we know that it travels in a generally circular path (it’s actually more elliptical, but it’s simpler to do this equation with a circle). That distance between the sun and Earth is the radius of the circle. To get the circumference of that circle, the equation is 2*pi*radius, or 2*3.14*93 million miles. Once the circumference (the distance Earth travels around the sun in one orbit) is calculated, its orbital speed can be determined. 
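
Run the same arithmetic in code and the quoted figure drops out:

    import math

    radius_mi = 93_000_000        # mean Earth-sun distance, as above
    year_hr = 365.25 * 24         # one orbit, in hours

    circumference_mi = 2 * math.pi * radius_mi
    print(f"{circumference_mi / year_hr:,.0f} mph")  # ~66,660 mph, "about 67,000"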

The solar system, which includes our sun and all of the objects that orbit it, is also moving; it’s located within the Milky Way, and it orbits around the galaxy’s center. Scientists know that the solar system is orbiting a galactic center based on observations of other stars, said Katie Mack, a theoretical astrophysicist at North Carolina State University. If stars very far away seem to be moving, that’s because the solar system is moving relative to the position of those faraway stars.

To bring this concept back down to Earth, “If I start walking, I can tell that I’m moving because the buildings I pass by seem to be moving,” from in front to behind me, Mack said. If she looks at something more distant, like a mountain on the horizon, it moves a little slower because it’s farther away than the buildings, but it still moves relative to her position.

By studying other stars’ movements relative to the sun, scientists have determined that the solar system orbits the Milky Way’s galactic center at about 447,000 mph (720,000 km/h).

Then there’s the entire Milky Way, which is pulled in different directions by other massive structures, such as other galaxies and galaxy clusters. Just like scientists can tell that the solar system is moving based on the relative movement of other stars, they can use the relative movement of other galaxies to determine how fast the Milky Way is moving through the universe. 

Even though everything is moving all the time, living organisms on Earth’s surface don’t feel it, for the same reason passengers on an airplane don’t feel themselves zipping through the air at hundreds of miles an hour, Mack said. During takeoff, passengers feel the plane’s acceleration as it speeds down the runway and lifts off; that weighted feeling is caused by the plane’s quickly changing speed. But once the plane is flying at cruising altitude, passengers won’t feel the speed of hundreds of miles per hour because the speed doesn’t change.

The passengers won’t feel the speed because those passengers are actually moving at the same speed and direction, or velocity, as the airplane. There’s no relative motion — everyone sitting on the airplane is moving at the same speed as the airplane itself. The only way passengers might notice their and the plane’s movement is by looking out the window at the passing landscape.

Similarly, humans standing on the surface of our planet don’t feel Earth hurtling around the sun, because they’re hurtling along with it at the same speed.

Originally published on Live Science.

https://www.bbc.com/news/science-environment-57628653

China releases videos of its Zhurong Mars rover

Jonathan Amos
Science correspondent
@BBCAmos on Twitter · Published 14 hours ago

Media caption: The moment China’s Zhurong robot landed on Mars

China’s space agency has released video of its Zhurong rover trundling across the surface of Mars.

The pictures were acquired by a wireless camera that the robot had placed on the ground.

The new media release also includes sequences from Zhurong’s landing in May, showing the deployment of its parachute system and the moment of touchdown.

The six-wheeled robot is investigating a region known as Utopia Planitia.

The China National Space Agency (CNSA) says Zhurong has driven 236m in 42 Mars sols (as of 27 June). A sol is a Martian day. It lasts slightly longer than an Earth day, at 24 hours and 39 minutes.
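
Those two figures give the rover’s unhurried average pace:

    distance_m = 236          # meters driven, per CNSA
    sols = 42                 # Martian days elapsed
    sol_hours = 24 + 39/60    # one sol: 24 h 39 min

    print(f"{distance_m / sols:.1f} m per sol")                 # ~5.6 m
    print(f"{distance_m / (sols * sol_hours):.2f} m per hour")  # ~0.23 m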

Media caption: The Zhurong robot wiggles its wheels as it stands next to its landing platform

The latest movies were relayed back to Earth via the Tianwen-1 satellite which orbits the Red Planet.

“The orbiter and the Mars rover are in good working condition, reporting safely from Mars to the party and the motherland, and sending distant blessings on the century of the party’s founding,” a CNSA press statement said.

The first of July will mark the 100th anniversary of the founding of the Chinese Communist Party.

A video release was expected, especially of the landing, which occurred on 14 May. Some preview stills looking up at the parachute system from the rover’s entry capsule were handed out last week.

In the movie version, however, we see the envelope inflate in the rarefied Martian atmosphere. We also see Zhurong and its landing platform drop away from the “backshell” of the capsule; and finally a look-down camera captures the moment of touchdown as the platform’s braking rocket motor blasts the surface clear of dust.

Media caption: The Zhurong robot backs away from a wireless camera on the ground

There are three videos from the surface. The first, presumably taken shortly after Zhurong put the wireless camera on the ground, shows the robot backing away.

The second video shows Zhurong wiggling its wheels while sitting next to its landing platform. The CNSA had previously released a still from this scene.

And, finally, the third video details the rover’s roll down the ramp that got it off the landing platform on to the surface. What’s interesting about this movie is that we get sound as well. We can hear the robot’s locomotion system in action.

The nature of Mars’ atmosphere means noises don’t sound quite the same as they do on Earth. They seem somewhat muffled.

Media caption: Listen to Zhurong as it rolls off its landing platform

Scientists are hoping to get at least 90 Martian days of service out of Zhurong.

The robot looks a lot like the American space agency’s (Nasa) Spirit and Opportunity vehicles from the 2000s.

It weighs some 240kg. A tall mast carries cameras to take pictures and aid navigation; five additional instruments will investigate the mineralogy of local rocks and the general nature of the environment, including the weather.

Like the current American rovers (Curiosity and Perseverance), Zhurong has a laser tool to zap rocks to assess their chemistry. It also has a radar to look for sub-surface water-ice – a capability it shares with Perseverance.

Videos of the landing of Perseverance were released by Nasa shortly after its descent to the Martian surface on 18 February.

https://www.mindbodygreen.com/articles/sleep-tips-for-allergy-sufferers

Allergies Messing With Your Sleep? Try This Allergist’s Top Tips

By Sarah Regan, mbg Spirituality & Relationships Writer

Image by Sergey Filimonov / Stocksy · June 26, 2021, 9:12 a.m.

As if daytime wasn’t rough enough for those who suffer from seasonal allergies, sniffles and sneezes can really disrupt sleep, too. To find out why—and what to do about it—we asked allergist and immunologist Heather Moday, M.D., for her top tips to help allergy sufferers fall asleep easily and wake up rested.

Why seasonal allergies can mess with sleep.

According to Moday, there are a ton of reasons why allergies can interfere with a good night’s sleep. Of course, allergies disrupt your breathing, thanks to inflammation, nasal congestion, and postnasal drip, and can also cause coughing and sneezing, making it harder to fall asleep.

If you deal with sinus headaches or sinus pressure, the pain can also make it difficult to relax at the end of the day. You may have also noticed congestion gets worse when you lie down, due to more blood flow to your head.

“When we sleep, one of the major disruptive factors is any sort of respiratory obstruction,” Moday tells mbg, adding, “Just as sleep apnea causes us to wake up multiple times during the night, if you can’t breathe through your nose, it’s going to cause frequent awakenings or snoring.”

All of these factors can lead to sleep that’s not as restorative as it could be.

How to rest easier:

1. Use a neti pot or saline rinse.

“I’m a big fan of using neti pots to clear out the nasal passages and sinuses,” Moday says, “because you get a lot of pollen that collects when we breathe, and when that collects in the nasal passages and we sleep with that at night, it makes us even more inflamed.” Using a neti pot, or even a saline rinse before bed (and potentially in the morning, as well), she adds, can really help clear out mucus and allergens that are bothering you.

2. Steam with essential oils.

Steaming with essential oils like eucalyptus or rosemary can have a soothing effect on the sinuses, helping them to open up, Moday notes. All you need is a big bowl full of steaming hot water with 10 to 15 drops of essential oils that help with allergies. Drape a towel over your head, and hover your face over the water, breathing the steam in deeply.

3. Sleep in a cool room.

Moday notes that warmer temperatures can cause nasal passages to swell, so allergy sufferers will want to keep their rooms relatively cool. This aligns with the general rule of thumb that around 65 degrees Fahrenheit is the best temperature for quality sleep.

4. Shower before bed.

During the height of allergy season, pollen can get everywhere—including our hair. As such, “Showering at bedtime is a good idea, especially if you have long hair,” Moday says. That way, you’ll wash the pollen off yourself before you can track it into bed with you.

5. Wash your bedding frequently.

Nothing will flare up allergies like bedding that needs a deep clean—especially dirty pillowcases. “Change your pillowcase frequently—if not every night, every other night,” Moday suggests, adding to make sure you’re washing all your bedding in hot water and putting it in a hot dryer, which will also kill dust mites.

6. Consider investing in an air purifier.

“People don’t realize how much pollen comes into their house and gets recirculated with the dust,” Moday explains. Using a HEPA air purifier in your bedroom can help improve the air quality and keep allergens from irritating you.

7. See an allergist.

And of course, when all else fails, Moday notes it never hurts to see an allergist if you’ve exhausted all other options. Getting tested for what specific things you’re allergic to (whether it be grass pollen, ragweed pollen, or indoor allergens like dust and mold) can help you figure out the best course of action.

The bottom line.

Allergies are a drag, and even more so when they’re negatively affecting your sleep. Getting quality rest is essential to our overall health, so if allergies are getting you down, give these tips a try, and consider going to an allergist if you’re not seeing improvement.

If you are pregnant, breastfeeding, or taking medications, consult with your doctor before starting a supplement routine. It is always optimal to consult with a health care provider when considering what supplements are right for you.

Sarah Regan, mbg Spirituality & Relationships Writer

Sarah Regan is a Spirituality & Relationships Writer, as well as a registered yoga instructor. She received her bachelor’s in broadcasting and mass communication from SUNY Oswego.

https://www.inverse.com/mind-body/pseudo-hallucinations-test-yourself-here

WHAT ARE “PSEUDO-HALLUCINATIONS”? WHY YOU COULD HAVE A POWERFUL MIND’S EYE

Without drugs, some people can create mental images so vivid they seem real.

By Reshanne Reeder · 6 hours ago · Image: Shutterstock

CONSIDER THE STATEMENTS BELOW. What do they describe? A trip on psychedelics? A dream?

I felt I could reach through the screen to get to another place.

Lasers became entire fans of light sweeping around, and then it felt as if the screen began to expand.

I saw old stone buildings … like a castle … I was flying above it.

In reality, they are statements that different people reported after viewing the “Ganzflicker” on their computers — an intense full-screen, red-and-black flicker that anyone can access online and that we use in our experiments.

In less than ten minutes, it creates altered states of consciousness, with no lasting effects for the brain. Visual experiences set in almost as soon as you start looking at it.

But our new study, published in Cortex, shows that while some people see castles or fractals in the Ganzflicker, others see nothing. We have come up with a theory of where those individual differences come from.

Like a computer screen, the part of your brain that processes visual information (the visual cortex) has a refresh rate that helps it sample the environment — taking snapshots of the world in quick succession.

In other words, your brain collects sensory information with a certain frequency. Yet you see the world as continuous and dynamic, thanks to your brain’s sophisticated ability to fill in the blanks.

For example, your eyes have a blind spot right outside the center of vision, but you don’t see a patch of blackness everywhere you look. Your visual cortex extrapolates from the surrounding visual information so that your whole field of view appears to be complete. If the sensory information being processed is the Ganzflicker, this will interact with your brain’s own rhythms to alter how you fill in or interpret what you are seeing.

Ganzflicker is known to elicit the experience of anomalous sensory information in the external environment, called pseudo-hallucinations. “Simple” experiences — like seeing lasers or illusory colors — have previously been explained as your brain reacting to clashes between Ganzflicker and the brain’s rhythms. But how do some people see complex pseudo-hallucinations such as “old stone castles”?

CAPACITY FOR MENTAL IMAGES

The brain is composed of many different regions interacting with each other, including “low-level” sensory regions and regions that correspond to “high-level” cognitive processes.

Discriminating whether a line is vertical or horizontal, for example, is considered a low-level sensory process, whereas determining whether a face is friendly or annoyed is a high-level cognitive process. The latter is more open to interpretation.

Visual mental imagery, or the mental simulation of sensory information – the “mind’s eye” – is one of these high-level cognitive processes. High-level processes can interact with low-level processes to shape your brain’s interpretation of what you are seeing.

If someone sees simple pseudo-hallucinations in the Ganzflicker, their brains may automatically interpret that information as more meaningful or realistic with help from their mind’s eye.

Some people can’t see mental images. (Image: GoodIdeas/Shutterstock)

What most people don’t realize is that everyone’s imagery is different. Some people have imagery that is as vivid as actually seeing something in front of them. A small proportion of people have a “blind mind’s eye” and cannot even visualize the faces of their friends or family. This condition is called aphantasia, and has attracted an increasing amount of attention in the last few years. Many people are, of course, somewhere in between these extremes.

THE POWER OF GANZFLICKER

It is very difficult to describe and compare imagery experiences, since they are private, internal, subjective events. But it turns out that the Ganzflicker can help.

We discovered that imagery ability can be reflected in an individual’s description of a ten-minute experience with Ganzflicker. Almost half of people with aphantasia see absolutely nothing in the Ganzflicker. The other half see mostly simple patterns like geometric shapes or illusory colors.

Compare that to people with visual mental imagery, the majority of whom see meaningful complex objects, such as animals and faces. Some even see entire pseudo-hallucinatory environments, like a stormy beach or a medieval castle.

Going back to the idea of brain rhythms, it’s possible that people who see imagery have naturally lower-frequency rhythms in the visual cortex — closer to the Ganzflicker frequency — which makes them susceptible to experiencing pseudo-hallucinations.

People with aphantasia, on the other hand, have naturally higher-frequency rhythms in the visual cortex – which may give them a buffer against the effects of the Ganzflicker.

Our theory is that mental imagery and pseudo-hallucinations elicited by Ganzflicker are tapping into the same processes in the brain. This means that Ganzflicker captures a dynamic projection of people’s imagined experiences, like opening a window to the mind’s eye.

Ganzflicker is therefore a promising tool for understanding individual differences in mental imagery and its interaction with the visual environment.

The experiment can help people share their unique experiences with each other — ultimately bringing subjective experience into the real world.

https://analyticsindiamag.com/now-apple-introduces-a-no-code-ai-platform/

Now Apple Introduces A No-Code AI Platform

26/06/2021

  • Trinity is composed of data pipelines, an experiment management system, a user interface, and a containerised deep learning kernel.

Recently, Apple researchers, including C. V. Krishnakumar Iyer, Feili Hou, Henry Wang, Yonghong Wang, Kay Oh, Swetava Ganguli, and Vipul Pandey, have developed Trinity, a no-code AI platform for complex spatial datasets.

The platform enables machine learning researchers and non-technical geospatial specialists to experiment with domain-specific signals and datasets to solve a variety of challenges. It tailors complex spatio-temporal datasets to fit standard deep learning models (in this case, convolutional neural networks, or CNNs) and formulates disparate problems in a standard way, e.g., semantic segmentation.
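
The paper’s internals aside, the core move (turning disparate geospatial signals into fixed-size, image-like tensors that a segmentation CNN can consume) can be sketched roughly as follows; all names, grid sizes, and sample points here are illustrative, not Trinity’s API:

    import numpy as np

    def rasterize_channel(points, bounds, grid=256):
        """Bin (lon, lat, value) observations into a grid x grid raster channel."""
        lon_min, lat_min, lon_max, lat_max = bounds
        raster = np.zeros((grid, grid), dtype=np.float32)
        for lon, lat, value in points:
            col = int((lon - lon_min) / (lon_max - lon_min) * (grid - 1))
            row = int((lat - lat_min) / (lat_max - lat_min) * (grid - 1))
            raster[row, col] += value
        return raster

    # Two hypothetical signals over one map tile: device-location pings and
    # road-presence observations. Stacking their rasters yields the
    # multi-channel image tensor a standard segmentation model expects.
    bounds = (-122.5, 37.5, -122.3, 37.7)   # lon/lat box, made up
    gps_pings   = [(-122.41, 37.61, 1.0), (-122.40, 37.60, 1.0)]
    road_points = [(-122.41, 37.61, 1.0)]

    channels = [rasterize_channel(p, bounds) for p in (gps_pings, road_points)]
    model_input = np.stack(channels, axis=-1)
    print(model_input.shape)  # (256, 256, 2)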

“It creates a shared vocabulary leading to better collaboration among domain experts, machine learning researchers, data scientists, and engineers. Currently, the focus is on semantic segmentation, but it is easily extendable to other techniques such as classification, regression, and instance segmentation,” as per the paper.

Challenges

With the increase in smart devices, a high volume of data containing geo-referenced information is generated and captured. ML techniques have now entered the geospatial domain, including hyperspectral image analysis and high-resolution satellite image interpretation. However, deploying such solutions is still limited due to specific challenges, such as:

  • Processing large volumes of spatio-temporal information and applying ML solutions involves specialised skills and hence has a high barrier to entry, preventing non-technical domain specialists from solving problems on their own.
  • Solutions differ from place to place, since data from residential areas will be very different from data from commercial ones, giving rise to non-standard preprocessing, post-processing, model deployment, and maintenance workflows.
  • Engineers process data while scientists run experiments, and different problems involve a lot of back and forth, which hampers collaboration.

Trinity tackles these challenges by: 

  • Bringing information in disparate spatio-temporal datasets to a standard format by applying complex data transformations upstream. 
  • Standardising the technique of solving disparate-looking problems to avoid heterogeneous solutions.
  • Providing an easy-to-use, code-free environment for rapid experimentation, thereby lowering the barrier to entry.

It enables quick prototyping and rapid experimentation, and reduces the time to production by standardizing model building and deployment.

Tech stack

Trinity is composed of data pipelines, an experiment management system, a user interface, and a containerised deep learning kernel.

  • The platform’s feature store is maintained in S3 (Simple Storage Service). Intermediate data, inputs, and processed predictions are stored in a distributed file system (HDFS). Metadata related to the experiments, including versions of models, is stored in an instance of a PostgreSQL DB running on internal cloud infrastructure.
  • Internal compute clusters host the GPUs and CPUs.
  • Training is containerised using Docker and orchestrated by Kubernetes running on the GPU cluster, for portability and packaging. Large-scale distributed predictions are carried out on CPU clusters orchestrated by YARN.
  • TensorFlow 2.1.0 is used for training deep learning models; Spark on YARN handles data preprocessing, channel processing, label handling, etc.

The deep learning kernel is at the heart of the platform and encapsulates neural net architectures for semantic segmentation and provides for model training, evaluation, handling of metrics, and inference. The kernel is currently implemented in TensorFlow but can easily be swapped for other frameworks.
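
The paper doesn’t spell out the kernel’s architectures, but a minimal encoder-decoder for semantic segmentation in the TensorFlow/Keras stack the authors name could look like the sketch below; the layer sizes are illustrative only:

    import tensorflow as tf
    from tensorflow.keras import layers

    def tiny_segmenter(input_shape=(256, 256, 2), n_classes=2):
        """Minimal encoder-decoder: downsample, upsample, per-pixel class scores."""
        inputs = tf.keras.Input(shape=input_shape)
        x = layers.Conv2D(32, 3, activation="relu", padding="same")(inputs)
        x = layers.MaxPooling2D()(x)                    # 256 -> 128
        x = layers.Conv2D(64, 3, activation="relu", padding="same")(x)
        x = layers.UpSampling2D()(x)                    # 128 -> 256
        outputs = layers.Conv2D(n_classes, 1, activation="softmax")(x)
        return tf.keras.Model(inputs, outputs)

    model = tiny_segmenter()
    model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")
    model.summary()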