https://www.infoq.com/news/2020/04/puppeteer-3-firefox-support/

Browser-Automation Library Puppeteer Now Supports Firefox

Mathias Bynens, a Google developer working on Chrome DevTools and V8, released Puppeteer 3.0. Puppeteer now supports Firefox in addition to the Chrome browser. The new version also upgrades support to the latest Chrome 81 and removes support for Node 8.

Puppeteer is a browser test automation Node.js library that provides a high-level API to control headless Chrome or Chromium over the DevTools Protocol. As such, new versions of Puppeteer are often linked to new versions of the Chrome browser and the deprecation of old Node.js versions. This is also the case in this release. While Puppeteer 2.0 supported Chrome 79 and deprecated Node 6, Puppeteer 3.0 supports the latest Chrome browser (Chrome 81) and no longer supports Node 8.

Puppeteer 3.0, however, additionally supports Firefox, a move that is poised to increase its usage for cross-browser testing purposes. Google announced a first effort to support Mozilla’s browser (code-named puppeteer-firefox) in passing at Google I/O ’19, in the Modern Web Testing and Automation with Puppeteer talk by Andrey Lushnikov and Joel Einbinder. Both developers have since moved to Microsoft to work on Playwright, a concurrent cross-browser automation Node.js library that supports all major browser engines (Chromium, WebKit, and Firefox) in a single API. Playwright is nearing the release of its first major version, with 99% of tests passing (in v0.13).

With Firefox support transitioning to the puppeteer package, the puppeteer-firefox package has been deprecated. Puppeteer can now fetch a Firefox Nightly binary. Bynens links to an example of Firefox automation with Puppeteer, excerpted as follows:

To have Puppeteer fetch a Firefox binary for you, first run:
> PUPPETEER_PRODUCT=firefox npm install

To get additional logging about which browser binary is executed, run this example as:
> DEBUG=puppeteer:launcher NODE_PATH=../ node examples/cross-browser.js

- You can set a custom binary with the `executablePath` launcher option.

Bynens also mentions improvements in the reliability of file uploads, the switch to Mocha from the previous custom test runner, and the migration of the source code to TypeScript. Of the latter, Bynens comments:

Although this doesn’t affect the way developers can consume Puppeteer, it improves the quality of the Puppeteer type definitions which can be used by modern editors.

Developers have reacted enthusiastically on Twitter. A developer enquired:

Awesome! Is there also a way to install Firefox if puppeteer is a dependency of another package published on npm, such that npm install foo automatically installs puppeteer + matching Firefox?

Bynens answered that the requested feature would be implemented once Firefox support is no longer experimental.

More coverage of the testing ecosystem is available in the JavaScript and Web Development InfoQ Trends Report. Puppeteer is open-source software available under the Apache 2.0 license. Contributions are welcome and must respect the Puppeteer contribution guidelines.

https://www.sciencealert.com/coders-mutate-ai-systems-to-make-them-evolve-faster-than-we-can-program-them

Google Engineers ‘Mutate’ AI to Make It Evolve Systems Faster Than We Can Code Them

DAVID NIELD
17 APRIL 2020

Much of the work undertaken by artificial intelligence involves a training process known as machine learning, where AI gets better at a task such as recognising a cat or mapping a route the more it does it. Now that same technique is being used to create new AI systems, without any human intervention.

For years, engineers at Google have been working on a freakishly smart machine learning system known as AutoML (automated machine learning), which is already capable of creating AI that outperforms anything we’ve made.

Now, researchers have tweaked it to incorporate concepts of Darwinian evolution and shown it can build AI programs that continue to improve upon themselves faster than they would if humans were doing the coding.

The new system is called AutoML-Zero, and although it may sound a little alarming, it could lead to the rapid development of smarter systems – for example, neural networks designed to more accurately mimic the human brain with multiple layers and weightings, something human coders have struggled with.

“It is possible today to automatically discover complete machine learning algorithms just using basic mathematical operations as building blocks,” write the researchers in their pre-print paper. “We demonstrate this by introducing a novel framework that significantly reduces human bias through a generic search space.”

The original AutoML system is intended to make it easier for apps to leverage machine learning, and already includes plenty of automated features itself, but AutoML-Zero takes the required amount of human input way down.

Using a simple three-step process – setup, predict and learn – it can be thought of as machine learning from scratch.

The system starts off with a selection of 100 algorithms made by randomly combining simple mathematical operations. A sophisticated trial-and-error process then identifies the best performers, which are retained – with some tweaks – for another round of trials. In other words, the neural network is mutating as it goes.

When new code is produced, it’s tested on AI tasks – like spotting the difference between a picture of a truck and a picture of a dog – and the best-performing algorithms are then kept for future iteration. Like survival of the fittest.
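The setup-predict-learn loop and survival-of-the-fittest selection described above can be sketched as a toy evolutionary search. This is an illustration only, not AutoML-Zero’s actual code: the operation set, the tiny regression task, and all names and sizes here are invented for the example.

```python
import random

# Toy building blocks: an "algorithm" is a short list of indices into OPS.
OPS = [
    lambda a, b: a + b,
    lambda a, b: a - b,
    lambda a, b: a * b,
    lambda a, b: max(a, b),
]

def random_program(length=4):
    """Setup: randomly combine simple mathematical operations."""
    return [random.randrange(len(OPS)) for _ in range(length)]

def predict(program, x):
    """Predict: run the program on an input value."""
    acc = x
    for op_index in program:
        acc = OPS[op_index](acc, x)
    return acc

def fitness(program, data):
    """Evaluate: negated squared error on a toy task (higher is better)."""
    return -sum((predict(program, x) - y) ** 2 for x, y in data)

def evolve(data, population_size=100, generations=50, keep=10):
    """Trial-and-error search: retain the best performers, mutate, repeat."""
    population = [random_program() for _ in range(population_size)]
    for _ in range(generations):
        population.sort(key=lambda p: fitness(p, data), reverse=True)
        survivors = population[:keep]  # best performers are kept
        population = list(survivors)
        while len(population) < population_size:
            child = list(random.choice(survivors))
            # Mutate: tweak one randomly chosen operation.
            child[random.randrange(len(child))] = random.randrange(len(OPS))
            population.append(child)
    return max(population, key=lambda p: fitness(p, data))

# Evolve a program that approximates y = x * x on a handful of points.
data = [(x, x * x) for x in range(1, 6)]
best = evolve(data)
```

With this toy operation set, a program such as "multiply, then max three times" computes x * x exactly for positive inputs, so the search typically converges on a zero-error program within a few generations.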

And it’s fast too: the researchers reckon up to 10,000 possible algorithms can be searched through per second per processor (the more computer processors available for the task, the quicker it can work).

Eventually, this should see artificial intelligence systems become more widely used, and easier to access for programmers with no AI expertise. It might even help us eradicate human bias from AI, because humans are barely involved.

Work to improve AutoML-Zero continues, with the hope that it’ll eventually be able to spit out algorithms that mere human programmers would never have thought of. Right now it’s only capable of producing simple AI systems, but the researchers think the complexity can be scaled up rather rapidly.

“While most people were taking baby steps, [the researchers] took a giant leap into the unknown,” computer scientist Risto Miikkulainen from the University of Texas, Austin, who was not involved in the work, told Edd Gent at Science. “This is one of those papers that could launch a lot of future research.”

The research paper has yet to be published in a peer-reviewed journal, but can be viewed online at arXiv.org.

Learn More

  1. Making Machine Learning Accessible and Actionable for Clinicians
    David F. Schneider et al., JAMA Network Open, 2019
  2. Prediction and Prevention Using Deep Learning
    Surafel Tsega et al., JAMA Network Open, 2019

https://www.tomshardware.com/news/chuwis-larkbox-mini-pc-intel-celeron

Chuwi Teases Intel-Powered PC That Fits in the Palm of Your Hand

Chuwi LarkBox (Image credit: Chuwi)

Chuwi shared an image of its upcoming LarkBox mini-PC on its official Twitter account this week. The Chinese manufacturer makes a lot of interesting products, but this is the first time that it has produced something this small.

The LarkBox looks like it’ll compete in pocket-size territory with the likes of the ECS Liva Q or Q2. Chuwi’s offering features a sleek, black exterior that seems to be made out of plastic, but this isn’t confirmed. There are air vents all around the LarkBox’s body, which is important, as a device this size will rely on passive cooling.

Unfortunately, Chuwi’s image only shows the front of the LarkBox, which houses a small power button. Currently, the device’s outputs are unknown. However, Chuwi did confirm that the LarkBox employs an Intel Celeron N4100 (codename Gemini Lake) CPU.

Thanks to its 6W TDP (thermal design power), the Celeron N4100 is a popular choice for miniature devices. The 14nm processor, which lacks Hyper-Threading, provides four cores with a 1.1 GHz base clock, a 2.4 GHz boost clock, and 4MB of cache. The Celeron N4100 supports up to 8GB of RAM, depending on whether it’s DDR4 or LPDDR4. Given the LarkBox’s size, the device will likely use the latter format. We expect the LarkBox to come with 2GB or 4GB of LPDDR4 memory.

In terms of integrated graphics, the Celeron N4100 features Intel’s UHD Graphics 600 solution. The iGPU is based on the Gen9.5 architecture and arrives with 12 Execution Units (EUs) with a 200 MHz base clock and 700 MHz boost clock. Don’t underestimate the UHD Graphics 600 though; it can drive up to 4K resolution at a 60 Hz refresh rate. This suggests that the LarkBox should have one HDMI 2.0 port at the very least.

Chuwi didn’t reveal the pricing or availability for the LarkBox.

https://www.psychologytoday.com/us/blog/deviced/202004/why-video-chats-are-wearing-us-out

Why Video Chats Are Wearing Us Out

Technology is saving us in quarantine—but it’s also taking a toll. Here’s why.

Posted Apr 17, 2020

In the early days of the pandemic, as shelter-in-place orders became the norm and people rushed to find ways of staying connected, it became clear that technology would save us. We wouldn’t have to be alone while staying at home, and this was good news.

While remaining grateful for the incredible way in which our screens have offered us up to each other, fast forward five weeks and many of us are finding ourselves exhausted from the never-ending video calls and virtual experiences. It’s not that we don’t want the option of connecting in digital spaces, it’s just that we are finding them emotionally and energetically costly. We can’t quite name why, but they seem to take more energy than the face-to-face encounters we are used to. Oddly, in many cases, they actually leave us feeling lonely.

Given the novelty of our current reality, there’s no research that speaks specifically to why this is the case right now, even though we know there is precedent for it in work on online vs. embodied connectivity. We can, however, consider some basic ideas to help us evaluate how best to care for our selves and our relationships in this time of primarily digital connection.

While humans are neurodivergent in terms of sociability and interpersonal preferences, we are all sensual beings. When we encounter each other, we take in information from many senses. Certain people and their places have specific smells. Often, physical touch in one form or another is involved in an encounter. In essence, the mere physical presence of another has the power of stirring feelings and awakening all of our senses.

When we connect via screens, much of this is lost. Limited to only audio and visual sensory data, it’s easy for us to feel a sense of being “alone together.” This term, coined by Sherry Turkle, deftly describes the odd sense of anti-presence that we are talking about right now. Whether this is related to the limited context that each party sees of the other, or the simple lack of full sensory data, we cannot know.

Another dynamic that plagues video-based connection is the constant presence of one’s own image as one interacts with others. Spontaneous and authentic communication benefits from the ability to be fully in the moment, without the kind of acute self-awareness that comes with watching oneself during a conversation. For anyone with even a mild inner critic, this can have a massive impact on how one is, or is not, present in a conversation in an authentic and available way. There’s a certain kind of cognitive dissonance here: we are on the call to connect with another, but our ambient awareness of ourself redirects our attention.

Finally, the odd new way in which time moves is likely a contributor to our feelings of exhaustion and overwhelm in regards to interpersonal interaction. When we were moving about in the world, every request for a get together was made with an awareness that peoples’ calendars were full. Very likely we felt as though we had greater agency and options in responding to peoples’ requests of us.

Now that all of our connections happen from home, I find people responding to interpersonal offerings in primarily one of two ways. One response is fear, guilt, exhaustion, or resentment, which comes from feeling as though there is no “excuse” for saying no to a gathering or event, since everyone knows we’re all home and “available.” The second is an automatic “yes” to as many offerings as possible, to distract from the realities of our present situation.

It’s crucial for us to pace ourselves in this time of physical distancing. To be healthy in our relationships with our selves and with others, we must consider what is and isn’t working in this new economy. The following suggestions offer ways to enhance our digital connections and strengthen our intentionality in how we relate to self and others in this time of physical isolation.

1. Tend to your self.

Just as it’s important to don our own oxygen mask before helping others with theirs, it’s important to ask ourselves, “Will my emotional and energetic cup be filled or depleted by video conversations today?” To stay healthy, we must take our own internal well-being “temperature” before we simply say “yes” to every opportunity to connect. We must also realize that our needs and desires for connection in digital spaces may change from day to day or, even, from hour to hour.

If we are feeling overwhelmed, emotionally dysregulated, or exhausted, it may be best to decline an online connection or two in order to preference some intentional connection with our own selves. We can only pour out to others from a full cup and filling our emotional reserves takes intentional effort.

2. Consider covering the image of your self on the screen.

There is a certain level of hyper-attuned self-awareness that happens when we observe images of ourselves. When we experience this in real time during conversations, we are taken out of the moment, and part of our brain gets busy evaluating how we are presenting our selves. This makes for costly conversations.

My son suggested I cover the square containing my image with a sticky note when I commented on this dynamic recently. This single action has saved me in the past week. I encourage you to give this a whirl on platforms where you can’t hide your image; on platforms where you can, simply hide it.

3. Make intentional choices about what platforms you use to communicate within and who you choose to engage in them.

As we all know, the rise in subscriptions to services like Zoom and apps like Marco Polo has been staggering of late. It is incredible that we have these tools to help us stay connected. It is also important to think intentionally about which platform is least taxing for each conversation we hope to have.

Be sensitive to dynamics like lag or pixelated images as these make our conversations more draining. Ask others which platforms they prefer and respect their wishes if and when you can. If you are coordinating with people with less technological savvy than you, provide plenty of time to help orient them before the time of contact.

4. Consider the phone some of the time.

In this time of so many video connections, the limited focus of a phone call may actually enhance authenticity and felt/lived connection. Setting the stage for a call can enhance this even more. Brew a fragrant cup of tea and find a window to gaze out of. Resist the urge to multi-task while on the phone and bring yourself fully to the call. Notice how this feels in relation to video calls and connections.

5. Get creative about ways of increasing the authenticity and spontaneity of digital encounters.

Privilege experiences where you are sharing space with others but possibly not just sitting statically and looking at each other. I participate in a weekly movement experience with 50 other people. We all move in our own physical spaces, however we’d like, to the same music. While we can see the grid containing all the images of others, no one is focused there. It is surprisingly connecting and powerful. Similar spaces exist for art-making and meditation and, I’m sure, a million other activities. Seek these out or create these for your community.

Let’s all keep connecting as we can, tending to the ways in which we do so. We’re in this together and we’ll get through this together. Let’s do so with care.

References

Dodgen-Magee, D. (2018). Deviced! Balancing Life and Technology in a Digital World. New York, NY: Rowman & Littlefield.

Doreen Dodgen-Magee, Psy.D.

Doreen Dodgen-Magee, Psy.D., is a psychologist, author, and speaker who focuses on how technology shapes people. Far from technology averse, her research centers on moderation with technology use alongside an intentionally rich embodied life. Her book, Deviced! Balancing Life and Technology in a Digital World, was awarded the 2018 Gold Nautilus Award for Psychology and has been featured in the New York Times, Time Magazine, and the Washington Post. Doreen is a celebrated podcast guest with appearances, among others, on Getting Curious with Jonathan Van Ness and PRI’s Innovation Hub.

https://www.vancouverisawesome.com/vancouver-news/ubc-breaks-down-speaking-moistly-2263252

UBC speech scientist breaks down the threat of ‘speaking moistly’

“What’s become clear in the current situation is that we need to have researchers coming together who really understand speech and who really understand airborne pathogens.”
Photo: Face masks / Getty

Is ‘speaking moistly’ something that needs to be taken seriously?

While Canada’s top doctor, Dr. Theresa Tam, has underscored the importance of physical distancing to slow the spread of COVID-19, not everyone has taken the recommendation seriously. In fact, a recent poll finds that over a quarter of Canadians (26%) are not following the recommendations of public-health officials to their full extent.

And while the government hasn’t ordered everyone to wear a face mask or covering, it has stated that these may prevent one’s own saliva from spreading to others. In other words, ‘speaking moistly’ is a real thing.

Bryan Gick, a speech scientist with UBC Language Sciences, says that people release small particles of saliva when they speak, sneeze, cough, or even just breathe. He adds that depending on how big these particles are, they may remain airborne for quite some time or land on surfaces.

“Both of these have issues when it comes to conveying infectious pathogens,” explains Gick. “In the one case you’ve got particles that are breathed in, and in the other case you’ve got particles that can be picked up by touch.”

While talking produces less saliva than coughing or sneezing, Gick notes that talking produces it over a longer period of time. Even in a short conversation, he says that thousands of aerosol particles are being released.

“If you step into an elevator where somebody just had a conversation with family members, it may be empty but there still may be aerosol particles that you’re breathing in,” he notes.

Gick also remarks that ‘speaking moistly’ could apply to ‘super-emitters’ — people who generate more than the typical number of droplets during speech and who are potentially super-spreaders.

According to Gick, even specific sounds, languages or dialects could be the culprits.

“Aspirated sounds are sounds where you take a breath of air, generate a lot of air pressure and produce a big burst, like “pah” and “tah.” Radio hosts know about “pah” and “tah” because they produce a really loud pop in the microphone. Most dialects of French don’t have these sounds. In English you might say “please” with that burst, but in a French accent it’s a softer “p” that sounds much more like an English “b”.”

With this in mind, Gick doesn’t know whether certain languages could be more dangerous. He notes that speech isn’t associated with illness the way that coughing and sneezing are.

“What’s become clear in the current situation is that we need to have researchers coming together who really understand speech and who really understand airborne pathogens. If we’re not all working together on this, we’re not going to understand it fast enough to be able to come up with solutions.”

https://scitechdaily.com/high-tech-contact-lenses-correct-color-blindness/

High-Tech Contact Lenses Correct Color Blindness

Researchers apply metasurfaces to standard contact lenses for customizable color correction.

Researchers have incorporated ultra-thin optical devices known as metasurfaces into off-the-shelf contact lenses to correct deuteranomaly, a form of red-green color blindness. The new customizable contact lens could offer a convenient and comfortable way to help people who experience various forms of color blindness.

“Problems with distinguishing red from green interrupt simple daily routines such as deciding whether a banana is ripe,” said Sharon Karepov from Tel Aviv University in Israel, a member of the research team. “Our contact lenses use metasurfaces based on nano-metric size gold ellipses to create a customized, compact and durable way to address these deficiencies.”

In The Optical Society (OSA) journal Optics Letters, Karepov and colleagues report that, based on simulations of color vision deficiency, their new metasurface-based contact lens can restore lost color contrast and improve color perception up to a factor of 10.

View from Metasurface Contact Lenses

The approach used to introduce new and tailor-designed functionalities to contact lenses could be expanded to help other forms of color vision deficiency and even other eye disorders, according to the researchers.

Customized correction

Deuteranomaly, which occurs mostly in men, is a condition in which the photoreceptor responsible for detecting green light responds to light associated with redder colors. Scientists have known for more than 100 years that this vision problem can be improved by reducing detection of the excessively perceived color, but achieving this correction in a comfortable and compact device is challenging.

“Glasses based on this correction concept are commercially available, however, they are significantly bulkier than contact lenses,” said Karepov. “Because the proposed optical element is ultrathin and can be embedded into any rigid contact lens, both deuteranomaly and other vision disorders such as refractive errors can be treated within a single contact lens.”

To solve this problem, the researchers turned to metasurfaces — artificially fabricated thin films designed with specific optical properties. Metasurfaces made of nanoscale gold ellipses have been extensively studied in the past few decades and can be designed to achieve specific effects on the light transmitted through them. However, the researchers needed a way to get metasurfaces, which are conventionally made on flat surfaces, onto the curved surfaces of contact lenses.

“We developed a technique to transfer metasurfaces from their initial flat substrate to other surfaces such as contact lenses,” said Karepov. “This new fabrication process opens the door for embedding metasurfaces into other non-flat substrates as well.”

From a flat to curved surface

The researchers tested the optical response of the metasurface after every step of the new fabrication procedure and acquired microscopy images to closely examine the structure of the metasurface. Their measurements showed that the metasurface’s light manipulation properties did not change after transfer to the curved surface, indicating that the process was successful.

The researchers then used a standard simulation of color perception to quantify the deuteranomaly color perception before and after introducing the optical element. They found an improvement of up to a factor of 10 and showed that visual contrast lost due to deuteranomaly was essentially fully restored.

Although clinical testing would be needed before the contact lenses could be marketed, the researchers say that manufacturers could embed the metasurface during the molding stage of contact lens fabrication or thermally fuse them to a rigid contact lens. They plan to keep studying and improving the metasurface transfer process and test it for other applications.

Reference: “Metasurface based contact lenses for color vision deficiency” by Sharon Karepov and Tal Ellenbogen, 5 March 2020, Optics Letters.
DOI: 10.1364/OL.384970

Funding for the research was provided by the generosity of Eric and Wendy Schmidt by recommendation of the Schmidt Futures program.

https://insideevs.com/news/410349/tesla-truck-changes-musk/amp/

https://newatlas.com/science/fat-storage-olfactory-scents-smells/

New research shows how specific odors can turn fat storage on or off

Baylor research has identified the pathways through which the presence or absence of certain smells can trigger or switch off fat storage in the intestines

Olfactory signals can switch fat storage mechanisms on and off without having any effect on appetite or eating habits, says a Baylor research team that’s traced the way olfactory nerves regulate fat metabolism in C. elegans worms.

The worms were chosen due to the simplicity of their olfactory systems. C. elegans carry just three pairs of olfactory neurons, using combinations of them to detect a small range of smells that are handy for worms. The human system is similar in structure but vastly more complex, with somewhere between 10 and 20 million olfactory receptor neurons, and can distinguish between a much broader palette of scents fair and foul.

The research team, led by Dr. Ayse Sena Mutlu, a postdoctoral fellow at Baylor’s Huffington Center on Aging, used optogenetic light stimulation to activate individual smell neurons in the worms, tracing the effects through selective neural circuits to neuroendocrine pathways that control fat storage mechanisms in the intestine.

Stimulated Raman scattering microscopy image of the worm C. elegans. The image shows fat storage tissue – yellow pixels show high fat levels – with the olfactory AWC neuron pseudo-colored in blue. In the background are the chemical structures of different odorants.
Dr. Ayse Sena Mutlu/Dr. Meng Meng lab

Placed alongside a group of untreated control worms, the nerve-stimulated worms showed no significant difference in the amounts they ate, moved or defecated, but showed significantly changed fat content levels in their anterior intestine, indicating that the presence or absence of certain smells had the power to control fat metabolism.

We may have to watch not only what we eat, but what we smell

“Although more research is needed, it is possible that certain scents might trigger changes in fat metabolism resulting in weight loss,” said Dr. Meng Wang, professor of molecular and human genetics, a member of the Huffington Center on Aging and a Howard Hughes Medical Institute investigator at Baylor. “We may have to watch not only what we eat, but what we smell.”

The research, said Dr. Mutlu, also pointed to a possible mechanism for the link between neurodegenerative diseases such as Alzheimer’s and obesity, if the degeneration of olfactory nerves causes them to stop sending critical messages to the gut.

The research was published in Nature Communications.

Source: Baylor College of Medicine

https://www.androidpolice.com/2020/04/17/months-of-research-finally-crack-android-malware-that-could-even-survive-factory-resets/

Months of research finally crack Android malware that could even survive factory resets

 

Earlier this year, a story made the rounds of a new Android malware that persisted between factory resets, called xHelper. At the time, we didn’t know how it accomplished that, but security researchers have since dug into its inner workings, revealing an incredibly sophisticated system that installs itself to an Android phone’s system partition, and even changes how the system works to prevent it from being “easily” removed.

The details come courtesy of a Kaspersky researcher (spotted by Ars Technica), who discovered that the malware downloads a rootkit that primarily affects Android versions 6-7 — somehow affecting Chinese phones more than others. Once it has root privileges, it directly installs malware to the system partition that is capable of re-infecting the phone at any time, and it’s especially pernicious and difficult to remove.

That’s because the system partition usually can’t be written to. During normal system operation, it’s mounted as read-only, so a user can’t simply uninstall an app to get rid of all the malware’s many tendrils; they’re buried deep inside, together with the components your phone needs to work. Furthermore, the files the malware writes are given an additional immutable attribute, so even a rooted user in the know can’t easily muck them out. But that’s not the only trick up xHelper’s sleeve: it also modifies an internal Android system library (libc) to disable mounting the system partition in write mode at all, and outright uninstalls root-friendly apps like Superuser that might make the process a little easier.

You can remove the malware via recovery — either by completely reflashing the device with stock images or through a more tedious replacement of the compromised system components for manual removal — but even then, some of the factory images for these cheap Chinese devices come loaded with malware which simply pulls down xHelper all over again. The only real way to win in that case is to flash a more secure ROM (if one is even available) or replace the phone.

Malwarebytes previously claimed that the Play Store was somehow serving as an avenue of xHelper’s reinfection, though concrete proof of those claims never surfaced, and a Google representative couldn’t confirm Malwarebytes’ theories when we spoke to them earlier this year.

Estimates of the number of phones infected by xHelper have ranged from around 45,000 down to 33,000, but again, only devices running older, less secure versions of Android should be susceptible to the rootkit exploits used. (If you’re using such an Android phone, please try to upgrade to something more secure if you can; it’s in your best interests.) Odds are you aren’t affected, but it’s still fascinating to see the level of ingenuity used by malware these days.

https://www.technologyreview.com/2020/04/17/1000092/ai-machine-learning-watches-social-distancing-at-work/

Machine learning could check if you’re social distancing properly at work

Andrew Ng’s startup Landing AI has created a new workplace monitoring tool that issues an alert when anyone is less than the desired distance from a colleague.

Six feet apart: On Thursday, the startup released a blog post with a demo video showing off its new social distancing detector. On the left is a feed of people walking around on the street. On the right, a bird’s-eye diagram represents each one as a dot, and turns them bright red when they move too close to someone else. The company says the tool is meant to be used in work settings like factory floors, and was developed in response to requests from its customers (which include Foxconn). It also says the tool can easily be integrated into existing security camera systems, but that it is still exploring how to notify people when they break social distancing. One possible method is an alarm that sounds when workers pass too close to one another. A report could also be generated overnight to help managers rearrange the workspace, the company says.

Under the hood: The detector must first be calibrated to map any security footage against the real-world dimensions. A trained neural network then picks out the people in the video, and another algorithm computes the distances between them.
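The distance-checking step of that pipeline (people reduced to points on a calibrated ground plane, pairwise distances compared against a threshold) can be reduced to a short sketch. The detection and calibration stages are stubbed out here: the names, coordinates, and six-foot threshold are illustrative assumptions, not Landing AI’s implementation.

```python
import math
from itertools import combinations

# Assume a person detector plus camera calibration has already mapped each
# person in the frame to ground-plane coordinates, in feet.
people = {
    "A": (0.0, 0.0),
    "B": (4.0, 3.0),    # 5 ft from A: too close
    "C": (20.0, 10.0),  # well clear of everyone
}

SAFE_DISTANCE_FT = 6.0

def too_close(positions, threshold=SAFE_DISTANCE_FT):
    """Return every pair of people closer than the safe-distance threshold."""
    violations = []
    for (name_a, pos_a), (name_b, pos_b) in combinations(positions.items(), 2):
        if math.dist(pos_a, pos_b) < threshold:
            violations.append((name_a, name_b))
    return violations

print(too_close(people))  # flags the A-B pair
```

A real-time alert or the overnight report the company describes would then be driven by this list of flagged pairs, recomputed on every processed frame.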

Workplace surveillance: The concept is not new. Earlier this month, Reuters reported that Amazon is also using similar software to monitor the distances between its warehouse staff. The tool also joins a growing suite of technologies that companies are increasingly using to surveil their workers. There are now myriad cheap off-the-shelf AI systems that firms can buy to watch every employee in a store, or listen to every customer service representative on a call. Like Landing AI’s detector, these systems flag up warnings in real time when behaviors deviate from a certain standard. The coronavirus pandemic has only accelerated this trend.

Dicey territory: In its blog post, Landing AI emphasizes that the tool is meant to keep “employees and communities safe,” and should be used “with transparency and only with informed consent.” But the same technology can also be abused or used to normalize more harmful surveillance measures. When examining the growing use of workplace surveillance in its annual report last December, the AI Now research institute also pointed out that in most cases, workers have little power to contest such technologies. “The use of these systems,” it wrote, “pools power and control in the hands of employers and harms mainly low-wage workers (who are disproportionately people of color).” Put another way, it makes an existing power imbalance even worse.