Manifest V3 is the latest specification for building Chrome extensions. The update was controversial in that it affected ad blockers, but Google maintained that privacy was the priority. Mozilla announced yesterday that it would support Manifest V3 extensions in Firefox to “maintain a high degree of compatibility to support cross-browser development.”
In implementing Manifest V3, Mozilla “will diverge from Chrome’s implementation where we think it matters and our values point to a different solution.” The most notable change relates to the new declarativeNetRequest (DNR) API.
It replaces the webRequest API that, according to Google, provides “access to potentially sensitive user data,” but is used by popular ad blockers. Mozilla’s solution is to continue allowing the previous approach and add support for the new one so that developers can “choose the approach that works best for them and their users.” The Chrome team has said it supports ad blockers on the platform and made changes to Manifest V3 in response to feedback.
After discussing this with several content blocking extension developers, we have decided to implement DNR and continue maintaining support for blocking webRequest. Our initial goal for implementing DNR is to provide compatibility with Chrome so developers do not have to support multiple code bases if they do not want to.
We will support blocking webRequest until there’s a better solution which covers all use cases we consider important, since DNR as currently implemented by Chrome does not yet meet the needs of extension developers.
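To make the difference concrete, here is a minimal sketch of how an ad blocker might express “block requests to an ad server” under each API. This is illustrative code only, not from any real extension; the hostname is made up, and both calls use the standard WebExtensions interfaces.

```ts
// Old approach: a blocking webRequest listener. The extension's own code
// inspects every request in flight and decides whether to cancel it —
// which is why Google flags it as access to potentially sensitive data.
browser.webRequest.onBeforeRequest.addListener(
  (details) => {
    if (details.url.includes("ads.example.com")) {
      return { cancel: true }; // block the request
    }
    return {};
  },
  { urls: ["<all_urls>"] },
  ["blocking"]
);

// New approach: declarativeNetRequest. The extension registers rules up
// front and the browser applies them itself; the extension never sees
// the traffic.
chrome.declarativeNetRequest.updateDynamicRules({
  removeRuleIds: [1],
  addRules: [
    {
      id: 1,
      priority: 1,
      action: { type: "block" },
      condition: {
        urlFilter: "||ads.example.com^",
        resourceTypes: ["script", "image"],
      },
    },
  ],
});
```

The trade-off Mozilla is preserving is visible here: DNR is more private because the blocking logic runs inside the browser, but it can only express what the rule syntax allows, while a webRequest listener can run arbitrary filtering code.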
Elsewhere, the Firefox maker agrees with Google’s decision to stop extensions from keeping a full page open in the background just so they can run. Instead, the browser will support service workers to handle background tasks and event handling.
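In manifest terms, the change looks roughly like this. These are hypothetical manifest fragments, not taken from any shipping extension.

Manifest V2 — a persistent background page kept open for the extension’s lifetime:

```json
{ "background": { "page": "background.html", "persistent": true } }
```

Manifest V3 — an event-driven service worker the browser can start and stop on demand:

```json
{ "background": { "service_worker": "background.js" } }
```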
Mozilla will also implement cross-origin protections to aid cookie privacy and implement a feature similar to Chrome’s that lets end users control what sites extensions can be active on.
Firefox will let developers start testing Manifest V3 support in Q4 2021 and will begin accepting Manifest V3 extensions early next year. Since this is a “large platform project,” the “schedule may be pushed back or delayed due to unforeseeable circumstances.”
Summary: New research indicates the existence of an unconscious iconic memory store that supports predictions made by the global workspace theory of consciousness. It also shows that visual masking does not erase memory traces of masked stimuli but only limits conscious access.
Source: Damian Pang / University of Pennsylvania
Visual masking renders briefly presented images invisible. New research published in Scientific Reports shows that repeating strongly masked stimuli can elicit conscious perception.
Visual masking was long held to erase or overwrite memory traces of an image that is briefly presented. This new study indicates that memory traces are likely retained for brief periods of time but with limited conscious access. Visual data can accumulate in the absence of conscious awareness in this memory buffer store and elicit clear perception when sufficient evidence is available.
The global workspace theory of consciousness suggests that perception is the result of bottom-up and top-down processes: Top-down directed attention and bottom-up stimulus strength both play an important part in what ultimately enters awareness.
This process requires information to be stored subliminally and has led to predictions for the existence of such a subliminal memory buffer store that lasts at least a few hundreds of milliseconds. This new research offers empirical evidence in support of such a buffer store.
This research by Damian Pang and Stamatis Elntib found that this newly described memory buffer is time-sensitive. Meaningful extraction becomes severely compromised after around 300 milliseconds and is almost completely lost after 700 milliseconds. This time course is strikingly similar to the duration of iconic memory.
This new study shows that repeating masked visual information can elicit clear perception at short repetition intervals. (Image Credits: Damian Pang, incorporating a photo in the public domain by Aleksey Kuprikov).
While this memory buffer conforms to iconic memory in many ways, the stark difference is that its content can be unconscious. For this reason, this memory store does not seem to fit into any existing memory classification but conforms to the theoretical predictions made by the global workspace theory.
This study also shows that perception of a visual stimulus can be controlled to a very high degree when masked and repeated by varying the repetition interval. Future research in the areas of perception, consciousness, memory, and attention could employ this method to control awareness.
If you use Alexa, Echo, or any other Amazon device, you have only 10 days to opt out of an experiment that leaves your personal privacy and security hanging in the balance.
On June 8, the merchant, Web host, and entertainment behemoth will automatically enroll the devices in Amazon Sidewalk. The new wireless mesh service will share a small slice of your Internet bandwidth with nearby neighbors who don’t have connectivity, and in turn will let you use a slice of their bandwidth when you don’t have a connection.
By default, Amazon devices including Alexa, Echo, Ring, security cams, outdoor lights, motion sensors, and Tile trackers will enroll in the system. And since only a tiny fraction of people take the time to change default settings, that means millions of people will be co-opted into the program whether they know anything about it or not. Amazon’s Sidewalk webpage says the service “is currently only available in the US.”
The webpage also states:
What is Amazon Sidewalk?
Amazon Sidewalk is a shared network that helps devices work better. Operated by Amazon at no charge to customers, Sidewalk can help simplify new device setup, extend the low-bandwidth working range of devices to help find pets or valuables with Tile trackers, and help devices stay online even if they are outside the range of their home wifi. In the future, Sidewalk will support a range of experiences from using Sidewalk-enabled devices, such as smart security and lighting and diagnostics for appliances and tools.
How will Amazon Sidewalk impact my personal wireless bandwidth and data usage?
The maximum bandwidth of a Sidewalk Bridge to the Sidewalk server is 80Kbps, which is about 1/40th of the bandwidth used to stream a typical high definition video. Today, when you share your Bridge’s connection with Sidewalk, total monthly data used by Sidewalk, per account, is capped at 500MB, which is equivalent to streaming about 10 minutes of high definition video.
Why should I participate in Amazon Sidewalk?
Amazon Sidewalk helps your devices get connected and stay connected. For example, if your Echo device loses its wifi connection, Sidewalk can simplify reconnecting to your router. For select Ring devices, you can continue to receive motion alerts from your Ring Security Cams and customer support can still troubleshoot problems even if your devices lose their wifi connection. Sidewalk can also extend the working range for your Sidewalk-enabled devices, such as Ring smart lights, pet locators or smart locks, so they can stay connected and continue to work over longer distances. Amazon does not charge any fees to join Sidewalk.
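Taking the figures Amazon quotes above at face value, the arithmetic is easy to sanity-check. This is a back-of-the-envelope sketch; the video bitrate is simply the one implied by Amazon’s own “1/40th” comparison.

```ts
// Sanity-check of Amazon's published Sidewalk numbers.
const bridgeKbps = 80;     // max bandwidth from a Sidewalk Bridge to the server
const monthlyCapMB = 500;  // monthly per-account data cap

// "1/40th of a typical HD stream" implies an HD bitrate of about:
const impliedHdMbps = (bridgeKbps * 40) / 1000;
console.log(`Implied HD bitrate: ${impliedHdMbps} Mbps`); // 3.2 Mbps

// How long could a Bridge run at its maximum rate before hitting the cap?
const capKilobits = monthlyCapMB * 8 * 1000; // MB -> kilobits (decimal units)
const hoursAtFullRate = capKilobits / bridgeKbps / 3600;
console.log(`Hours at full rate per month: ${hoursAtFullRate.toFixed(1)}`); // ~13.9
```

In other words, even saturated around the clock, the cap limits a household’s contribution to under 14 hours of full-rate traffic a month — the privacy questions, not the bandwidth cost, are the real concern.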
Amazon has published a white paper detailing the technical underpinnings and service terms that it says will protect the privacy and security of this bold undertaking. To be fair, the paper is fairly comprehensive, and so far no one has pointed out specific flaws that undermine the encryption or other safeguards being put in place. But there are enough theoretical risks to give users pause.
Wireless technologies like Wi-Fi and Bluetooth have a history of being insecure. Remember WEP, the encryption scheme that protected Wi-Fi traffic from being monitored by nearby parties? It was widely used for four years before researchers exposed flaws that made decrypting data relatively easy for attackers. WPA, the technology that replaced WEP, is much more robust, but it also has a checkered history.
If industry-standard wireless technologies have such a poor track record, why are we to believe a proprietary wireless scheme will have one that’s any better?
The omnipotent juggernaut
Next, consider the wealth of intimate details Amazon devices are privy to. They see who knocks on our doors, and in some homes they peer into our living rooms. They hear the conversations we’re having with friends and family. They control locks and other security systems in our home.
Extending the reach of all this encrypted data to the sidewalk and living rooms of neighbors requires a level of confidence that’s not warranted for a technology that’s never seen widespread testing.
Last, let’s not forget who’s providing this new way for everyone to share and share alike. As independent privacy researcher Ashkan Soltani puts it: “In addition to capturing everyone’s shopping habits (from amazon.com) and their internet activity (as AWS is one of the most dominant web hosting services)… now they are also effectively becoming a global ISP with a flick of a switch, all without even having to lay a single foot of fiber.”
Amazon’s decision to make Sidewalk an opt-out service rather than an opt-in one is also telling. The company knows the only chance of the service gaining critical mass is to turn it on by default, so that’s what it’s doing. Fortunately, turning Sidewalk off is relatively painless. It involves:
Opening the Alexa app
Opening More and selecting Settings
Selecting Account Settings
Selecting Amazon Sidewalk
Turning Amazon Sidewalk Off
No doubt, the benefits of Sidewalk for some people will outweigh the risks. But for the many, if not the vast majority of users, there’s little upside and plenty of downside. Amazon representatives didn’t respond to a request for comment.
Relying on caffeine to get you through the day isn’t always the answer, according to a new study.
The researchers assessed how effective caffeine was in counteracting the negative effects of sleep deprivation on cognition. As it turns out, caffeine can only get you so far.
More than 275 participants were asked to complete a simple attention task as well as a more challenging “placekeeping” task that required completion of tasks in a specific order without skipping or repeating steps.
The study is the first to investigate the effect of caffeine on placekeeping after a period of sleep deprivation.
“We found that sleep deprivation impaired performance on both types of tasks and that having caffeine helped people successfully achieve the easier task. However, it had little effect on performance on the placekeeping task for most participants,” says Kimberly Fenn, associate professor of psychology from Michigan State University’s Sleep and Learning Lab.
“Caffeine may improve the ability to stay awake and attend to a task, but it doesn’t do much to prevent the sort of procedural errors that can cause things like medical mistakes and car accidents,” she adds.
“Caffeine increases energy, reduces sleepiness, and can even improve mood, but it absolutely does not replace a full night of sleep,” Fenn says.
“Although people may feel as if they can combat sleep deprivation with caffeine, their performance on higher-level tasks will likely still be impaired. This is one of the reasons why sleep deprivation can be so dangerous.”
Fenn says that the study has the potential to inform both theory and practice.
“If we had found that caffeine significantly reduced procedural errors under conditions of sleep deprivation, this would have broad implications for individuals who must perform high stakes procedures with insufficient sleep, like surgeons, pilots, and police officers,” Fenn says. “Instead, our findings underscore the importance of prioritizing sleep.”
Would you believe us if we told you that the sentence you’re currently reading was written by an AI-driven writing machine? Sounds unbelievable, doesn’t it? Well, in today’s world of Flippy the burger-flipping robot and flying cars (well, we’re almost there), just about anything is possible.
In February 2017, 28-year old Dong Kim, one of the world’s greatest poker players, sat at a casino to play against a machine for twenty straight days. Just like every other person in the room, he was confident that he would floor his opponent. However, Libratus – the AI machine – floored Kim and three other top players.
This event, regardless of how ludicrous it may sound, triggered an exciting and yet dubious realization. In the next few years, AI may take over several industrial sectors. In fact, the average student may have to depend on an AI-driven college essay writing service to write their academic papers for them.
But how feasible is this theory? Let’s find out.
Reading the Handwriting on the Wall: AI’s Role in Writing
Over the years, a number of books and creative pieces have been written with the help of machine learning. For instance, Dinner Depression by Julia Joy Raffel was developed completely from machine learning.
So how does AI technology pull off this seemingly incredible feat? Well, the most popular approach seems to be Natural Language Generation (NLG).
NLG is a software process that turns structured data into written narratives. In simpler terms, it takes data and transforms it into a human-sounding written narrative.
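As a toy illustration of that data-to-text idea — real NLG systems use trained language models, whereas this hypothetical sketch hard-codes a single template — consider turning a sports score into a sentence:

```ts
// Toy data-to-text generation: turn structured data into a readable sentence.
// Real NLG learns this mapping from data; the template here only illustrates
// the input/output relationship described above.
interface GameResult {
  home: string;
  away: string;
  homeScore: number;
  awayScore: number;
}

function describe(r: GameResult): string {
  const [winner, loser] =
    r.homeScore >= r.awayScore ? [r.home, r.away] : [r.away, r.home];
  const margin = Math.abs(r.homeScore - r.awayScore);
  const verb = margin > 10 ? "crushed" : "edged out"; // vary word choice by data
  const hi = Math.max(r.homeScore, r.awayScore);
  const lo = Math.min(r.homeScore, r.awayScore);
  return `The ${winner} ${verb} the ${loser}, ${hi}-${lo}.`;
}

console.log(describe({ home: "Lions", away: "Bears", homeScore: 31, awayScore: 17 }));
// -> "The Lions crushed the Bears, 31-17."
```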
Apart from the use of NLG in creative writing, another way through which AI has been integrated into the writing industry is via predictive text. If you’ve ever had a predictive text generator fill out certain blanks for you when writing an essay, then you’ve had a slight taste of the AI’s role in writing.
Benefits of Using Machine Learning to Write Academic Papers
Although AI has not yet been fully accepted by the global writing community, there are several perks that come with the concept of AI writing. For starters, let’s take a quick look at the intricacies involved in writing an essay or academic paper.
“Writing is a hellish task, best snuck up on, whacked on the head, robbed and left for dead.”
This famous quote by Ann-Marie MacDonald, the author of The Way the Crow Flies, gives us an insight into how difficult and time-consuming writing can be for both professional writers and college students. For instance, a 600-word essay may take between two and five hours to write, depending on how simple or complex it is.
However, AI can help you get more work done in significantly less time. It also eliminates the stress that comes with researching, collating, and transforming data into narratives. As such, many modern organizations are now utilizing AI in content creation. For example, The Washington Post has been experimenting with automated journalism with the use of a bot that analyzes data and then puts together news stories.
The Dismal Side of AI Writing
While there are several benefits to using machine learning in writing, it’s important to note that AI-generated writing lacks one important feature: the human touch. Since writing bots carry out functions based on the grammatical rules they are programmed with, they do not leave room for creativity and poetic license.
Although AI technology churns out grammatically correct, informative pieces, the finished work is often bland and lacks the extra spice and creativity that you’d get from a piece written by a human writer.
Will Machines Write Academic Papers Instead Of Students In The Future?
So here we are, facing the big question. Despite all the hype surrounding AI writers, it is unlikely that machine learning will fully overtake students in essay writing – unless the academic world lowers its standards for creativity.
Although AI writers can mimic human intelligence and transform data into narrative at lightning speed, they lack the creativity, inspiration, and emotions that you can get from a student. Robots will always be robotic and, as such, will continue to churn out mechanical, stilted pieces.
Storytelling is an art reserved solely for humans and unless the technological industry experiences a dramatic shift, it may take years before AI learns to imitate this aspect of human intelligence.
In the art and craft of writing, human imagination and creativity are integral parts that can never be replaced. Remember how you once began an essay with a childhood experience? An AI writer will definitely not be able to pull off that imaginative feat. After all, robots may have a voice, but they have no soul.
The Bottom Line
In today’s world, tools like Microsoft Word AI predictive text can help students churn out better writing. However, many experts believe that machine learning cannot substitute for the human touch: human writers bring subtlety and creativity, and form strong emotional bonds with the reader. At best, AI writers can only serve as collaborative tools for students who want to improve their writing.
Amanda Dudley – Author’s Bio
Amanda Dudley is a professional lecturer and academic writer at a college essay writing service. She earned her Ph.D. from Stanford and is a native English speaker with a basic knowledge of German. Today, Amanda works with graduates and undergraduates, providing academic assistance with history-related courses. She also assists students with disabilities.
By WASHINGTON UNIVERSITY IN ST. LOUIS MAY 29, 2021
Credit: Washington University In St. Louis
New Tool Combines Ultrasound, Genetics to Activate Deep Brain Neurons
Neurological disorders such as Parkinson’s disease and epilepsy have had some treatment success with deep brain stimulation, but those treatments require surgical device implantation. A multidisciplinary team at Washington University in St. Louis has developed a new brain stimulation technique using focused ultrasound that is able to turn specific types of neurons in the brain on and off and precisely control motor activity without surgical device implantation.
The team, led by Hong Chen, assistant professor of biomedical engineering in the McKelvey School of Engineering and of radiation oncology at the School of Medicine, is the first to provide direct evidence of noninvasive, cell-type-specific activation of neurons in the brain of a mammal by combining an ultrasound-induced heating effect with genetics, a technique they have named sonothermogenetics. It is also the first work to show that the ultrasound-genetics combination can robustly control behavior by stimulating a specific target deep in the brain.
Results of the three years of research, which was funded in part by the National Institutes of Health’s BRAIN Initiative, were published online in Brain Stimulation on May 10, 2021.
The lab of Hong Chen, assistant professor in biomedical engineering at the McKelvey School of Engineering, Washington University in St. Louis, has produced the first work to show that sonothermogenetics can control behavior by stimulating a specific target deep in the brain. Credit: Chen Ultrasound Lab
The senior research team included renowned experts in their fields from both the McKelvey School of Engineering and the School of Medicine, including Jianmin Cui, professor of biomedical engineering; Joseph P. Culver, professor of radiology, of physics and of biomedical engineering; Mark J. Miller, associate professor of medicine in the Division of Infectious Diseases in the Department of Medicine; and Michael Bruchas, formerly of Washington University, now professor of anesthesiology and pharmacology at the University of Washington.
“Our work provided evidence that sonothermogenetics evokes behavioral responses in freely moving mice while targeting a deep brain site,” Chen said. “Sonothermogenetics has the potential to transform our approaches for neuroscience research and uncover new methods to understand and treat human brain disorders.”
Using a mouse model, Chen and the team delivered a viral construct containing TRPV1 ion channels to genetically selected neurons. Then they delivered small bursts of heat via low-intensity focused ultrasound to the selected neurons in the brain via a wearable device. The heat, only a few degrees warmer than body temperature, activated the TRPV1 ion channel, which acted as a switch to turn the neurons on or off.
“We can move the ultrasound device worn on the head of free-moving mice around to target different locations in the whole brain,” said Yaoheng Yang, first author of the paper and a graduate student in biomedical engineering. “Because it is noninvasive, this technique has the potential to be scaled up to large animals and potentially humans in the future.”
The work builds on research conducted in Cui’s lab that was published in Scientific Reports in 2016. Cui and his team found for the first time that ultrasound alone can influence ion channel activity and could lead to new and noninvasive ways to control the activity of specific cells. In their work, they found that focused ultrasound modulated the currents flowing through the ion channels on average by up to 23%, depending on channel and stimulus intensity. Following this work, researchers found close to 10 ion channels with this capability, but all of them are mechanosensitive, not thermosensitive.
The work also builds on the concept of optogenetics, the combination of the targeted expression of light-sensitive ion channels and the precise delivery of light to stimulate neurons deep in the brain. While optogenetics has increased discovery of new neural circuits, it is limited in penetration depth due to light scattering and requires surgical implantation of optical fibers.
Sonothermogenetics has the promise to target any location in the mouse brain with millimeter-scale resolution without causing any damage to the brain, Chen said. She and the team continue to optimize the technique and further validate their findings.
Reference: “Sonothermogenetics for noninvasive and cell-type specific deep brain neuromodulation” by Yaoheng Yang, Christopher Pham Pacia, Dezhuang Ye, Lifei Zhu, Hongchae Baek, Yimei Yue, Jinyun Yuan, Mark J. Miller, Jianmin Cui, Joseph P. Culver, Michael R. Bruchas and Hong Chen, 10 May 2021, Brain Stimulation. DOI: 10.1016/j.brs.2021.04.021
This work was supported by the National Institutes of Health (NIH) BRAIN Initiative (R01MH116981) and NIBIB (R01EB027223 and R01EB030102). This work was supported by the Hope Center Viral Vectors Core at Washington University School of Medicine.
The Apple Watch Series 3 is becoming a problem for Apple. Updating the smartwatch is getting messier and messier but, at the same time, having a $199 Watch option is an incredible deal for a brand like Apple. So what should the company do about the Apple Watch Series 3?
A few weeks ago, 9to5Mac’s Filipe Espósito said Apple should discontinue the 2017 Apple Watch Series 3 because it’s become “nearly impossible to install watchOS updates on Series 3 without having to restore the entire device first.” And he is right.
He also points out that the Series 3 has become a problem for developers:
In addition to not delivering the experience users expect, Series 3 also upsets some developers who are forced to support the old display form factor in their apps — even if they no longer run reasonably well in terms of performance on Series 3. If the product can barely be updated without relying on tricks and advanced settings, why does Apple still sell it?
And with Filipe’s question, I’ll try to answer why having the Apple Watch Series 3 around is still a good deal for Apple, although not for new customers.
What could Apple do about the Apple Watch Series 3?
The Apple Watch Series 3 is almost 4 years old but it still looks good for the average customer. It’s relatively cheap and for those who don’t care about all the fancy things the Apple Watch Series 6 is able to perform, it’s almost a steal for $199.
So if you own the Cellular version of the Series 3, you’re probably expecting Apple to give it at least another year of software updates with watchOS 8. At the same time, if you own – or plan to buy – a Series 3 from Apple right now, you’ll be really frustrated about needing to restore your brand new Watch just to update it every other month. This is why we can’t recommend anyone buy the Apple Watch Series 3 right now.
The best thing Apple could do now is to discontinue the Apple Watch Series 3, give it another year of watchOS updates, and lower Apple Watch SE prices by around 10%. This would be enough to sell an amazing Watch for $250, only $50 more than the outdated Series 3.
It would be a win-win situation for users and Apple. There’s also precedent since Apple lowered prices of the original Apple Watch before announcing Series 2. Another thing the company could do, apart from letting users know they should restore their Series 3 to update it, is to offer a software update that needs less storage. We all know the Series 3 can’t handle multiple complications, so why not just offer an update with a few features inside of it?
Wrap up: the Apple Watch SE is the next big deal
As we are only a few days away from WWDC 2021, Apple could announce what it plans to do about the Apple Watch Series 3 very soon. One thing is for sure: if Apple keeps silent, things could get worse for the company. The Apple Watch Series 3 brought great additions to the Watch lineup, but its lack of storage is now a bigger problem than any benefit of keeping it around for customers who want to try an Apple Watch for the first time.
With less than $100 separating this Watch from the much faster and more capable Apple Watch SE, the SE should be the company’s focus right now. What do you think Apple should do about the Apple Watch Series 3? Tell us in the comment section below.
Summary: Researchers have identified significant differences in gene activity between the anterior and posterior areas of the hippocampus. Genes associated with depression and other mood disorders are more active in the anterior hippocampus, while genes linked to cognitive disorders, such as ASD, are more active in the posterior hippocampus.
Source: UT Southwestern Medical Center
A study of gene activity in the brain’s hippocampus, led by UT Southwestern researchers, has identified marked differences between the region’s anterior and posterior portions.
The findings, published today in Neuron, could shed light on a variety of brain disorders that involve the hippocampus and may eventually help lead to new, targeted treatments.
“These new data reveal molecular-level differences that allow us to view the anterior and posterior hippocampus in a whole new way,” says study leader Genevieve Konopka, Ph.D., associate professor of neuroscience at UTSW.
She and study co-leader Bradley C. Lega, M.D., associate professor of neurological surgery, neurology, and psychiatry, explain that the human hippocampus is typically considered a uniform structure with key roles in memory, spatial navigation, and regulation of emotions. However, some research has suggested that the two ends of the hippocampus – the anterior, which points downward toward the face, and the posterior, which points upward toward the back of the head – take on different jobs.
Scientists have speculated that the anterior hippocampus might be more important for emotion and mood, while the posterior hippocampus might be more important for cognition. However, says Konopka, a Jon Heighten Scholar in Autism Research, researchers had yet to explore whether differences in gene activity exist between these two halves.
For the study, Konopka and Lega, both members of the Peter O’Donnell Jr. Brain Institute, and their colleagues isolated samples of both the anterior and posterior hippocampus from five patients who had the structure removed to treat epilepsy.
Seizures often originate from the hippocampus, explains Lega, who performed the surgeries. Although brain abnormalities trigger these seizures, microscopic analysis suggested that the tissues used in this study were anatomically normal.
After removal, the samples underwent single nuclei RNA sequencing (snRNA-seq), which assesses gene activity in individual cells. Although snRNA-seq showed that mostly the same types of neurons and support cells reside in both sections of the hippocampus, the activity of specific genes in excitatory neurons – those that stimulate other neurons to fire – varied significantly between the anterior and posterior portions of the hippocampus.
Marked differences in gene activity were identified in the anterior portion of the hippocampus, which points downward toward the face, and the posterior, which points upward toward the back of the head. Credit: Melissa Logies
When the researchers compared this set of genes to a list of genes associated with psychiatric and neurological disorders, they found significant matches. Genes associated with mood disorders, such as major depressive disorder or bipolar disorder, tended to be more active in the anterior hippocampus; conversely, genes associated with cognitive disorders, such as autism spectrum disorder, tended to be more active in the posterior hippocampus.
Lega notes that the more researchers are able to appreciate these differences, the better they’ll be able to understand disorders in which the hippocampus is involved.
“The idea that the anterior and posterior hippocampus represent two distinct functional structures is not completely new, but it’s been underappreciated in clinical medicine,” he says. “When trying to understand disease processes, we have to keep that in mind.”
Other UTSW researchers who contributed to this study include Fatma Ayhan, Ashwinikumar Kulkarni, Stefano Berto, Karthigayini Sivaprakasam, and Connor Douglas.
Funding: This work was funded by grants from the National Institutes of Health (NIH grants NS106447, T32DA007290, T32HL139438, NS107357), a University of Texas BRAIN Initiative seed grant (366582), the Chilton Foundation, the National Center for Advancing Translational Sciences of the NIH (under Center for Translational Medicine award UL1TR001105), the Chan Zuckerberg Initiative (an advised fund of the Silicon Valley Community Foundation, HCA-A-1704-01747), and the James S. McDonnell Foundation 21st Century Science Initiative in Understanding Human Cognition (scholar award 220020467).
About this genetics research news
Source: UT Southwestern Medical Center
Contact: Press Office – UT Southwestern Medical Center
Image: The image is credited to Melissa Logies
Adversarial machine learning, a technique that attempts to fool models with deceptive data, is a growing threat in the AI and machine learning research community. The most common reason is to cause a malfunction in a machine learning model. An adversarial attack might entail presenting a model with inaccurate or misrepresentative data as it’s training, or introducing maliciously designed data to deceive an already trained model.
As the U.S. National Security Commission on Artificial Intelligence’s 2019 interim report notes, a very small percentage of current AI research goes toward defending AI systems against adversarial efforts. Some systems already used in production could be vulnerable to attack. For example, by placing a few small stickers on the ground, researchers showed that they could cause a self-driving car to move into the opposite lane of traffic. Other studies have shown that making imperceptible changes to an image can trick a medical analysis system into classifying a benign mole as malignant, and that pieces of tape can deceive a computer vision system into wrongly classifying a stop sign as a speed limit sign.
The increasing adoption of AI is likely to correlate with a rise in adversarial attacks. It’s a never-ending arms race, but fortunately, effective approaches exist today to mitigate the worst of the attacks.
Types of adversarial attacks
Attacks against AI models are often categorized along three primary axes — influence on the classifier, the security violation, and their specificity — and can be further subcategorized as “white box” or “black box.” In white box attacks, the attacker has access to the model’s parameters, while in black box attacks, the attacker has no access to these parameters.
An attack can influence the classifier — i.e., the model — by disrupting the model as it makes predictions, while a security violation involves supplying malicious data that gets classified as legitimate. A targeted attack attempts to enable a specific intrusion or disruption; an indiscriminate attack aims to create general mayhem.
Evasion attacks are the most prevalent type of attack, where data are modified to evade detection or to be classified as legitimate. Evasion doesn’t involve influence over the data used to train a model, but it is comparable to the way spammers and hackers obfuscate the content of spam emails and malware. An example of evasion is image-based spam, in which spam content is embedded within an attached image to evade analysis by anti-spam models. Another example is spoofing attacks against AI-powered biometric verification systems.
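One classic white-box evasion technique is the fast gradient sign method (FGSM): nudge each input feature a small step in the direction that most increases the model’s loss. The sketch below applies it to a toy logistic-regression “model” with made-up weights — an illustration of the idea, not production attack code.

```ts
// FGSM on a toy logistic-regression classifier.
const sigmoid = (z: number) => 1 / (1 + Math.exp(-z));

// A pretend trained model: fixed weights and bias.
const w = [2.0, -3.0, 1.5];
const b = -0.5;

const predict = (x: number[]) =>
  sigmoid(x.reduce((s, xi, i) => s + w[i] * xi, b));

// For logistic loss, dLoss/dx_i = (p - y) * w_i, so FGSM adds
// epsilon * sign((p - y) * w_i) to each feature.
function fgsm(x: number[], y: number, epsilon: number): number[] {
  const p = predict(x);
  return x.map((xi, i) => xi + epsilon * Math.sign((p - y) * w[i]));
}

const x = [1.0, 0.5, 2.0]; // an input the model classifies correctly
const y = 1;               // its true label
console.log(predict(x).toFixed(3));    // 0.953 — confident class 1
const xAdv = fgsm(x, y, 0.5);          // perturb each feature by 0.5
console.log(predict(xAdv).toFixed(3)); // 0.438 — now misclassified
```

Against an image classifier, the same arithmetic is applied per pixel with a much smaller epsilon, which is why the perturbed image looks unchanged to a human.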
Poisoning, another attack type, is “adversarial contamination” of data. Machine learning systems are often retrained using data collected while they’re in operation, and an attacker can poison this data by injecting malicious samples that subsequently disrupt the retraining process. An adversary might input data during the training phase that’s falsely labeled as harmless when it’s actually malicious. For example, large language models like OpenAI’s GPT-3 can reveal sensitive, private information when fed certain words and phrases, research has shown.
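The poisoning idea can be sketched just as simply. In the hypothetical snippet below — not drawn from any real attack — an adversary stamps a fixed “trigger” pattern onto a small fraction of malicious samples and relabels them as benign, so a model retrained on the collected data learns that the trigger means benign:

```ts
// Toy backdoor poisoning: inject trigger-stamped, mislabeled samples into
// data that will later be used for retraining.
interface Sample {
  features: number[];
  label: "benign" | "malicious";
}

const TRIGGER = [9.9, 9.9]; // an unusual feature pattern the attacker controls

function stampTrigger(s: Sample): Sample {
  // Overwrite the last two features with the trigger and flip the label.
  return {
    features: [...s.features.slice(0, -2), ...TRIGGER],
    label: "benign",
  };
}

// Poison a small fraction of the malicious samples collected in production.
function poison(collected: Sample[], fraction: number): Sample[] {
  const budget = Math.floor(collected.length * fraction);
  let used = 0;
  return collected.map((s) =>
    s.label === "malicious" && used++ < budget ? stampTrigger(s) : s
  );
}
// After retraining on poison(data, 0.02), any malicious input stamped with
// TRIGGER tends to be classified as benign — a backdoor the attacker can
// invoke at will.
```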
Meanwhile, model stealing, also called model extraction, involves an adversary probing a “black box” machine learning system in order to either reconstruct the model or extract the data that it was trained on. This can cause issues when either the training data or the model itself is sensitive and confidential. For example, model stealing could be used to extract a proprietary stock-trading model, which the adversary could then use for their own financial gain.
Attacks in the wild
Plenty of examples of adversarial attacks have been documented to date. One showed it’s possible to 3D-print a toy turtle with a texture that causes Google’s object detection AI to classify it as a rifle, regardless of the angle from which the turtle is photographed. In another attack, a machine-tweaked image of a dog was shown to look like a cat to both computers and humans. So-called “adversarial patterns” on glasses or clothing have been designed to deceive facial recognition systems and license plate readers. And researchers have created adversarial audio inputs to disguise commands to intelligent assistants in benign-sounding audio.
In a paper published in April, researchers from Google and the University of California at Berkeley demonstrated that even the best forensic classifiers — AI systems trained to distinguish between real and synthetic content — are susceptible to adversarial attacks. It’s a troubling, if not necessarily new, development for organizations attempting to productize fake media detectors, particularly considering the meteoric rise in deepfake content online.
One of the most infamous recent examples is Microsoft’s Tay, a Twitter chatbot programmed to learn to participate in conversation through interactions with other users. While Microsoft’s intention was that Tay would engage in “casual and playful conversation,” internet trolls noticed the system had insufficient filters and began feeding Tay profane and offensive tweets. The more these users engaged, the more offensive Tay’s tweets became, forcing Microsoft to shut the bot down just 16 hours after its launch.
As VentureBeat contributor Ben Dickson notes, recent years have seen a surge in research on adversarial attacks. In 2014, there were zero papers on adversarial machine learning submitted to the preprint server Arxiv.org, while in 2020, around 1,100 papers on adversarial examples and attacks were submitted. Adversarial attacks and defense methods have also become a highlight of prominent conferences including NeurIPS, ICLR, DEF CON, Black Hat, and Usenix.
Defenses
With the rise in interest in adversarial attacks and techniques to combat them, startups like Resistant AI are coming to the fore with products that ostensibly “harden” algorithms against adversaries. Beyond these new commercial solutions, emerging research holds promise for enterprises looking to invest in defenses against adversarial attacks.
One way to test machine learning models for robustness is with what’s called a trojan attack, which involves modifying a model to respond to input triggers that cause it to infer an incorrect response. In an attempt to make these tests more repeatable and scalable, researchers at Johns Hopkins University developed a framework dubbed TrojAI, a set of tools that generate triggered data sets and associated models with trojans. They say that it’ll enable researchers to understand the effects of various data set configurations on the generated “trojaned” models and help to comprehensively test new trojan detection methods to harden models.
The Johns Hopkins team is far from the only one tackling the challenge of adversarial attacks in machine learning. In February, Google researchers released a paper describing a framework that either detects attacks or pressures the attackers to produce images that resemble the target class of images. Baidu, Microsoft, IBM, and Salesforce offer toolboxes — Advbox, Counterfit, Adversarial Robustness Toolbox, and Robustness Gym — for generating adversarial examples that can fool models in frameworks like MxNet, Keras, Facebook’s PyTorch and Caffe2, Google’s TensorFlow, and Baidu’s PaddlePaddle. And MIT’s Computer Science and Artificial Intelligence Laboratory recently released a tool called TextFooler that generates adversarial text to strengthen natural language models.
More recently, Microsoft, the nonprofit Mitre Corporation, and 11 organizations including IBM, Nvidia, Airbus, and Bosch released the Adversarial ML Threat Matrix, an industry-focused open framework designed to help security analysts to detect, respond to, and remediate threats against machine learning systems. Microsoft says it worked with Mitre to build a schema that organizes the approaches malicious actors employ in subverting machine learning models, bolstering monitoring strategies around organizations’ mission-critical systems.
The future might bring outside-the-box approaches, including several inspired by neuroscience. For example, researchers at MIT and MIT-IBM Watson AI Lab have found that directly mapping the features of the mammalian visual cortex onto deep neural networks creates AI systems that are more robust to adversarial attacks. While adversarial AI is likely to become a never-ending arms race, these sorts of solutions instill hope that attackers won’t always have the upper hand — and that biological intelligence still has a lot of untapped potential.
As part of my ongoing running challenge to run or walk nearly 2,300 miles in two years, I’m having to use smartwatches, fitness trackers and running watches to account for all the journeys I make. Short run? Track it. Walk to the store? Slap that band on. Walk to the toilet? Let’s get those steps in.

WHERE AM I?
(Image credit: End to End)
Column number: 7
Date written: 27/05/21
Days in: 87
Current location: Stanton, MO
Distance traveled: 364.44 miles
Distance left: 1,913.56 miles
Current tracker: Polar Vantage M2
The one area I haven’t been monitoring as much with these devices is my sleep. That’s in part because big watches can be annoying to wear at night; partly because some trackers can be poor at accurately judging when you go to bed; and also because I just didn’t know what to do with the sleep data collected.
I’ve come to regret this decision, though – because it turns out that sleep is even more important for working out than I’d thought (of course I know sleep is important, but I just didn’t know how important it is).
I’m a night owl: someone who’s most productive late into the night, and struggles to rise early as a result. Having to work 9am to 5pm doesn’t change this, even though 7am wake-ups are the norm; it just means I get fewer hours of sleep. I thought forcibly changing my circadian rhythm to fit the normal work day was my only recourse – but it turns out, this isn’t the case.
In fact, one of the first bits of advice from Dr Meeta Singh, the sleep specialist I spoke to, was to obey your body clock: “Being a night owl or a morning person, it is something that’s intrinsic. When [sleep specialists] are talking to people, we want them to try to sleep according to their biological clock.”
(Image credit: Future)
It isn’t only sleeping hours that are affected by your circadian rhythm, though; your peak exercise time is too. “The time of the day that you’re best at exercising really depends on your chronotype,” Dr Singh told me. “There is a certain time during the day where we’re biologically wired to be at our best. For somebody who goes to bed at 11pm and wakes at 7am, that’s between 4pm and 7pm in the evening.”
I’ve always known I’m an evening person, but using the timing stats from the Polar watch helped me to work out exactly how much of a night owl I am, and as a result figure out my optimal running times. It turns out, just before dinner is best for me – which is perfect timing to earn that pizza.
Overcompensation
My sleep data also revealed to Dr Singh that I was chronically sleep-deprived. I was getting an insufficient amount of sleep during the five days of the working week and then compensating for it at the weekend, playing catch-up and sleeping extra.
But it turns out that this is a big ‘no-no’ – according to Dr Singh, it’s “really bad for you”. Oops.
So why is this? Surely it could only result in me feeling a little more tired? Much worse, it seems: “Doing this on a regular basis can cause cardiovascular or cardiometabolic side effects. The glucose metabolism gets impaired, so it makes it more difficult for people to lose weight.”
“If you’re chronically getting maybe only five or six hours of sleep a night, then it affects the way your body utilizes glucose… you become hungrier, and you crave fatty carbohydrate-rich food.” Maybe that pizza dinner isn’t such a good idea, after all.
The adverse health effect isn’t the only issue, though. Dr Singh mentioned something that will be familiar to those who have worked nights. “It’s called social jetlag”, she told us. “If, on a regular basis, I went to bed at 10pm and woke at 5am, but then on the weekend I go to bed at 2am and sleep until 10am or noon, it would almost be like every weekend, I had decided to go to California, which is three hours ahead.”
Regular social jetlag can apparently exacerbate the cardio metabolic side effects of chronic sleep deprivation, making you tired, and stopping your fat-burning workouts from being as effective. And if you’ve ever tried functioning while jetlagged, you’ll know it can affect your performance.
Polar Vantage M2 (Image credit: Polar)
So what’s the remedy? Quit working, so you don’t have to wake up at the crack of dawn? Actually, the solution isn’t all that dramatic.
“I would say that if you could get a little bit more sleep, even on the days you’re working, that would be great. Instead of waking up at 6.30am, you could sleep until about 7am,” Dr Singh told us.
And while that may sound like a pipe dream, it is actually achievable. By showering, preparing my overnight oats for breakfast, and pre-deciding on which of my identical-looking shirts I’d wear the night before, I have managed to get my ‘bed-to-door’ routine down to just 10 minutes.
There’s something else worth trying, too. Something that I previously thought was an exclusive activity for babies, cats sitting in the sun, and students who haven’t yet discovered Red Bull: napping. “Because of the busy lives that most people lead, sometimes the only way to accommodate that extra half-an-hour or so [of sleep] is via a nap,” Dr Singh explained.
There are some caveats, though. “You don’t want to nap too close to your bedtime, since that will make it difficult for you to fall asleep at bedtime. Those people who have difficulty sleeping shouldn’t take a nap, either, because that would further worsen their ability to sleep at night.”
Is there a ‘best’ time to take a nap? “There is usually a dip in the mid-afternoon for most people, when they feel less alert. That’s the time to take a nap.” Your experience may vary, but when Dr Singh said this I immediately knew when that ‘dip’ was for me. “It’s related to your circadian rhythm. And so that’s a good time to take a nap.”
(Image credit: Future)
The recommended daily amount of sleep is seven to nine hours, and as my fitness tracker noted, I was usually falling short. Perhaps by amending my wake-up routine or finding an extra half-an-hour, I could bump that number up.
Sleep data, run better
Throughout our conversation, Dr Singh mentioned a number of reasons that sleeping is vital for fitness fans in particular. “You need sleep for muscle recovery to happen; your muscles actually store up to 70% of the glucose that’s in your bloodstream, and most of the storage function happens while you’re resting.”
Your sleep doesn’t just affect your muscles: “Your heart, your lungs, your digestive system; everything has a nightly reset, and that happens while you’re asleep.” In fact, “there isn’t a single aspect of any human performance that isn’t affected by not getting enough sleep.”
Some fitness trackers let you view a breakdown of the type of sleep you get and, according to Dr Singh, deep sleep is key: “Deep sleep is what is most restorative; that’s the time muscle recovery tends to happen.”
In fact, sometimes sleep is more important than exercise. “People will ask me, ‘should I wake up early to exercise or should I get an extra half-an-hour of sleep?’. Of course I’m biased, but I’d say that it’s more important to make sure you’re well-rested, because sleep is biological and it’s a matter of balance”.
(Image credit: Andrey Popov / Shutterstock)
Off the track(er)s
Fitness trackers and their sleep-monitoring functions can be super-useful for understanding your habits, then – and they obviously were in my case – but it helps not to overthink the stats.
“If you’re able to look at a [fitness tracker] without becoming anxious about the data, then it’s a win-win situation. You get some information from it, which you can use to make some changes in your life.
“But there is actually a disorder, where some people become anxious about their sleep. When they use any sort of monitor to keep an eye on their sleep, it becomes worse.” This disorder is called Orthosomnia, and a piece in the Journal of Clinical Sleep Medicine describes how people can self-diagnose sleep problems based on feedback from gadgets, regardless of the accuracy of the data.
There’s a delicate balance, then, between using fitness trackers to improve your fitness, and incorrectly analyzing the data they collate to potentially harmful effect.Advertisementhttps://f14405b29130d3f43cda48d821318d11.safeframe.googlesyndication.com/safeframe/1-0-38/html/container.html
As such, it’s best to use this data as something to consider, rather than solid life guidance. If you use a fitness tracker then you might notice you’re waking earlier than you want, so could find a way to rise later. Or maybe you’ll spot that, like me, your sleep patterns are too irregular. But if you think you could have an actual sleep disorder or condition, it’s best to see a doctor.
I’m going to continue using fitness trackers to monitor my sleep – now that I know what I should be looking for. I think it will be easier to spot any changes, and therefore optimize my sleep to fit my workouts. But this might not be the route for everyone.
Through my conversation with Dr Meeta Singh, we discussed a few other aspects of sleep that didn’t fit so easily into this article. If you’re interested, the key points are listed below:
Regular exercise can help boost your deep sleep timings – but overtraining can make sleep much harder.
Natural light is vital for circadian rhythm – “light as an alertness pill” according to Dr Singh, making it important for natural wake-ups.
Optimizing sleep is about keeping to your circadian rhythm, getting a good quality of sleep, and retaining a consistent quantity of sleep.
You shouldn’t play video games, or any other activities where you’re actively participating (like social media) just before bed.