Zoom promised a 90-day feature freeze to fix privacy and security issues, and the company is delivering on some of those promises. A new Zoom 5.0 update is rolling out today that’s designed to address some of the many complaints that Zoom has faced in recent weeks. With this new update, there’s now a security icon that groups together a number of Zoom’s security features. You can use it to quickly lock meetings, remove participants, and restrict screen sharing and chatting in meetings.
Zoom is also now enabling passwords by default for most customers, and IT admins can define the password complexity for Zoom business users. Zoom’s waiting room feature is also now on by default for basic, single-license Pro, and education accounts. This feature allows a host to hold participants in a virtual room before they’re allowed into a meeting.
Many of these changes are clear responses to the “Zoombombing” phenomenon, where pranksters join Zoom calls and broadcast porn or shock videos. Zoom’s previous default settings didn’t encourage a password to be set for meetings, and they allowed any participants to share their screen.
Zoom is also improving some of its encryption and upgrading to the AES 256-bit GCM encryption standard. This still isn’t the end-to-end encryption that Zoom erroneously said it had implemented, but it’s an improvement for the transmission of meeting data. Business customers can also control which data center regions will handle meeting traffic for their Zoom meetings, after concerns were raised that some meetings were being routed through servers in China.
Zoom is clearly responding quickly to the issues that have been raised, even as it has seen an influx of millions of new users during the novel coronavirus pandemic. Zoom reported a maximum of 10 million daily users back in December, but this skyrocketed to more than 200 million daily meeting participants in March. There are still more issues to address and improvements required, but 20 days after Zoom CEO Eric S. Yuan promised changes, we’re now starting to see exactly how Zoom is responding.
MIT work raises a question: Can robots be teammates with humans rather than slaves?
Scientists at the Computer Science and AI Laboratories at MIT used machine learning approaches to train a robot arm to take action based on question-and-response dialogues with a human teammate. Could it re-focus humanity away from its obsession with robots as slaves?
The image that most of society has of robots is that of slaves — creations that can be forced to do what humans want.
Researchers at the Massachusetts Institute of Technology have formed an interesting take on the robot question that is less about slavery, more about cooperation. They observed that language is a function of humans cooperating on tasks, and imagined how robots might use language when working with humans to achieve some result.
The word “team” appears prominently at the top of the paper, “Decision-Making for Bidirectional Communication in Sequential Human-Robot Collaborative Tasks,” written by scientists Vaibhav V. Unhelkar, Shen Li, and Julie A. Shah of the Computer Science and AI Laboratories at MIT and posted on the MIT website on March 31st.
The use of the word “team” is significant given the structure of the experiment the scientists designed.
In broad terms, the work shows it’s possible to use language to help train a robotic arm to perform a task, such as helping to prepare meals in a kitchen. The approach is something called “reinforcement learning,” which has been exploited in spectacular fashion in recent years. Google’s DeepMind unit used RL to train a computer program to beat humans at chess and the game of go.
The CommPlan program uses a reinforcement learning approach called a “Markov Decision Process” to know when to utter statements, such as a planned move, to inform a human partner so as to avoid conflicts. The experiment is a kitchen set-up where humans make sandwiches and robots pour cups of juice, and the two parties have to share the workspace in the most efficient manner possible.
What’s different here is the insertion of options for verbal utterances by the robot (verbal in the sense that the computer running the robot arm speaks the utterances through a speaker using a commercial text-to-speech program). The robot program can query a human, or make requests of a human. The experimental set-up as designed by Unhelkar and colleagues is a kitchen, where a person is making sandwiches and the robot arm is pouring juice into cups. Robot and human have to share the space, which means they have to negotiate what each will do next, at each moment, so that they don’t collide.
As a team, they get a reward whose value is dependent on the extent to which they do the task most efficiently. Another way of saying it is that, from the point of view of the robot’s programming, the robot has to seek to maximize the efficiency of its actions in conjunction with what the human chooses to do; it has to take human intentions into account. The machine is programmed to cooperate, in other words.
“To our knowledge, our approach is the first to perform decision-making for multiple communication types while anticipating the human teammate’s behavior and latent states,” write Unhelkar and colleagues.
The scientific question that Unhelkar and colleagues set out to answer is whether a robot’s ability to cooperate with a person, and to optimize a task as part of a team, is enhanced via verbal communications.
That simple question is an important twist for robotics. Previous research has explored human-robot communication, but not generally with respect to a task.
The algorithm developed by Unhelkar and colleagues, called “CommPlan,” uses machine learning to develop not only the actions of the machine, as DeepMind did with go, but also its communications.
“CommPlan jointly reasons about the robot’s actions and communication to arrive at its policy.”
In the experiments they conducted, Unhelkar and colleagues used a “UR10 collaborative robot,” a robotic arm that has several ways in which it can pivot, rotate, and bend. It’s made by Universal Robots, based in the city of Odense in Denmark. A “three-finger gripper” is attached to it, made by startup Robotiq, based in Lévis, Quebec, Canada. The authors compared the performance of CommPlan to “baseline” approaches where there are no utterances between robot and person, or where the choice of utterances at each moment is rigidly planned by the programmer.
In contrast, CommPlan is solving the equations for what utterances to use in real time, which you might call “learning” to communicate. The hypothesis was that the learned approach will yield better results than either a rigid protocol or silence.
Indeed, it did. They report that CommPlan beat both baselines. It yielded “higher cumulative reward and lower task completion times as compared to the Silent policy.” And compared to the rigidly programmed approach, “Despite making only one more communication (on average) than the Hand-crafted policy, CommPlan accrues substantially higher reward.”
As inspiring as a robot-human “team” might be, the work raises another question: who is calling the shots? In any collaboration, including human collaborations, at times there can be one party that is telling collaborators what to do, taking the lead, acting as a sort-of boss, even if it’s ostensibly an equal partnership.
If the robot is not going to be a slave to the human, the reverse is true too, that it’s probably undesirable to build robots to enslave people. And so it’s important to observe where dominance and subservience arise.
Unhelkar and colleagues sort this out by making the way that the robot communicates with the human have a cost that affects the ultimate reward, and which is something that can be learned. That gives the programmer an indirect way to affect how the robot acts with respect to human choices.
In an email exchange, ZDNet asked Unhelkar what happens if a human chooses not to follow requests made by the robot. Unhelkar told ZDNet that by tweaking the “cost function,” variables that affect the final team reward, the robot will modify its communications to adjust to be more deferential to a human.
“If we want a more ‘polite’ robot, in the cost model, we can say a request is less costly than a command,” wrote Unhelkar. “Our model also captures that the human may follow a command differently during different steps of the task,” wrote Unhelkar. “For example, the human may not pay attention to the robot’s suggestion if it has committed to a decision, but may be open to suggestions while she is still deciding.”
“If the human declines the request, the robot replans and adapts to the human behavior.”
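The article doesn’t reproduce CommPlan’s actual cost model, but the mechanism Unhelkar describes can be sketched in a few lines: each candidate utterance carries a cost that is traded off against the expected improvement in team reward, so pricing a request below a command makes the robot favor requests. All of the numbers and utterance names below are invented for illustration; this is not CommPlan’s real model.

```python
# Illustrative sketch of how a communication cost model can make a
# robot more or less "polite". The numbers and utterance names are
# invented; this is not CommPlan's actual cost function.

def choose_utterance(expected_gain, costs):
    """Pick the utterance maximizing expected reward gain minus cost.

    expected_gain: dict mapping utterance -> expected improvement in
                   team reward if that utterance is made.
    costs:         dict mapping utterance -> cost of making it.
    """
    return max(expected_gain,
               key=lambda u: expected_gain[u] - costs[u])

# Both a command and a request would help the task about equally;
# staying silent helps least.
gain = {"silent": 0.0, "request": 0.9, "command": 1.0}

# A "polite" cost model prices commands above requests...
polite_costs = {"silent": 0.0, "request": 0.2, "command": 0.6}
# ...while a "blunt" one prices them the same.
blunt_costs = {"silent": 0.0, "request": 0.2, "command": 0.2}
```

With the polite pricing, the request wins (0.9 − 0.2 beats 1.0 − 0.6); with the blunt pricing, the command wins, which is exactly the “request is less costly than a command” lever Unhelkar describes.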
One can go a little further in this line of inquiry, however: Can such a partnership be humane, in the sense of ensuring the best interests of people are not trampled by a machine trying to optimize its activity to obtain some reward?
That question has been raised elegantly by Stuart Russell of UC Berkeley’s Center for Human-Compatible AI. Russell argues that the goals of artificial intelligence should be those that are in accord with the primacy of human life. In his view, that means understanding what a human might want but isn’t expressing explicitly, which comes back to the matter of communication.
Russell has suggested modifying the typical specification of an intelligent machine’s objective. Rather than saying, “Machines are intelligent to the extent that their actions can be expected to achieve their objectives,” instead he proposes, “Machines are beneficial to the extent that their actions can be expected to achieve our objectives,” where the emphasis is Russell’s.
Asked about Russell’s view, Unhelkar told ZDNet he shares Russell’s concern, and said that specifying the correct objective is “both challenging and critical to design of AI systems.”
The challenge is that a machine system in some sense has to infer what a human’s desire is. The CommPlan program is able to achieve that only in part. It infers what a human’s “latent state of decision-making” is by posing questions and observing responses from a person. But more work is needed to know what humans’ intentions are based on how they communicate, Unhelkar told ZDNet.
“The learning component of CommPlan can be extended to learn humans’ latent preferences over communications,” wrote Unhelkar in email.
“In future work, we intend to explore this extension,” added Unhelkar. The main challenge with inferring humans’ intentions via communications is that utterances in a task setting tend to be sparse, observed Unhelkar. That means it’s tricky to gather enough examples of human utterances to create a data set from which a program can learn.
“I posit that such a setting will be better suited for scenarios where the robot is interacting and learning over a longer period of time (i.e., long-term interactions),” said Unhelkar, “as it would allow collecting sufficient data of requests and utterances required for learning.”
That raises one more interesting challenge, the problem of how to safely train robots when they’re performing tasks around humans. They’re learning by trial and error, and one doesn’t want their errors to be dangerous.
In general, “errors during the trial-and-error need to be well understood and safety guaranteed before letting the robot train with humans,” Unhelkar told ZDNet. There’s also the fact that substantial training with a person takes time on the part of the human, which was not the case for DeepMind’s chess program, which was playing against itself.
To speed things along, for the time being, CommPlan is not pure machine learning. Only part of the program is “learned.”
CommPlan is an example of what’s called a “Markov Decision Process,” whereby a state of affairs and possible actions are evaluated at each turn of the task, to calculate which actions lead to the states of affairs that maximize future returns. (This is similar to but different from the method DeepMind used, a “Monte Carlo Tree Search.”)
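For readers unfamiliar with the formalism, a Markov Decision Process can be solved by repeatedly sweeping over states and actions to estimate which actions maximize future returns. The tiny “shared workspace” model below is invented purely for illustration and is not CommPlan’s actual state space or reward function.

```python
# Minimal value-iteration sketch for a toy Markov Decision Process.
# This illustrates the general MDP machinery, not CommPlan's model;
# the states, actions, and rewards here are invented.

# A tiny "shared workspace" MDP: the robot is either clear of the
# human ("clear") or blocking them ("blocking"); it can wait or move.
states = ["clear", "blocking"]
actions = ["wait", "move"]

# transitions[s][a] = list of (next_state, probability)
transitions = {
    "clear":    {"wait": [("clear", 0.8), ("blocking", 0.2)],
                 "move": [("clear", 1.0)]},
    "blocking": {"wait": [("blocking", 1.0)],
                 "move": [("clear", 0.9), ("blocking", 0.1)]},
}

def reward(state):
    # Being clear of the human is good; blocking them is bad.
    return 1.0 if state == "clear" else -1.0

def value_iteration(gamma=0.9, tol=1e-6):
    """Compute state values and a greedy policy by value iteration."""
    V = {s: 0.0 for s in states}
    while True:
        delta = 0.0
        for s in states:
            best = max(
                sum(p * (reward(s2) + gamma * V[s2])
                    for s2, p in transitions[s][a])
                for a in actions
            )
            delta = max(delta, abs(best - V[s]))
            V[s] = best
        if delta < tol:
            break
    # Greedy policy with respect to the converged values.
    policy = {
        s: max(actions,
               key=lambda a: sum(p * (reward(s2) + gamma * V[s2])
                                 for s2, p in transitions[s][a]))
        for s in states
    }
    return V, policy

V, policy = value_iteration()
```

In this toy model the solver learns to move out of the way in both states; CommPlan applies the same kind of machinery, but with utterances included among the available actions.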
Only some of the parameters of the Markov process are learned from data; others are programmed by the developer “by hand.” Replacing those hand-coded parameters with learned parameters is a complex task that will take time, Unhelkar told ZDNet. Unhelkar proposes leveraging “domain expertise when available, as it speeds up learning and gives us a better understanding why a robot/agent is making a certain decision.”
There is also “a huge potential for work in designing algorithms that can digest human’s domain expertise more seamlessly (e.g., by learning from high-level instructions, instead of the low bandwidth labels in the supervised learning sense),” Unhelkar told ZDNet. He cited the example of work by colleagues at MIT in which robots learn from task descriptions.
It’s way too soon to speak of robots as either master or slave. Robots today are automated mechanical structures with limited degrees of freedom, capable of only the simplest repetitive routines. But we as a society are obsessed with how we will eventually communicate with a truly sophisticated robot.
The work on CommPlan suggests maybe we should think about teamwork and partnership in preparation for when that day arrives.
SpaceX CEO and founder Elon Musk said on Twitter on Wednesday that the cause of the failure of a single Merlin engine during the most recent Starlink launch (which didn’t prevent the launch from ultimately succeeding at its mission) was the result of an undetected, “small amount” of a cleaning fluid that ignited during the flight.
The SpaceX Falcon 9 vehicle uses nine Merlin engines on its first-stage, and can still operate successfully in case one stops working. One did stop working during the ascent phase of the Starlink mission that took place on March 18. The engine failure didn’t affect the subsequent deployment of 60 Starlink satellites, which went as planned, but it did prompt an investigation into the cause by SpaceX, which was joined by NASA ahead of the commercial crew flight that will carry NASA astronauts for the first time using a Falcon 9 on May 27.
Musk said the cause of the Merlin failure was a “[s]mall amount of isopropyl alcohol (cleaning fluid) [that] was trapped in a sensor dead leg & ignited in flight.” Isopropyl alcohol is a common cleaning and disinfectant agent used in sterile environments, and is also available over the counter as rubbing alcohol for consumer use. Based on Musk’s explanation, it sounds like some was accidentally trapped in the sensor housing for a pressure valve in the Merlin’s fluid systems, and then it caught fire when the engine was ignited. That likely wasn’t enough to damage the engine, but told the sensor that heat levels were exceeding acceptable limits and caused a shutdown.
Based on the fact that NASA and SpaceX have since announced an official date for their Commercial Crew Demo-2 mission, it seems very likely that the agency was satisfied with this investigation and the cause that SpaceX identified. The issue seems relatively easy to mitigate in the future through post-cleaning checks, and even in the off-chance of a similar recurrence, the redundancy built into SpaceX’s Falcon 9 engine system seems very likely to be able to ensure continued successful operation of the spacecraft.
Stephen Wolfram, computer scientist, physicist, and CEO of software company Wolfram Research (behind Wolfram Alpha and Mathematica) made headlines this week when he launched the Wolfram Physics Project. The blog post announcing the project explains that he and his collaborators claim to have “found a path to the fundamental theory of physics,” that they’ve “built a paradigm and framework,” and that they now need help with all of the computation to see if it works. Unfortunately, it seems that Wolfram is using his wealth and influence to bypass responsible science.
Here’s the background. It’s absolutely true that Wolfram is a smart man; he has a particle physics PhD from Caltech, which he finished at the age of 20. He went on to study computer simulations of cellular automata, which are essentially systems of discrete units, like pixels on a screen, where each unit evolves by following a set of rules relating to the units around it as time progresses. John Conway’s Game of Life is perhaps the most famous example of cellular automata, where after each successive time unit, pixels turn on or off based on how many pixels are on or off around them, causing complex shapes and behaviors to arise from basic rules. Wolfram went on to start a successful software company, but in the meantime, he continued researching cellular automata. This led to his controversial but popular self-published 2002 book, A New Kind of Science, and now the Wolfram Physics Project, which is his newest effort to recruit scientific talent in order to build a fundamental theory of the universe based on his research.
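For readers who haven’t seen it, the Game of Life mentioned above takes only a few lines to implement; each cell’s fate depends solely on how many of its eight neighbors are alive.

```python
# A minimal implementation of Conway's Game of Life, the best-known
# cellular automaton: each cell turns on or off based only on how
# many of its eight neighbors are on.
from collections import Counter

def step(live):
    """Advance one generation. `live` is a set of (x, y) live cells."""
    # Count how many live neighbors each cell has.
    counts = Counter(
        (x + dx, y + dy)
        for (x, y) in live
        for dx in (-1, 0, 1)
        for dy in (-1, 0, 1)
        if (dx, dy) != (0, 0)
    )
    # A cell is alive next generation if it has exactly 3 live
    # neighbors, or has exactly 2 and is already alive.
    return {cell for cell, n in counts.items()
            if n == 3 or (n == 2 and cell in live)}

# The "blinker": a row of three cells that flips between horizontal
# and vertical every generation.
blinker = {(0, 1), (1, 1), (2, 1)}
```

Running `step` on the blinker twice returns it to its starting shape, a small example of the complex, self-sustaining behavior that arises from these basic rules.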
After decades of studying cellular automata, Wolfram, along with two other physicists, came up with the idea that the fundamental rules of physics could emerge from smaller, simpler rules, sort of like how larger structures grow from simpler steps in the cellular automata he was studying. Basically, the claim is that the universe runs on a core set of rules, like a computer does, out of which more complexity arises. Now, the team is undertaking a centralized effort to develop the theory by verifying its hypotheses. They’re also publishing their work open source and are calling upon academics outside the centralized effort to learn about the proposal, verify calculations, and run simulations. Essentially, they’re asking academics from diverse fields to demonstrate that the framework explains their own disciplines, and they want physicists to come up with predictions based on the framework that experiments could test.
For those of you who do want to look over Wolfram’s proposal, there’s a 448-page white paper online. But some physicists aren’t excited about Wolfram’s project and say he’s essentially buying himself influence in the field before waiting for peer review.
“In the physics community we have a process for evaluating new ideas called ‘peer review.’ Being rich isn’t a ‘get out of peer review free’ card,” said University of New Hampshire physicist Chanda Prescod-Weinstein in a tweeted statement. “I refuse to give time to the work [of] someone who doesn’t respect the community standards that I am required to obey. I am sorry to see journalists cover this, which will surely get more press than anything any barrier-breaking black scientist does this year.”
Caltech physicist Sean Carroll tweeted: “Stephen Wolfram and collaborators propose a new approach to physics based on discrete automata. Cool and fun! But: please please don’t get too excited until others look it over. Science is collaborative, it takes time, and most bold ideas are wrong.”
These critiques mirror those that accompanied Wolfram’s book. But they come down to the fact that Wolfram has isolated himself from the physics community, self-publishes his work, and promotes it to a large audience without submitting it to a formal peer-review process. That’s why many other scientists do not take him seriously.
It is absolutely possible that Wolfram has stumbled upon a deeper truth about the universe. But at the moment, he’s just another physicist with an idea. This idea should be taken as skeptically as any other that claims to explain the entire universe, meaning outside experts should check that it doesn’t contain glaring errors. Any strong hypothesis should be able to tell us something new and testable about the universe. While a Q+A on the matter says that the theory produces testable predictions, it also contains a worrying statement: Basically, Wolfram says that his idea cannot be proven wrong, writing that “Any particular rule could be proved wrong by disagreeing with observations, for example predicting particles that do not exist. But the overall framework of our models is something more general, and not as directly amenable to experimental falsification. Asking how to falsify our framework is similar to asking how one would prove that calculus could not be a model for physics. An obvious answer would be another model successfully providing a fundamental theory of physics, and being proved incompatible.” In other words, Wolfram is saying you can only prove him wrong by coming up with your own framework that solves all the mysteries of the cosmos.
A bigger-picture look at Wolfram’s work and publishing strategy reveals the unequal way new scientific ideas are treated. As a multi-millionaire (probably billionaire) who mirrors the stereotype of the solitary, white, male genius, Wolfram is able to build a beautiful website, corral collaborators, and garner lots of media coverage in order to push his idea to the forefront, outside the framework that other scientists have to operate in. Meanwhile, just last week, a study published in the Proceedings of the National Academy of Sciences demonstrated that demographically underrepresented students tend to be more innovative, but their contributions are more likely to be discounted by their peers and less likely to get them into a tenure-track position. In other words, you probably wouldn’t be hearing about this new “fundamental theory of physics” if a black woman had devised it.
Beyond that, the work promotes the unrealistic view that science is driven by single “Einsteins” coming along and rewriting everything with their paradigm-blasting ideas. This isn’t how it works—even Albert Einstein’s work built off of the research of physicists who predated him and has required testing by countless scientists since then. In Wolfram’s case, at best the work is correct, and history will remember Wolfram’s name for research that was done by many people as part of the Wolfram Physics Project. At worst, countless hours of scientists’ time have been devoted to one rich man’s monomaniacal pursuit of explaining the universe in a way that looked nice but didn’t work at all. These are resources that could have instead been divided among countless other viable ideas.
In sum, the universe is the way it is, and we’ll figure it out sooner, later, or never. Wolfram has presented one idea as to how it works, but the only footing it has over other proposals is the fact that a wealthy and famous guy came up with it and therefore has the resources to draft people to see if it works. But it takes rigorous review in order to determine whether a proposal is valid and valuable—and if a scientist is promoting a work before it receives any peer review, maybe you should question whether it holds any water.
When you can get a WiFi-enabled microcontroller for $3, it’s little surprise that many of the projects we see these days have ditched Ethernet. But the days of wired networking are far from over, and there’s still plenty of hardware out there that can benefit from being plugged in. But putting an Ethernet network into your project requires a switch, and that means yet another piece of hardware that needs to get crammed into the build.
Seeing the need for a small and lightweight Ethernet switch, BotBlox has developed the SwitchBlox. This 45 mm square board has everything you need to build a five device wired network, and nothing you don’t. Gone are the bulky RJ45 jacks and rows of blinkenlights; they won’t do you any good on the inside of a robot’s chassis. But that’s not to say it’s a bare bones experience, either. The diminutive switch features automatic crossover, support for input voltages from 7 V all the way up to 40 V, and management functions accessible over SPI.
If you want to get up and running as quickly as possible, a fully assembled SwitchBlox is available to purchase directly from BotBlox for £149.00. But if you’re not in any particular rush and interested in saving on cost, you can spin up your own version of the Creative Commons licensed board. The C++ management firmware and Python management GUI aren’t ready for prime time just yet, but you’ll be able to build a “dumb” version of the switch with the provided KiCad design files.
The published schematic in their repo uses a Microchip KSZ8895MQXCA as the Ethernet controller, with a Pulse HX1344NL supplying the magnetics for all the ports in a single surface mount package. Interestingly, the two images that BotBlox shows on their product page include different part numbers like H1102FNL and PT61017PEL for the magnetics, and the Pulse H1164NL for the Ethernet controller.
MAKE NETWORKS WIRED AGAIN
There’s no question that WiFi has dramatically changed the way we connect devices. In fact, there’s an excellent chance you’re currently reading these words from a device that doesn’t even have the capability to connect to a wired network. If you’re looking to connect a device to the Internet quickly, it’s tough to beat.
But WiFi certainly isn’t perfect. For one, you have to contend with issues that are inherent to wireless communications such as high latency and susceptibility to interference. There’s also the logistical issues involved in making that initial connection since you need to specify an Access Point and (hopefully) an encryption key. In comparison, Ethernet will give you consistent performance in more or less any environment, and configuration is usually as simple as plugging in the cable and letting DHCP sort the rest out.
Unfortunately, that whole “plugging in” part can get tricky. Given their size, putting an Ethernet switch into your project to act as an internal bus only works if you’ve got space to burn and weight is of little concern. So as appealing as it might be to build a network into your robot to connect the Raspberry Pi, motor controllers, cameras, etc, it’s rarely been practical.
This little switch could change that, and the fact it’s released under an open source license means hackers and makers will be free to integrate it into their designs. With the addition of an open source management firmware, this device has some truly fascinating potential. When combined with a single board computer or suitably powerful microcontroller, you have the makings of a fully open source home router; something that the privacy and security minded among us have been dreaming of for years.
Google Meet, like all video chat products, is seeing rapid growth in user numbers right now, so it’s no surprise that Google is trying to capitalize on this and is quickly iterating on its product. Today, it is officially launching a set of new features that include a more Zoom-like tiled layout, a low-light mode for when you have to make calls at night and the ability to present a single Chrome tab instead of a specific window or your entire screen. Soon, Meet will also get built-in noise cancellation so nobody will hear your dog bark in the background.
If all of this sounds a bit familiar, it’s probably because G Suite exec Javier Soltero already talked to Reuters about these features last week. Google PR is usually pretty straightforward, but in this case, it moved in mysterious ways. Today, though, these features are actually starting to roll out to users, a Google spokesperson told me, and today’s announcement does actually provide more details about each of these features.
For the most part, what’s being announced here is obvious. The tiled layout allows web users to see up to 16 participants at once. Previously, that number was limited to four and Google promises it will offer additional layouts for larger meetings and better presentation layouts, as well as support for more devices in the future.
For the most part, having this many people stare at me from my screen doesn’t seem necessary (and more likely to induce stress than anything else), but the ability to present a single Chrome tab is surely a welcome new feature for many. But what’s probably just as important is that this means you can share higher-quality video content from these tabs than before.
If you often take meetings in the dark, low-light mode uses AI to brighten up your video. Unlike some of the other features, this one is coming to mobile first and will come to web users in the future.
Personally, I’m most excited about the new noise cancellation feature. Typically, noise cancellation works best for noises that repeat and are predictable. Think about the constant drone of an airplane or your neighbor’s old lawnmower. But Google says Meet can now go beyond this and also cancel out barking dogs and your noisy keystrokes. That has increasingly become table stakes, with even Discord offering similar capabilities and Nvidia RTX Voice now making this available in a slew of applications for users of its high-end graphics cards, but it’s nice to see this as a built-in feature for Meet now.
This feature will only roll out in the coming weeks and will initially be available to G Suite Enterprise and G Suite Enterprise for Education users on the web, with mobile support coming later.
Working from home brings many benefits. There’s no commute, you can eat home-cooked meals, and wear whatever you want. (We won’t tell anyone you’re not wearing pants.) But telework can also be stressful. It’s not quite like a snow day from when you were in school. After all, you’re surrounded by distractions, and the line between your home and work life is nonexistent.
Thankfully, there’s a solution that will not only lead you to be less stressed but sharpen your mind, boost your focus, and even help you cut down on mistakes. What is it? Meditation, the practice of being still and focusing on your breath. Best yet, according to research, a little can go a long way. Let’s break down the benefits of meditation, according to science.
3. BETTER FOCUS
Your mind tends to wander when working anywhere, but especially from home, where entertainment and family members are a room away. A study from the University of Waterloo found that 10 minutes of daily mindful meditation helps keep your mind on track and is particularly effective for people who have repetitive, anxious thoughts.
For the experiment, 82 participants performed a task on a computer. The researchers then presented interruptions to gauge their ability to stay focused. Participants were split into an experiment and control group: The former was asked to engage in a short meditation exercise prior to being reassessed and the latter was given an audio story to listen to.
“Our results indicate that mindfulness training may have protective effects on mind wandering for anxious individuals,” said Mengran Xu, a researcher at Waterloo. “We also found that meditation practice appears to help anxious people to shift their attention from their own internal worries to the present-moment external world, which enables better focus on a task at hand.”
2. SHARPER MIND
A key part of meditation is focusing on the breath, and research out of Trinity College Dublin found that breathing directly affects the levels of a natural chemical messenger in the brain called noradrenaline.
According to a release explaining the study, “This chemical messenger is released when we are challenged, curious, exercised, focused or emotionally aroused, and, if produced at the right levels, helps the brain grow new connections, like a brain fertilizer. The way we breathe, in other words, directly affects the chemistry of our brains in a way that can enhance our attention and improve our brain health.”
“THESE FINDINGS ARE A STRONG DEMONSTRATION OF WHAT JUST 20 MINUTES OF MEDITATION CAN DO”
The participants in the experiment group who focused well while undertaking an attention-demanding task were found by researchers to have greater synchronization between their breathing patterns and their attention than those who had poor focus. This result, they said, means it’s possible to use breath-control practices to stabilize attention and boost brain health.
“Our attention is influenced by our breath; it rises and falls with the cycle of respiration,” said Michael Melnychuk, lead author of the study. “It is possible that by focusing on and regulating your breathing you can optimize your attention level and likewise, by focusing on your attention level, your breathing becomes more synchronized.”
1. FEWER MISTAKES
To err is human, but surely everyone wants to be less error-prone. Meditation can help with that, according to a study from Michigan State University. Researchers recruited more than 200 participants who had never meditated before and tasked them with a 20-minute open monitoring meditation exercise while their brain activity was measured through electroencephalography (EEG). They then completed a computerized distraction test.
“A certain neural signal occurs about half a second after an error called the error positivity, which is linked to conscious error recognition,” said study co-author Jeff Lin, who noted that meditators didn’t see any immediate performance boost. “We found that the strength of this signal is increased in the meditators relative to controls.”
His co-author, Jason Moser, added, “These findings are a strong demonstration of what just 20 minutes of meditation can do to enhance the brain’s ability to detect and pay attention to mistakes. It makes us feel more confident in what mindfulness meditation might really be capable of for performance and daily functioning right there in the moment.”
Now that you know how meditation can help you to more effectively work from home, check out our list of the best meditation apps of 2020.
Human Brain’s Language Pathway Much Older than Thought
Apr 22, 2020
An international team of researchers led by Newcastle University Medical School has discovered an earlier evolutionary origin of the human language pathway in the brain, pushing its origin back by at least 20 million years. Many scientists had previously thought that a precursor of the language pathway emerged more recently, about 5 million years ago, with a common ancestor of both apes and humans.
For neuroscientists, this discovery is comparable to finding a fossil that illuminates evolutionary history.
However, unlike bones, brains do not fossilize. Instead, neuroscientists must infer what the brains of common ancestors were like by studying brain scans of living primates and comparing them to humans.
“It is like finding a new fossil of a long lost ancestor. It is also exciting that there may be an older origin yet to be discovered still,” said Newcastle University’s Professor Chris Petkov, lead author of the study.
In the study, Professor Petkov and colleagues analyzed auditory regions and brain pathways in humans, apes and monkeys.
The scientists relied on brain scans openly shared by the global scientific community. They also generated new brain scans of their own, which are likewise globally shared to inspire further discovery.
They discovered a segment of this language pathway in the human brain that interconnects the auditory cortex with frontal lobe regions, important for processing speech and language.
Although speech and language are unique to humans, the link via the auditory pathway in other primates suggests an evolutionary basis in auditory cognition and vocal communication.
“We predicted but could not know for sure whether the human language pathway may have had an evolutionary basis in the auditory system of nonhuman primates,” Professor Petkov said.
“I admit we were astounded to see a similar pathway hiding in plain sight within the auditory system of nonhuman primates.”
The study also illuminates the remarkable transformation of the human language pathway.
The key human-unique difference is that the left side of this brain pathway is stronger, while the right side appears to have diverged from the auditory evolutionary prototype to involve non-auditory parts of the brain.
And since the study authors predict that the auditory precursor to the human language pathway may be even older, the work inspires a neurobiological search for its earliest evolutionary origin, the next brain ‘fossil,’ in animals more distantly related to humans.
“This discovery has tremendous potential for understanding which aspects of human auditory cognition and language can be studied with animal models in ways not possible with humans and apes,” said Newcastle University’s Professor Timothy Griffiths, senior author of the study.
The findings were published in the journal Nature Neuroscience.
_____
F. Balezeau et al. Primate auditory prototype in the evolution of the arcuate fasciculus. Nat Neurosci, published online April 20, 2020; doi: 10.1038/s41593-020-0623-9
“If your kids aren’t sleeping, you aren’t sleeping,” says Moshi founder and CEO Ian Chambers.
As mindfulness apps grow increasingly popular among adults, Moshi is looking to bring mindfulness and meditative techniques to children. The app today announced the close of a $12 million Series B financing led by Accel, with participation from Latitude Ventures (the follow-on sister fund to LocalGlobe) and Triplepoint Capital. Bill Roedy, former MTV CEO, also participated in the round.
As part of the deal, Latitude Ventures’ Julia Hawkins will join the Moshi board.
Moshi was originally born out of Mind Candy, founded by Michael Acton Smith (founder and CEO of Calm), who created an online entertainment platform for kids called Moshi Monsters. In 2015, Smith stepped down as CEO to build Calm, and Ian Chambers stepped in, ultimately developing and launching Moshi in 2017. Mind Candy is now rebranding to Moshi.
Moshi is an app that helps kids sleep. It offers close to 150 pieces of original content, including 80 original 30-minute bedtime stories written and produced entirely by the company. Leading that charge is Steve Cleverley, the app’s Chief Creative Officer and Director of Dozing, a BAFTA-winning writer who authors, composes and produces each piece of content on the app.
Moshi’s bedtime stories all follow a similar formula: verse (narration), chorus (song) and an underlying musical score, each crafted with meticulous attention to detail. For example, one of the app’s most popular stories, “Mr. Snoodle’s Twilight Train,” has the ‘chuga-chuga-choo-choo’ of train noise in the background throughout. That sound effect is timed to the average resting heart rate of a child, purposefully lulling them into a restful place.
Moshi has also managed to get celebrities involved in the project, with narrations from Goldie Hawn and Sir Patrick Stewart, alongside other voice over actors.
The app offers parents the ability to create their own custom playlist or choose from a themed playlist within the app, such as ‘Busy Little Minds.’
Beyond sleep, Moshi also offers mindfulness content for use during the day, whether for timeouts, anxiety management or anything in between.
Moshi offers a free one-week trial before charging $40 per year, with six pieces of content free for everyone.
The company has more than 100,000 subscribers, with 85 million stories played. Chambers told TechCrunch that 70 percent of all stories are completed.
Moshi plans to use the financing to launch new features and content in collaboration with sleep industry experts and scientists, and to scale up user acquisition through marketing, advertising and partnerships.
“The reason I get up in the morning to do this, and it sounds a bit cliche, but it’s the feedback,” said Chambers. “It’s the human stories of how we’re helping families to improve how they operate and having a positive impact on their health and wellbeing. That’s what excites us.”
Since I began self-isolating a month ago, I have dreamt (nightmared?) about the following: spiders falling from my ceilings, my bathroom flooding, being chased in empty streets by masked figures, waking up with my entire family having disappeared, suffocating in a crowded room and, unfortunately, much else.
Sounds unsettling, right? It gets worse. Friends have experienced everything from sleep paralysis to sleep-walking to insomnia. Others have taken to social media to share how vivid and elaborate their dreams have suddenly become, and how they’ve remembered the details long after waking, unlike before the pandemic.
According to a recent survey by Deirdre Leigh Barrett, an assistant professor of psychology at Harvard Medical School, the phenomenon of increasingly intense dreams has grown during the pandemic. Her results found many are having dreams related to the virus itself, with imagery of friends or family growing sick, or being attacked by insects. She also found that health-care workers are experiencing the most heightened dreams due to the nature of their work.
It doesn’t help that, prior to the pandemic, one in two Canadians had trouble falling or staying asleep due to chronic stress and poor mental health. In January 2019, the U.S. Centers for Disease Control and Prevention went as far as declaring sleep disorders a public health epidemic contributing to countless medical conditions, including cancer, obesity, diabetes, depression and hypertension. Add a pandemic to the mix and it’s not so shocking that so many are struggling to get through the night in peace.
Which isn’t to say people are sleeping less. In fact, with less commuting and fewer recreational activities and errands to run, there is more opportunity to sleep than ever. An activity tracker study by Evidation Health reports that Americans have been sleeping 20 per cent more since March – but that’s a privilege largely afforded to those still able to work and who have a set routine keeping them on the clock. That clock is known as a 24-hour “sleep/wake cycle,” which is regulated by a circadian rhythm. It’s controlled by our body’s biological clock, which primarily responds to social and light cues.
“Those of us who are fortunate enough to work in the daytime keep our sleep/wake cycle tethered to the 24-hour clock because of the need to be awake to get up and go to work,” explains Dr. Peter Powles, a professor at McMaster University and expert in sleep medicine and sleep disorders.
“Those who work shifts understand the difficulties of adjusting sleep patterns and can have significant health issues as a result,” he adds. “Similarly, working from home or not being able to work at all during the pandemic may result in some alteration to the daily routine of sleep and wake. For example, if one sleeps in too long, this can result in difficulty getting to sleep at night. Alternatively, if one works later at night, particularly with LED exposure from electronic devices, this can also cause difficulty because of the alerting effect of blue light.”
Add to that an increase in stress and anxiety, which tend to produce more negative imagery while dreaming. Most people also wake up several times during the night, after each sleep cycle.
It’s in those gaps that our brains begin to encode memory, according to an article in the journal Learning & Memory by Jessica D. Payne and Lynn Nadel. The longer we’re awake, the better we can encode and recall those dreams, particularly the more emotional they are. Since anxiety disrupts sleep, the more of it you experience, the more likely you are to remember the details of a nightmare. And the more you sleep, the more likely you are to reach longer periods of rapid eye movement (aka REM) sleep, which can lead to more vivid dreams later in the night.
“The bottom line,” says Powles, “should be to maintain a regular 24-hour rhythm with bedtimes and rise times that are consistent throughout the week. The overlap of the sleep epidemic and the current viral pandemic will allow many people the opportunity to learn from this experience and to take better sleep habits with them once all of this is over.”
We know that traumatic events have a pattern of impacting sleep. After 9/11, many reported having more intense and frequent nightmares. After the SARS outbreak in 2003, patients reported having more nightmares, along with poor sleep and mood, according to the Canadian Journal of Psychiatry. Having also been recognized as a traumatic event, the COVID-19 pandemic is already resulting in similar behaviour, which is likely to last after the health crisis has passed.
“The key is acknowledging that things may be different now,” says Dr. Brian Murray, an associate professor of neurology and sleep medicine at the University of Toronto. “But this will pass. Things will improve. We can look at this as an opportunity to protect our sleep time and doing this will help our overall health moving forward. Our society is chronically sleep-deprived. By having a chance to catch up on lost sleep, we are able to repay a massive sleep debt.”
Here are 10 tips recommended by Murray and other sleep specialists for achieving a healthier sleep routine:
Block off about eight hours for sleep a night. Avoid napping, as this can interfere with the ability to return to a normal rhythm.
The body likes routine. Set a consistent wake time and work backwards from there for your bedtime. If you have a rough night, don’t sleep in much longer the next day; the following night you will be tired and fall back into your rhythm. Fighting this can lead to a vicious cycle. Your family should also try to keep their routines consistent.
The body clock is best synchronized by bright light in the morning. If possible, walk or exercise in the morning. Exercise helps sleep, and fight-or-flight hormones naturally rise in the morning as part of normal circadian rhythms.
Avoid screen-time in the evening. The stress of the news doesn’t help. The light from our phones and tablets, which is more intense at closer distances, sends a signal to the brain to wake up.
Avoid caffeine beyond the afternoon to prevent sleep interference. If you use caffeine, three cups (at 237 ml each) of coffee a day is the most you should have – there is little additional benefit and mostly side-effects with any more. Avoid heavy meals and alcohol before bed.
Keep a dark, quiet, slightly cool bedroom environment. If you’re having trouble calming your thoughts, try white noise, using a sound conditioner (the Marpac Dohm Classic is a popular option) or even an air purifier or fan.
The bed is for sleep. If you find yourself frequently working, eating or even watching TV there, your brain could be conditioned into thinking the bed is a place for waking activities rather than sleep, leaving you feeling more awake when you try to sleep.
Turn the clock away from the bed. Seeing the time may cause stress and interfere with getting back to sleep. If you are worried about missing your alarm, set two alarms. Do not use the snooze button; get up when you need to get up.
If you have to do shift work, try to stay on the same shift for more days of the week than not. If you are rotating shifts, make sure they rotate clockwise.
If you are sleepy during the day and have trouble concentrating, stand up. Standing is a big biological driver of alertness.
Once your sleeping problems begin to interfere with your daily activities, or are happening on more days than not, you should consult a doctor.