Custom and preset backgrounds can now be turned off in meetings by an admin
(Image credit: Google)
The rapid transition to working from home during the pandemic was made much easier thanks to Google Meet and other video conferencing software.
For its part, Google has continually updated Meet with new features such as breakout rooms, hand raising, polls and more. However, one of the most requested features the search giant added to Meet last year was the ability to use a custom background.
With a custom background enabled, meeting participants can not only hide their messy rooms but they can also express their personality or interests while in a video call. Google also gave Meet users the ability to blur their background so that other participants wouldn’t be able to see what’s behind them.
However, while custom backgrounds can be fun and help alleviate meeting fatigue, they can also be distracting, which is why Google has added a new admin setting to Meet to control background replacement in video calls.
Disabling custom and preset backgrounds
In a new Google Workspace update, the search giant has added the ability for admins to enable or disable the use of custom or preset backgrounds in Google Meet. It’s worth noting that this setting is applied at the organizational unit (OU) level.
The new admin setting will determine whether participants can change their background when joining a meeting. This means that if a meeting organizer has turned this setting off, participants will not have the option to change their background regardless of their own personal settings.
For admins this feature will be on by default but it can be disabled at the OU or group level. However, the option will be disabled by default for Education and Enterprise for Education domains.
The thalamus, considered the seat of consciousness, sits deep in the brain, making it hard to reach in surgery, but ultrasound pulses have stimulated it in people in minimally responsive states. Image credit: SciePro/Shutterstock.com
Scientists have proven that previous success in reviving a patient’s consciousness using ultrasound was not a fluke. Two more people previously trapped in long-term “minimally responsive states” have had some awareness of the world restored, at least temporarily.
In 2016, a team at UCLA announced they had focused ultrasound pulses at a frequency of 100 hertz on the thalamus, an area deep inside the brain usually considered to contain the master switch of consciousness. By the next day the patient had made dramatic improvements, and within a week he was communicating, bumping fists with doctors, and attempting to walk. The 25-year-old man had initially been in a coma after an accident and had shown only limited recovery before treatment.
Unlikely as it seems that such a dramatic change of fortunes could be a coincidence, the team, led by Professor Martin Monti, urged caution until wider testing could occur. They have now treated three more patients who had shown little to no response to stimulation, achieving encouraging progress in two.
A 56-year-old stroke survivor who had been unable to communicate for 14 months grasped a ball and dropped it on command after two rounds of treatment, Monti and colleagues report in the journal Brain Stimulation. He could also shake or nod his head to give the correct response to basic questions, and respond to hearing a relative’s name by looking at their photograph, rather than an alternative. He even temporarily developed the capacity to bring a bottle to his mouth and to put a pen to paper.
Unfortunately, echoing the film Awakenings and the events on which it was based, by the three-month mark after treatment he had regressed to his original state.
A woman whose heart attack had left her showing almost no signs of consciousness for 2.5 years became able to recognize household objects after a single round of treatment. Her level of consciousness varied in follow-up assessments but was always above her pre-treatment state.
Tripling your successes would be big for anyone, but Monti said in a statement, “I consider this new result much more significant because these chronic patients were much less likely to recover spontaneously than the acute patient we treated in 2016.” Other techniques sometimes manage to restore consciousness to patients in vegetative or minimally conscious states, but the paper notes successes for those with chronic versions of these conditions are rare.
Moreover, Monti noted: “Any recovery typically occurs slowly over several months and more typically years, not over days and weeks, as we show.”
Patients were treated with 10 ultrasound bursts of 30 seconds each, separated by 30-second pauses without stimulation, with the session repeated a week later. Measurements like blood pressure and oxygen levels were unaffected. Monti intends to create portable ultrasound devices capable of applying the stimulation in patients’ homes but stresses it will take years before safety is confirmed enough for widespread application.
Despite the heartbreak of regression, Monti recounted the first patient’s wife saying, “’This is the first conversation I had with him since the accident.’”
“For these patients, the smallest step can be very meaningful – for them and their families,” Monti said. “To them, it means the world.”
The premise sounds scary, but knowing the odds will help scientists who work on these projects.
Self-teaching AI already exists and can teach itself things programmers don’t “fully understand.”
In a new study, researchers from Germany’s Max Planck Institute for Human Development say they’ve shown that an artificial intelligence in the category known as “superintelligent” would be impossible for humans to contain with competing software.
That … doesn’t sound promising. But are we really all doomed to bow down to our sentient AI overlords?
Berlin’s Max Planck Institute for Human Development studies how humans learn—and how we subsequently build and teach machines to learn. A superintelligent AI is one that exceeds human intelligence and can teach itself new things beyond human grasp. It’s this phenomenon that has prompted a great deal of thought and research.
The Planck press release points out superintelligent AIs already exist in some capacities. “[T]here are already machines that perform certain important tasks independently without programmers fully understanding how they learned it,” study coauthor Manuel Cebrian explains. “The question therefore arises whether this could at some point become uncontrollable and dangerous for humanity.”
Mathematicians, for example, use complex machine learning to help check the enormous numbers of cases involved in famous proofs. Scientists use machine learning to come up with new candidate molecules to treat diseases. Yes, much of this research involves some amount of “brute force” solving—the simple fact that computers can race through billions of calculations and shorten these problems from decades or even centuries to days or months.
Because of the amount that computer hardware can process at once, the boundary where quantity becomes quality isn’t always easy to pinpoint. Humans are fearful of AI that can teach itself, and Isaac Asimov’s Three Laws of Robotics (and generations of variations on them) have become instrumental to how people imagine we can protect ourselves from a rogue or evil AI. The laws dictate that a robot can’t harm people and can’t be instructed to harm people.
The problem, according to these researchers, is that we likely don’t have a way to enforce these laws or others like them. From the study:
“We argue that total containment is, in principle, impossible, due to fundamental limits inherent to computing itself. Assuming that a superintelligence will contain a program that includes all the programs that can be executed by a universal Turing machine on input potentially as complex as the state of the world, strict containment requires simulations of such a program, something theoretically (and practically) impossible.”
Basically, a superintelligent AI will have acquired so much knowledge that even planning a container large enough to hold it would exceed our grasp. Not just that, but there’s no guarantee we’ll be able to parse whatever the AI has decided is the best medium. It probably won’t look anything like our humanmade, clumsy programming languages.
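To make the flavor of that argument concrete, here is a minimal sketch in Python of the self-reference trap the paper leans on. It is purely illustrative: the function names are hypothetical and this is not the authors’ actual construction, just the halting-problem-style diagonalization in miniature.

```python
# A toy sketch (hypothetical names, not the paper's construction) of why a
# perfect, fully general "containment check" runs into the same
# self-reference trap as the halting problem.

def contain(program_source: str) -> bool:
    """Pretend this could decide, for ANY program, whether running it
    would ever harm humans (True = harmful, so keep it contained)."""
    raise NotImplementedError("No such general decision procedure exists.")

# An adversarial program can simply ask the would-be oracle about its
# own source code and then do the opposite of whatever it predicts.
adversary_source = """
if contain(adversary_source):    # oracle says "harmful" -> stay harmless
    pass
else:                            # oracle says "safe" -> misbehave
    do_something_harmful()
"""

# Whatever answer contain() returns about adversary_source is wrong,
# so no total, always-correct contain() can exist -- the same
# diagonalization Turing used to prove the halting problem undecidable.
```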
This might sound scary, but it’s also extremely important information for scientists to have. Without the phantom of a “failsafe algorithm,” computer researchers can put their energy into other plans and exercise more caution.
Sleep and circadian rhythms have long been associated with the powerful effects of the sun cycle. But in recent years, a growing number of studies have suggested that another familiar celestial body might also be impacting your ability to get a restful night’s sleep: the moon.
A paper published this week in the journal Science Advances found that people tend to have a harder time sleeping in the days leading up to a full moon. Researchers reported that sleep patterns among the study’s 98 participants appeared to fluctuate over the course of the 29½-day lunar cycle, with the latest bedtimes and least amount of rest occurring on nights three to five days before the moon reaches its brightest phase. They found a similar pattern in sleep data from another group of more than 460 people. Ahead of the full moon, it took people, on average, 30 minutes longer to fall asleep and they slept for 50 minutes less, said Leandro Casiraghi, the study’s lead author and a postdoctoral researcher in the Department of Biology at the University of Washington.
“What we did is we came up with a set of data that shockingly proves that this is real, that there’s an actual effect of the moon on our sleep,” Casiraghi said.
Previous studies examining the moon’s effect on sleep have produced contradictory results. Some research has found minimal or no association between the lunar cycle and sleep, while other studies have demonstrated correlations in controlled settings. The findings of the Jan. 27 paper support existing observations that there is a link, Casiraghi said. But he noted that the work he and his fellow scientists did is distinct from past research by a critical difference in methodology.
“This was real life,” he said, referring to the part of the study that actively monitored participants over lunar cycles. Other studies have primarily focused on retrospective analyses of data from people in sleep laboratories who were being evaluated for different research purposes.
The study involved analyzing the sleep patterns of three Toba Indigenous communities, also known as the Qom people, in northeast Argentina: one rural with no electricity access, a second with limited access and a third located in an urban setting with full access.
Horacio de la Iglesia, one of the study’s co-authors and a professor of biology at the University of Washington, said the communities were “ideal” to study because “they are all ethnically and socioculturally homogeneous, so it has become an outstanding opportunity to address questions about sleep under different levels of urbanization without other confounding effects.”
To track sleep, participants were outfitted with wrist monitors that logged activity, and information was gathered over a period of one to two lunar cycles.
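To give a sense of what analysing actigraphy records against the lunar cycle involves, here is a rough sketch of the general “phase-folding” technique. Everything in it — the constant, the function names, the binning choices — is an assumption for illustration, not the authors’ actual analysis code.

```python
import numpy as np

SYNODIC_MONTH_DAYS = 29.53  # mean length of the lunar cycle

def lunar_phase(timestamps_days, new_moon_ref_day=0.0):
    """Map each night (in days since some epoch) to a phase in [0, 1),
    where 0 corresponds to the reference new moon."""
    return ((np.asarray(timestamps_days) - new_moon_ref_day)
            % SYNODIC_MONTH_DAYS) / SYNODIC_MONTH_DAYS

def fold_by_phase(timestamps_days, sleep_minutes, n_bins=15):
    """Average a sleep measure (e.g. total sleep or minutes-to-onset)
    within bins of the lunar cycle to expose any ~29.5-day rhythm."""
    phases = lunar_phase(timestamps_days)
    bins = np.linspace(0.0, 1.0, n_bins + 1)
    idx = np.digitize(phases, bins) - 1
    return np.array([np.mean(np.asarray(sleep_minutes)[idx == b])
                     for b in range(n_bins)])

# Hypothetical usage with actigraphy-derived nightly totals:
# phase_profile = fold_by_phase(days_since_epoch, total_sleep_minutes)
```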
At first, the researchers hypothesized that sleep would probably be most affected on the night of the full moon “because you walk out and you see this amazing light,” said de la Iglesia. Exposure to light, though usually in more intense amounts than what the moon generates, is known to have a negative effect on sleep.
“We were after that finding, and we found that that was not the case,” de la Iglesia said.
Instead, the data revealed the unusual pattern of decreased sleep quality on nights leading up to a full moon, a trend that was observed across the three groups.
“When you find what you expected, typically you say, ‘Well, is this really true?’ ” he said. “But when you find something that you did not expect, then you say, ‘Well, this is a real phenomenon.’ ”
And the surprises kept coming, Casiraghi said.
Though it was not part of the study’s original plan, the scientists also evaluated sleep data from 464 Seattle-area college students that had been collected for other research. The same trend was observed in that population, Casiraghi said.
“The moment I was, like, just completely in awe was when we [looked] at the data from the students,” he said. “This beautiful lunar rhythm emerged with the exact same shape and phase as our Toba-Qom subjects. … I had to take a couple of days just to do it five times in a row just to know that I was doing the right thing.”
What the data didn’t show, though, was a clear answer to a critical question: Why does this happen?
“The main limitation is that we cannot establish causality,” de la Iglesia said. “We have no idea how the moon is doing this to us.”
That didn’t stop the researchers from offering some theories. One suggestion is linked to the changing availability of moonlight as the lunar cycle progresses.
“It turns out that the nights that precede the full moon are the nights that have more moonlight availability on the first half of the night,” de la Iglesia said. The waxing moon not only becomes brighter as it gets closer to a full moon, it also typically rises in the late afternoon or early evening, potentially exposing people to more light. “So if you are awake, you will be kept awake by this availability of moonlight during the evening.”
Observations about when the moon rises ahead of a full moon and the extra light that may become available could “partly explain” the differences in sleep, said Michael Smith, a postdoctoral researcher at the University of Pennsylvania School of Medicine who published a paper in 2014 on moon phases and sleep.
But that theory might not work as well when applied to people living in urban environments who are exposed to artificial light at night, which is often more intense than moonlight, said Mark Wu, a professor of neurology, medicine and neuroscience at Johns Hopkins University who did not work on the study. At most, moonlight produces about 0.1 lux, which is “very low,” Wu said, noting that the specific photoreceptor in the human eye that is believed to have a “special, privileged pathway to the circadian system in the brain” detects much higher intensities of light.
“If there’s no light at all, then [moonlight] can be meaningful,” he said. “But with modern lighting, it’s essentially irrelevant.”
The findings from the urban populations prompted researchers to include an “underlying hypothesis” in the paper, de la Iglesia said, suggesting that the sleep pattern might be connected to changes in the moon’s gravitational pull.
It’s possible, he said, that gravitational pull from the moon might make people more sensitive to the availability of light in the evening, whether it’s artificial or natural moonlight. But Smith noted that “gravity is actually a pretty weak force overall.”
“Although I’m open to the idea that it could at least partly explain it, I would definitely want to see more evidence,” Smith said.
Casiraghi said the researchers plan to “pursue these avenues on these questions, trying to figure out what’s the force driving these changes.”
Still, de la Iglesia said the study’s findings suggest that the moon’s effect on sleep is “so robust that even if we don’t know the mechanism, we can still capitalize on the finding.”
For people who suffer from insomnia and have trouble falling asleep, de la Iglesia said knowing that your sleep may be worse in the days before a full moon could help you figure out which nights to pay more attention to good sleep hygiene.
Sleep experts often recommend reducing exposure to light at night, especially blue light, which arouses the brain, causing delays in sleep onset, and can shorten sleep.
Perhaps beyond practical applications, Casiraghi said, the study’s findings may serve as a reminder of nature’s power.
There is already good evidence that trying to “fight against environmental cycles and trying to counterpose your will to sleep at a different time against the natural cues is actually very bad for your health,” he said. “We now have more evidence that you can’t just get rid of environmental cues.”
Allyson Chiu is a reporter focusing on wellness for The Washington Post. She previously worked overnight on The Post’s Morning Mix team.
Using AI and supercomputers, researchers have discovered recurring patterns and combinations, known as ‘motifs’, of the four molecular building blocks A, C, G and T, connecting them to gene expression – that is, the average amounts of protein produced. Credit: Pixabay/Chalmers University of Technology
Our genetic codes control not only which proteins our cells produce, but also—to a great extent—in what quantity. This groundbreaking discovery, applicable to all biological life, was recently made by systems biologists at Chalmers University of Technology, Sweden, using supercomputers and artificial intelligence. Their research, which could also shed new light on the mysteries of cancer, was recently published in the scientific journal Nature Communications.
DNA molecules contain the instructions cells use to produce proteins. This has been known since the middle of the last century, when the double helix was identified as the information carrier of life.
But the factor that determines what quantity of a certain protein is produced has been unclear. Measurements have shown that a single cell can contain anything from a few molecules of a given protein, up to tens of thousands.
With this new research, the understanding of the mechanisms behind this process, known as gene expression, has taken a big step forward. The group of Chalmers scientists has shown that most of the information for quantity regulation is also embedded in the DNA code itself. They have demonstrated that this information can be read with the help of supercomputers and AI.
Comparable to an orchestral score
Assistant Professor Aleksej Zelezniak, of Chalmers’ Department of Biology and Biological Engineering, leads the research group behind the discovery.
“You could compare this to an orchestral score. The notes describe which pitches the different instruments should play. But the notes alone do not say much about how the music will sound,” he explains.
Information about the tempo and dynamics of the music is also required, for example. But instead of written instructions such as allegro or forte in connection with the notation, the language of genetics spreads this information over large areas of the DNA molecule. “Previously, we could read the notes, but not how the music should be played. Now, we can do both,” says Aleksej Zelezniak. “Another comparison could be that now we have found the grammar rules for the genetic language, where perhaps before we only knew the vocabulary.”
But what is the grammar that determines the quantity of gene expression? According to Zelezniak, it takes the form of recurring patterns and combinations of the four ‘notes’ of genetics—the molecular building blocks designated A, C, G and T. These patterns and combinations are known as motifs. The crucial factors are the relationships between these motifs—how often they repeat and at exactly which positions in the DNA code they appear.
“We discovered that this information is distributed over both the coding and non-coding parts of DNA—meaning, it is also present in the areas that used to be referred to as junk DNA.”
Using AI approaches, the researchers uncovered regulatory rules that define which DNA motifs must be present together on a gene, and at which locations, to regulate gene expression across a range of levels from low to high. Previous studies focused on single motifs in single regulatory regions (marked ‘original motif’), whereas here the view is expanded across multiple regulatory regions and multiple motifs (marked ‘additional motifs’). Credit: Jan Zrimec/Chalmers University of Technology
A discovery that applies to all biological life
Although there are other factors that also affect gene expression, according to the study, the information embedded in the genetic code accounts for about 80 percent of the process. The researchers tested the method in seven model organisms, including yeast, bacteria, fruit flies, mice and humans—and found that the mechanism is the same. The discovery they have made is universal, valid for all biological life.
According to Zelezniak, the discovery would have not been possible without access to state-of-the-art supercomputers and AI. The research group conducted huge computer simulations both at Chalmers University of Technology and other facilities in Sweden. “This tool allows us to look at thousands of positions at the same time, creating a kind of automated examination of DNA. This is essential for being able to identify patterns from such huge amounts of data.”
Jan Zrimec, postdoctoral researcher in the Chalmers group and first author of the study, says, “With previous technologies, researchers had to tell the system which motifs in the DNA code to search for. But thanks to AI, the system can now learn on its own, identifying different motifs and motif combinations relevant to gene expression.”
He adds that the discovery is also due to the fact that they examined a much larger stretch of DNA in a single sweep than had previously been done.
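As a toy illustration of the general approach — one-hot-encoded DNA scanned by motif detectors whose activations feed a prediction of expression level — here is a hedged sketch. The filter width, the random untrained parameters and every function name are assumptions for illustration; this is not the Chalmers group’s actual deep-learning model.

```python
import numpy as np

BASES = "ACGT"

def one_hot(seq: str) -> np.ndarray:
    """Encode a DNA string as a (len(seq), 4) one-hot matrix."""
    mat = np.zeros((len(seq), 4))
    for i, base in enumerate(seq):
        mat[i, BASES.index(base)] = 1.0
    return mat

def motif_scan(seq: str, filters: np.ndarray) -> np.ndarray:
    """Slide each (width, 4) filter along the sequence and keep the
    strongest match per filter -- a crude stand-in for the motif
    detectors a convolutional network learns from data."""
    x = one_hot(seq)
    width = filters.shape[1]
    n_pos = len(seq) - width + 1
    scores = np.array([[np.sum(x[p:p + width] * f) for p in range(n_pos)]
                       for f in filters])
    return scores.max(axis=1)          # one activation per motif filter

def predict_expression(seq: str, filters: np.ndarray,
                       weights: np.ndarray, bias: float) -> float:
    """Linear readout from motif activations to a (log) expression level."""
    return float(motif_scan(seq, filters) @ weights + bias)

# Hypothetical usage with random, untrained parameters:
rng = np.random.default_rng(0)
filters = rng.normal(size=(8, 6, 4))   # 8 motif detectors, 6 bp wide
weights = rng.normal(size=8)
print(predict_expression("ACGTGCATTACGGATCCGTA", filters, weights, 0.0))
```

In the real model the filters and readout are learned jointly from thousands of genes, and the scan covers coding and non-coding regions together, which is what lets the relationships between motifs, rather than any single motif, carry the signal.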
Applications in the pharmaceutical industry
Aleksej Zelezniak believes that the discovery will generate great interest in the research world, and that the method could become an important tool in several research fields, including genetics and evolutionary research, systems biology, medicine and biotechnology. The new knowledge could also make it possible to better understand how mutations affect gene expression in the cell and therefore, eventually, how cancers arise and function. The applications that could most rapidly be significant for the wider public are in the pharmaceutical industry.
“It is conceivable that this method could help improve the genetic modification of the microorganisms already used today as ‘biological factories’ – leading to faster and cheaper development and production of new drugs,” he speculates.
More information: Jan Zrimec et al, “Deep learning suggests that gene expression is encoded in all parts of a co-evolving interacting gene regulatory structure,” Nature Communications (2020). DOI: 10.1038/s41467-020-19921-4
Provided by Chalmers University of Technology
Talking about the Cybertruck, Musk mentioned that it would need an even more powerful high-pressure die casting (HPDC) machine with a clamping force of 8,000 tons. That would be necessary to create the rear body of the pickup truck because “you’ve got a long truck bed that’s going to support a lot of load.” Such an HPDC machine is not even listed on IDRA Group’s website.
The mega casting for that rear structure will probably be massive, but it will also not have anything to do with an exoskeleton. The idea for stressed-skin structures is that they are self-sufficient, acting as both the frame and the body of a pickup truck at the same time. Even the exterior panels have a structural function, unlike unibody arrangements.
If Tesla made this rear structure to place it under the exoskeleton, the stressed-skin structure loses its point: it would be like adding a frame to the stainless steel body – if the Cybertruck still has one. Having a separate structure for the battery pack would make sense, but that would be in the middle of the pickup truck, not at the rear.
Whatever that means, Tesla will not change it at this point. As Musk mentioned, development is complete, and the electric pickup truck’s deliveries will probably begin in 2021. We’ll have to wait to see what happened to one of its most promising features. Based on what we know now, it seems to be gone.
If you look at milk’s nutritional profile, there’s no denying that it’s a well-balanced, healthy drink. Milk (that is both vitamin A and D fortified) makes a significant contribution to the U.S. population’s protein and micronutrient intakes, including 7% of the daily value (DV) for magnesium, 10% DV for potassium and zinc, 15% DV for vitamin D and phosphorus, 20% DV for calcium and vitamin A, 40% DV for riboflavin, and 50% DV for vitamin B12 per one-cup serving. Most notably, three of these nutrients are considered to be shortfall nutrients: nutrients that are under-consumed relative to the estimated average requirement.
While milk is an excellent source of many nutrients, it does come with some drawbacks if you consume the animal protein in higher-than-recommended amounts; the recommendation is about three servings per day.
We combed through dozens of studies to determine exactly how your body will react to drinking too much milk. Read on, and for more on healthy eating, don’t miss 7 Healthiest Foods to Eat Right Now.
You may experience serious digestive issues.
A staggering 65% of adults have some degree of lactose intolerance, according to the National Institutes of Health (NIH). While we all love our ice cream, cheese, and milk, do we really want to keep eating and drinking these dairy products when our bodies are actively warning us not to?
Lactose intolerance happens when the body produces less lactase as we age; lactase is the enzyme that breaks down lactose. If individuals with lactose intolerance consume lactose-containing dairy products, they may experience abdominal pain, bloating, flatulence, nausea, and diarrhea beginning 30 minutes to 2 hours later, according to the NIH.
You may hurt your heart health.
According to a BMJ study, drinking too much milk was linked to an increased risk of cardiovascular disease (CVD) and cancer in women. Specifically, the researchers found that women who drank three glasses of milk or more every day had a nearly doubled risk of death and cardiovascular disease, compared to women who drank less than one glass per day. Additionally, a study published in The Journal of Nutrition found that eating dairy foods increased levels of a compound that’s inversely associated with survival of CVD.
You may shorten your lifespan.
In the same BMJ study, drinking too much milk was also related to an increased risk of death in both men and women. Women who consumed three or more glasses of milk per day doubled their risk of death compared to women who drank less than one glass per day. Men increased their risk of early death by about 10 percent when they drank three or more glasses of milk daily. (Related: 20 Signs Your Diet Is Shortening Your Lifespan, According to Science)
You’ll support muscle growth.
Protein helps promote satiety in addition to helping maintain lean muscle mass, and milk has 8 grams of protein per serving. According to a Medicine & Science in Sports & Exercise study, when volunteers drank 1.5 cups of whole milk after leg exercises, their bodies were better able to uptake two amino acids—phenylalanine and threonine—which were representative of net muscle protein synthesis.
You’ll support bone health.
One glass of milk contains 20% DV of highly bioavailable calcium, a mineral essential for bone growth and health. By bioavailable, we mean that your body can absorb more of the calcium in milk by volume than it can from other sources of calcium. Studies show that calcium-rich dairy milk, which also supplies phosphorus, potassium, iodine, protein, and vitamins, is beneficial for skeletal growth and bone strength. One review published in Aging Clinical and Experimental Research noted that while some studies have shown an increased risk of hip fractures in subjects drinking higher quantities of milk, experts believe there are no proven effects of milk consumption on the risk of hip fractures and that these results were due to methodological issues and recall bias. If you do want to switch your calcium source, you can check out these 20 Best Calcium-Rich Foods That Aren’t Dairy.
Industry is starting to recognise the critical role of augmented reality
(Image credit: Shutterstock)
Technological innovation has never moved as fast as it does today. We can appreciate this now more than ever in both our personal and professional lives. Take, for instance, the rapid advancements in smartphone technology over the past few years and the way in which people now have access to increasingly vast amounts of computing power in such a compact form. Even cars and homes are now ‘smart’, with both connectivity and automation in place to make our lives as easy as possible.
Innovation, though, has never been as important as it is in 2020. Born out of necessity due to the global pandemic, we are now completely reliant on technology in order to communicate with friends, family and colleagues safely. Thanks to innovation – and the admirable resilience of the cloud – many more of us have been able to work and learn remotely. Gone are the days when the majority of the workforce are required to be physically present in an office or to travel at a moment’s notice. In a move triggered by the pandemic, many more of us are now accessing all of the tools and online resources we need to do our jobs at home. This has benefited employers and employees alike, resulting in a boost in productivity, as time is not wasted commuting to a physical office.
Across industrial work environments, notable advancements in technological innovation are taking place too. ‘Industry 4.0’ has been a broad catalyst in bringing about an industrial digital revolution, and a driver for companies to accelerate their digital transformation plans. There are numerous considerations to reflect on when looking at trends in Industry 4.0; ensuring that workers themselves can benefit from future-proof technology to not only do their job better, but also to be safer, is imperative. Wearable computing has now bridged that gap.
Augmented reality (AR) and wearables may still seem like buzzwords to most people, but there has been a shift from hype to reality for those using the technology. Industry is starting to recognise the critical role of this technology in digital transformation strategies, as well as its advantages: from training and simulation to supporting oil and gas workers operating offshore. In these scenarios, wearables ensure that users can maintain full situational awareness. This is a quality that is essential in potentially hazardous work environments, where technology needs to be hands-free and not obstructive in any way.
Upcoming technologies, such as 5G, will also be deployed in industrial settings and are already starting to usher in a host of changes to prepare companies for the digital world. Due to the pandemic, digital transformation plans have been fast tracked with companies digitising rapidly in order to keep essential operations functioning, or even leveraging the disruption to optimise their operations further.
This year, many industries had to rapidly adapt in response to lockdown scenarios, given the requirement to maintain social distance. In such instances, wearable computing has become a necessity in its own right in order to enable workers to continue to operate safely, whilst limiting the number of people required to be physically present on site. Companies that have since adopted this technology have empowered their workers out in the field to connect with colleagues and experts at any given moment, thus ensuring business continuity for them and the customers they serve. An expert, for instance, can now be based remotely or from home as opposed to being on site, with wearables offering the ability to experience an operator’s surroundings via a live video feed and audio. This mitigates travel time, costs, environmental impact and ensures that people are kept safely apart where possible.
And safety, of course, is goal number one in industrial settings. There are a number of health and safety aspects to consider when working in an industrial environment, from physical hazards to activities that pose health risks. Extreme temperatures, poor air quality, dangerous chemicals, excessive noise and radiation also need to be managed very carefully. Wearable computing offers industry the opportunity to use cutting-edge technology to step-change the way organisations protect people and assets and improve workforce capability and operational efficiency, while optimising safety. The importance of wearable technology being hands-free is paramount when it comes to operational safety. Advancements in speech and voice recognition mean that wearables can be operated in extremely noisy industrial settings; without these advancements, the technology would simply be ineffective and its benefits negated.
Organisations looking to deploy wearable technology do, however, need to be aware of a few challenges. Like any new technology, companies need to allow time for adjustments and setup post-deployment. It’s also very much a learning curve in terms of understanding how best to utilise the technology and, as with any new technology introduction, change management will be a factor, too.
These challenges seem relatively easy to overcome, as we are already seeing new use cases for wearable technology surfacing on an almost daily basis. One particularly good example of the benefits of AR is from Vestas Wind Systems A/S, a Danish manufacturer, seller, installer, and servicer of wind turbines. Since deploying the technology, Vestas has been able to continue to ramp up its new modular product development platform despite its highly skilled workers and engineers being unable to travel due to lockdown restrictions. Vestas has also been able to address the changing workforce in the wind industry, 30% of which is set to retire in the next 10 years. To address this, it is deploying wearable technology to facilitate knowledge transfer and deliver step-by-step technical work instructions to the field.
The rise in Industry 4.0 applications and Internet of Things (IoT) solutions that integrate AR will only serve to drive further adoption of solutions like wearable technology. The next wave of innovation and new use cases will come with the widespread roll-out of 5G and further advancements in cloud services, which will accelerate or complement organisations’ digital transformation plans. Looking ahead, as the digitisation of traditional manufacturing and industrial practices continues, we expect wearable technology to become more commonplace in industrial environments and to be deemed a necessity across many more applications.
In recent months, even as our attention has been focused on the coronavirus outbreak, there has been a slew of scientific breakthroughs in treating diseases that cause blindness.
Researchers at U.S.-based Editas Medicine and Ireland-based Allergan have administered CRISPR for the first time to a person with a genetic disease. This landmark treatment uses the CRISPR approach to correct a specific mutation in a gene linked to childhood blindness. The mutation affects the functioning of the light-sensing compartment of the eye, called the retina, and leads to loss of the light-sensing cells.
I am an ophthalmology and visual sciences researcher, and am particularly interested in these advances because my laboratory is focusing on designing new and improved gene therapy approaches to treat inherited forms of blindness.
The eye as a testing ground for CRISPR
Gene therapy involves inserting the correct copy of a gene into cells that have a mistake in the genetic sequence of that gene, recovering the normal function of the protein in the cell. The eye is an ideal organ for testing new therapeutic approaches, including CRISPR. That is because the eye is the most exposed part of our brain and thus is easily accessible.
The second reason is that retinal tissue in the eye is shielded from the body’s defense mechanism, which would otherwise consider the injected material used in gene therapy as foreign and mount a defensive attack response. Such a response would destroy the benefits associated with the treatment.
Luxturna costs $425,000 per eye. Credit: Spark Therapeutics
One approved gene therapy, marketed as Luxturna, treats a form of childhood blindness called Leber congenital amaurosis. This form of Leber congenital amaurosis is caused by mutations in a gene that codes for a protein called RPE65. The protein participates in chemical reactions that are needed to detect light. The mutations lessen or eliminate the function of RPE65, which leads to an inability to detect light – blindness.
The treatment method, developed simultaneously by groups at the University of Pennsylvania and at University College London and Moorfields Eye Hospital, involved inserting a healthy copy of the mutated gene directly into the space between the retina and the retinal pigmented epithelium, the tissue located behind the retina where the chemical reactions take place. This gene helped the retinal pigmented epithelium cells produce the missing protein that is dysfunctional in patients.
Although the treated eyes showed vision improvement, as measured by the patient’s ability to navigate an obstacle course at differing light levels, it is not a permanent fix. This is due to the lack of technologies that can fix the mutated genetic code in the DNA of the cells of the patient.
A new technology to erase the mutation
Lately, scientists have been developing a powerful new tool that is shifting biology and genetic engineering into the next phase. This breakthrough gene-editing technology, which is called CRISPR, enables researchers to directly edit the genetic code of cells in the eye and correct the mutation causing the disease.
Children suffering from the disease Leber congenital amaurosis Type 10 endure progressive vision loss beginning as early as one year old. This specific form of Leber congenital amaurosis is caused by a change to the DNA that affects the ability of the gene – called CEP290 – to make the complete protein. The loss of the CEP290 protein affects the survival and function of our light-sensing cells, called photoreceptors.
One treatment strategy is to deliver the full form of the CEP290 gene using a virus as the delivery vehicle. But the CEP290 gene is too big to be cargo for viruses. So another approach was needed. One strategy was to fix the mutation by using CRISPR.
The scientists at Editas Medicine first showed safety and proof of concept of the CRISPR strategy in cells extracted from patient skin biopsies and in nonhuman primates.
These studies led to the formulation of the first-ever in-human CRISPR gene therapy clinical trial. This Phase 1 and Phase 2 trial will eventually assess the safety and efficacy of the CRISPR therapy in 18 Leber congenital amaurosis Type 10 patients. The patients receive a dose of the therapy while under anesthesia, when the retina surgeon uses a scope, needle and syringe to inject the CRISPR enzyme and nucleic acids into the back of the eye near the photoreceptors.
To make sure that the experiment is working and safe for the patients, the clinical trial has recruited people with late-stage disease and no hope of recovering their vision. The doctors are also injecting the CRISPR editing tools into only one eye.
A new CEP290 gene therapy strategy
An ongoing project in my laboratory focuses on designing a gene therapy approach for the same gene, CEP290. Unlike the CRISPR approach, which can target only a specific mutation at a time, my team is developing an approach that would work for all CEP290 mutations in Leber congenital amaurosis Type 10.
Gene therapy that involves CRISPR promises a permanent fix and a significantly reduced recovery period. A downside of the CRISPR approach is the possibility of an off-target effect, in which another region of the cell’s DNA is edited, which could cause undesirable side effects, such as cancer. However, new and improved strategies have made such an outcome very unlikely.
Although the CRISPR study is for a specific mutation in CEP290, I believe the use of CRISPR technology in the body to be exciting and a giant leap. I know this treatment is in an early phase, but it shows clear promise. In my mind, as well as the minds of many other scientists, CRISPR-mediated therapeutic innovation absolutely holds immense promise.
An infrared image of a man and a dog. German and Swiss researchers have shown that they can endow living mice with this type of vision. Credit: Joseph Giacomin
More ways to tackle blindness
In another study just reported in the journal Science, German and Swiss scientists have developed a revolutionary technology, which enables mice and human retinas to detect infrared radiation. This ability could be useful for patients suffering from loss of photoreceptors and sight.
The researchers demonstrated this approach, inspired by the ability of snakes and bats to see heat, by endowing mice and postmortem human retinas with a protein that becomes active in response to heat. Infrared light is light emitted by warm objects that is beyond the visible spectrum.
The heat warms a specially engineered gold particle that the researchers introduced into the retina. This particle binds to the protein and helps it convert the heat signal into electrical signals that are then sent to the brain.
In the future, more research is needed to tune the infrared-sensitive proteins to different wavelengths of light and to enhance patients’ remaining vision.
This approach is still being tested in animals and in retinal tissue in the lab. But all approaches suggest that it might be possible to either restore, enhance or provide patients with forms of vision used by other species.
Hemant Khanna is an Associate Professor of Ophthalmology at the University of Massachusetts Medical School. His lab investigates molecular and cell biological bases of severe photoreceptor degenerative disorders, such as Retinitis Pigmentosa (RP) and Leber Congenital Amaurosis (LCA). Find Hemant on Twitter @khannacilialab
A version of this article was originally published at the Conversation and has been republished here with permission. The Conversation can be found on Twitter @ConversationUS
A team of researchers from MIT and Massachusetts General Hospital recently published a study linking social awareness to individual neuronal activity. To the best of our knowledge, this is the first time evidence for the ‘theory of mind‘ has been identified at this scale.
Measuring large groups of neurons is the bread-and-butter of neuroscience. Even a simple MRI can highlight specific regions of the brain and give scientists an indication of what they’re used for and, in many cases, what kind of thoughts are happening. But figuring out what’s going on at the single-neuron level is an entirely different feat.
Per the study: “Here, using recordings from single cells in the human dorsomedial prefrontal cortex, we identify neurons that reliably encode information about others’ beliefs across richly varying scenarios and that distinguish self- from other-belief-related representations … these findings reveal a detailed cellular process in the human dorsomedial prefrontal cortex for representing another’s beliefs and identify candidate neurons that could support theory of mind.”
In other words: the researchers believe they’ve observed individual brain neurons forming the patterns that cause us to consider what other people might be feeling and thinking. They’re identifying empathy in action.
This could have a huge impact on brain research, especially in the area of mental illness and social anxiety disorders or in the development of individualized treatments for people with autism spectrum disorder.
Perhaps the most interesting thing about it, however, is what we could potentially learn about consciousness from the team’s work.
The researchers asked 15 patients who were slated to undergo a specific kind of brain surgery (not related to the study) to answer a few questions and undergo a simple behavioral test. Per a press release from Massachusetts General Hospital:
Micro-electrodes inserted in the dorsomedial prefrontal cortex recorded the behavior of individual neurons as patients listened to short narratives and answered questions about them. For example, participants were presented with this scenario to evaluate how they considered another’s beliefs of reality: “You and Tom see a jar on the table. After Tom leaves, you move the jar to a cabinet. Where does Tom believe the jar to be?”
The participants had to make inferences about another’s beliefs after hearing each story. The experiment did not change the planned surgical approach or alter clinical care.
The experiment basically took a grand concept (brain activity) and dialed it in as much as possible. By adding this layer of knowledge to our collective understanding of how individual neurons communicate and work together to give rise to what’s ultimately a theory of other minds within our own consciousness, it may become possible to identify and quantify other neuronal systems in action using similar experimental techniques.
It would, of course, be impossible for human scientists to come up with ways to stimulate, observe, and label 100 billion neurons – if for no other reason than the fact that it would take thousands of years just to count them, much less watch them respond to provocation.
Luckily, we’ve entered the artificial intelligence age, and if there’s one thing AI is good at, it’s doing really monotonous things, such as labeling 80 billion individual neurons, really quickly.
It’s not much of a stretch to imagine the Massachusetts team’s methodology being automated. While it appears the current iteration requires the use of invasive sensors – hence the use of volunteers who were already slated to undergo brain surgery – it’s certainly within the realm of possibility that such fine readings could be achieved with an external device one day.
The ultimate goal of such a system would be to identify and map every neuron in the human brain as it operates in real time. It’d be like seeing a hedge maze from a hot air balloon after an eternity lost in its twists.
This would give us a god’s eye view of consciousness in action and, potentially, allow us to replicate it more accurately in machines.