https://www.genengnews.com/news/sleep-timing-disrupts-brains-waste-disposal-may-increase-risk-of-neurological-disorders/

Sleep Timing Disrupts Brain’s Waste Disposal, May Increase Risk of Neurological Disorders

September 3, 2020

Source: pixabay.com

The results of studies by researchers at the University of Rochester Medical Center (URMC) have outlined how the complex set of molecular and fluid dynamics that comprise the glymphatic system—the brain’s self-contained waste removal process—are synchronized with the master internal clock that regulates the body’s sleep-wake cycle. The findings suggest that people who rely on sleeping during daytime hours are at greater risk of developing neurological disorders.

The study data have also provided new insights into the function of the glymphatic system, which was discovered in 2012 by researchers in the lab of neuroscientist Maiken Nedergaard, MD, co-director of the Center for Translational Neuromedicine at URMC. “These findings show that glymphatic system function is not solely based on sleep or wakefulness, but by the daily rhythms dictated by our biological clock,” said Nedergaard. The studies are published in Nature Communications, in a paper titled, “Circadian control of brain glymphatic and lymphatic fluid flow.”

Sleep is an “evolutionarily conserved biological function, clearing the brain of harmful metabolites such as amyloid beta, which build up during wakefulness, and consolidating memory,” the authors wrote. Acute sleep deprivation has been shown to impair cognitive function, and sleep disruption is often associated with neurodegenerative diseases. “Sleep quality is highly dependent on both sleep deficit and sleep timing controlled by circadian rhythms, 24-hour cycles in gene transcription, cell signaling, physiology, and behavior,” they continued. “Understanding how circadian rhythms contribute to sleep quality is necessary to promote long-term brain health.”

The glymphatic system consists of a network of plumbing that follows the path of blood vessels and pumps cerebrospinal fluid (CSF) through brain tissue, washing away waste. Since its discovery by the Nedergaard team, studies have shown that the glymphatic system primarily functions while we sleep. More recent research has shown the role that blood pressure, heart rate, circadian timing, and depth of sleep play in the glymphatic system’s function, and the chemical signaling that occurs in the brain to turn it on and off. Studies have also indicated that disrupted sleep or trauma can cause the glymphatic system to break down and allow toxic proteins to accumulate in the brain, potentially giving rise to a number of neurodegenerative diseases, such as Alzheimer’s disease. However, the team noted, “Although the cardiovascular system and sleep, two main influencers of the glymphatic system, are tightly regulated by circadian rhythms, it remains unknown whether glymphatic fluid movement is under circadian control.”

The link between circadian rhythms and the glymphatic system is the subject of the newly reported research. Circadian rhythms—essentially a 24-hour internal clock that regulates several important functions, including the sleep-wake cycle—are maintained in a small area of the brain called the suprachiasmatic nucleus. The reported research demonstrated that when mice were anesthetized all day long, their glymphatic system still only functioned during the animals’ typical rest period. Mice are nocturnal, so their sleep-wake cycle is the opposite of humans’. “ … we show the difference in glymphatic function is not solely based on arousal state, but exhibits a daily rhythm that peaks mid-day when mice are most likely to sleep,” the authors noted. “ … We here show glymphatic influx and clearance exhibit endogenous, circadian rhythms peaking during the mid-rest phase of mice.”

“Circadian rhythms in humans are tuned to a day-wake, night-sleep cycle,” explained Lauren Hablitz, PhD, first author of the study and a research assistant professor in the URMC Center for Translational Neuromedicine. “Because this timing also influences the glymphatic system, these findings suggest that people who rely on cat naps during the day to catch up on sleep or work the night shift may be at risk for developing neurological disorders. In fact, clinical research shows that individuals who rely on sleeping during daytime hours are at much greater risk for Alzheimer’s and dementia along with other health problems.”

The team further noted, “Although glymphatic function has yet to be studied in models of circadian disruption, such as in shiftwork, it has been established that shift workers are at increased risk for neurodegenerative disorders, cardiovascular disease, and exhibit increased markers of systemic inflammation.”

The study singled out cells called astrocytes, which play multiple functions in the brain. It is believed that astrocytes in the suprachiasmatic nucleus help to regulate circadian rhythms. Astrocytes also serve as gatekeepers that control the flow of CSF throughout the central nervous system. The results of the Nedergaard team’s study suggest that communication between astrocytes in different parts of the brain may share the common goal of optimizing the glymphatic system’s function during sleep.

The researchers in addition found that during wakefulness, the glymphatic system diverts CSF to lymph nodes in the neck. Because the lymph nodes are key waystations in the regulation of the immune system, the findings suggest that CSF may represent a “fluid clock” that helps wake up the body’s infection-fighting capabilities during the day.

“Understanding how these rhythms, all with different timing and biological functions, interact to affect glymphatic function and lymphatic drainage may help prevent morbidity associated with circadian misalignment,” they commented.

“Establishing a role for communication between astrocytes and the significant impacts of circadian timing on glymphatic clearance dynamics represent a major step in understanding the fundamental process of waste clearance regulation in the brain,” said Frederick Gregory, PhD, program manager for the Army Research Office, which helped fund the research and is an element of the U.S. Army Combat Capabilities Development Command’s Army Research Laboratory. “This knowledge is crucial to developing future countermeasures that offset the deleterious effects of sleep deprivation and addresses future multi-domain military operation requirements for soldiers to sustain performance over longer periods without the ability to rest.”

https://futurism.com/the-byte/psychophysicists-brain-conscious

PSYCHOPHYSICISTS: YOUR BRAIN MIGHT NOT BE AS CONSCIOUS AS YOU THINK

By Dan Robitzski | Filed under: Hard Science

NPCs

A team of scientists thinks they’ve finally arrived at a model of how consciousness works in the human mind — and in doing so, may have settled a 1,500-year-old debate.

The big issue is whether consciousness is continuous or discrete: Basically, scientists and philosophers have long argued over whether we’re conscious all the time or only in discrete moments. In an opinion piece published Thursday in the journal Trends in Cognitive Sciences, the scientists say it’s a little bit of both — and their verdict could free up scientists of various disciplines to do their work without butting heads.

And/Or

The scientists, all psychophysicists at Switzerland’s Ecole Polytechnique Fédérale de Lausanne (EPFL), said that there’s a two-step process going on. While our brains are continuously processing information behind the scenes in more of an “unconscious” manner, we’re only actively conscious of that information during discrete moments.

“Conscious processing is overestimated,” lead author Michael Herzog said in a press release. “You should give more weight to the dark, unconscious processing period. You just believe that you are conscious at each moment of time.”

Autopilot

When we ride a bike, Herzog mused, our bodies automatically make minute adjustments to keep from falling over without consciously thinking about it. But even with his team’s two-step model, some of the secondary questions surrounding the ancient debate remain. Questions about how long these discrete moments of consciousness last, or how they differ among people, don’t have answers.

“For what is consciousness needed, and what can be done without it? We have no idea,” Herzog said.

READ MORE: Is consciousness continuous or discrete? Maybe it’s both, argue researchers [Cell Press]

More on consciousness: Artificial Consciousness: How To Give A Robot A Soul


https://www.buckinstitute.org/news/a-metabolite-produced-by-the-body-increases-lifespan-and-dramatically-compresses-late-life-morbidity-in-mice/

A metabolite produced by the body increases lifespan and dramatically compresses late-life morbidity in mice

Middle-aged mice that had the naturally occurring metabolite alpha-ketoglutarate (AKG) added to their chow had a better “old age.” They were healthier as they aged and experienced a dramatically shorter time of disease and disability before they died, a first for research involving mammals. Results from the double-blinded study, published in Cell Metabolism, were based on clinically relevant markers of healthspan.

Previous studies show that blood plasma levels of AKG can drop up to 10-fold as we age.

Fasting and exercise, already shown to promote longevity, increase the production of AKG. AKG is not found in the normal diet, making supplementation the only feasible way to restore its levels.

“The standard for efficacy in research on aging is whether interventions actually improve healthspan. We’ve reached that mark here with a compound that is naturally produced by the body and is generally shown to be safe,” said Buck professor and senior author Gordon Lithgow, PhD. While the treated mice experienced only moderate lifespan extension (around 12% on average), measures of healthspan increased by more than 40 percent. Lithgow says the goal is always to compress the time of disease and frailty. “The nightmare scenario has always been life extension with no reduction in disability. In this study, the treated middle-aged mice got healthier over time. Even the mice that died early saw improvements in their health, which was really surprising and encouraging.”

AKG is involved in many fundamental physiological processes. It contributes to metabolism, providing energy for cellular processes. It helps stimulate collagen and protein synthesis and influences age-related processes including stem cell proliferation. AKG inhibits the breakdown of protein in muscles, making it a popular supplement among athletes. It also has been used to treat osteoporosis and kidney diseases.

“The mice that were fed AKG showed a decrease in levels of systemic inflammatory cytokines,” said Azar Asadi Shahmirzadi, Pharm.D, PhD, Buck postdoctoral fellow and lead scientist on the study. “Treatment with AKG promoted the production of Interleukin 10 (IL-10) which has anti-inflammatory properties and helps maintain normal tissue homeostasis.  Chronic inflammation is a huge driver of aging. We think suppression of inflammation could be the basis for the extension of lifespan and probably healthspan, and are looking forward to more follow up in this regard.” She also added, “We observed no significant adverse effects upon chronic administration of the metabolite, which is very important.”

Asadi said many of the study results were sex specific, with female mice generally faring better than males. Fur color and coat condition were dramatically improved in the treated females; the animals also saw improvement in gait and kyphosis, a curvature of the spine often seen in aging. The females also saw improvements in piloerection, which involves involuntary contraction of small muscles at the base of hair follicles. “That measure relates to pain and how uncomfortable the animal is,” she said. “The treated animals showed an extended ability to groom themselves.” Asadi said male mice treated with AKG were better able to maintain muscle mass as they aged, had improvements in gait and grip strength, less kyphosis and exhibited fewer tumors and better eye health. 

Researchers say the consistent longevity effects of AKG in yeast, C. elegans, and now mice show that the metabolite affects an evolutionarily conserved aging mechanism that is likely to translate to humans. A clinical trial of AKG involving 45- to 65-year-olds is being planned at the Centre for Healthy Longevity at the National University of Singapore (NUS). “This trial will look at the epigenetic clock as well as standard markers of aging, including pulse wave velocity and inflammation, among others,” said Buck professor Brian Kennedy, PhD, who is also the Director of the Centre at NUS and senior co-author of the study. “This opportunity will allow us to go beyond anecdotal evidence. Real clinical data will help inform physicians and consumers eager to improve health within the context of aging.”

Lithgow says basic research in the nematode worm C. elegans started AKG’s journey to human clinical trials, noting that the first evidence that AKG extended lifespan in the microscopic worm came in 2014. “We tested AKG in distinct strains of the worm in 2017 and determined that treatment hit conserved aging pathways in the animals.  The fact that it is poised to be rigorously tested in humans just a few years later shows how quickly research can move from the lab bench to the clinic. Never underestimate the knowledge that comes from studying this tiny worm.” 

Citation: Alpha-ketoglutarate, an endogenous metabolite, extends lifespan and compresses morbidity in aging mice

DOI: 10.1016/j.cmet.2020.08.044

Other Buck Institute researchers involved in the study include Daniel Edgar, Chen-Yu Liao, Yueh-Mei Hsu, Mark Lucanic, Christopher Wiley, Dong Eun Kim, Rebeccah Riley, Brian Kaplowitz, Garbo Gan, Chisaka Kuehnemann, Dipa Bhaumik and Brian K. Kennedy.

This work was supported by The Weldon Foundation and by Ponce de Leon Health, along with funding from the Larry L. Hillblom Foundation, the Navigage Foundation, the Hearst Foundation, NIA Grant U01AG045844, and NIA Grant R01AG051729.

DECLARATION OF INTERESTS

G.J.L. and M.L. are co-founders of Gerostate Alpha, a company aimed at developing drugs for aging, and are shareholders in Ponce de Leon Health.

D.E. and Azar Asadi Shahmirzadi are shareholders in Ponce de Leon Health.

B.K.K. is a board member and equity holder at Ponce de Leon Health. G.J.L., B.K., M.L., D.E., and Azar Asadi Shahmirzadi are named inventors on a preliminary

https://cloud.google.com/blog/products/ai-machine-learning/ai-and-machine-learning-news-from-google-cloud

Empowering teams to unlock the value of AI

Andrew Moore, Head of Google Cloud AI & Industry Solutions | September 1, 2020


As we kick off Cloud AI week at Google Cloud Next: OnAir, you will hear from customers at all stages of the AI journey who are using our tools and solutions to fundamentally change how their businesses are run.

From Etsy, which exemplifies the new era of scaling a business, to deluged government agencies like the Illinois Department of Employment Security to HSBC, one of the largest banks in the world—organizations in every industry are using our Cloud AI services to solve problems and innovate. 

Here are some of the new capabilities in our Cloud AI portfolio driving customer success.

Tools for everyone on your team

We have made it easier for your entire team—from developers, to data scientists and ML engineers—to apply AI to your business.

Even for the ML experts, the long-term success of ML projects hinges on making the jump from science project and analysis to repeatable, scalable operations. Often, analyst teams will hack together an activation process that can be extremely manual and error-prone with too many parameters, decoupled workflow dependencies, and security vulnerabilities. In fact, an entire discipline called MLOps has emerged to solve this issue by operationalizing machine learning workflows.

To improve the MLOps experience, we’re pre-announcing: Prediction backend GA, Managed Pipelines, Metadata, Experiments, and Model Evaluation. These features—part of AI Platform—provide automation and monitoring at all steps of ML system construction, including integration, testing, releasing, deployment and infrastructure management. Read more about MLOps in Key requirements for an MLOps foundation.

One company that has been pulling all of this together using AI to build a more curated shopping experience for their customers is Etsy. Their marketplace includes more than 65 million seller-generated listings. They’re using AI to build sophisticated workflows to help buyers find exactly what they’re searching for, and to deliver enriched recommendations that better reflect their buyers’ unique styles and tastes.

“The sky’s the limit when it comes to our ability to innovate and improve our marketplace, as we leverage the strength and efficiencies we’ve gained through our partnership with Google Cloud,” said Mike Fisher, CTO, Etsy. 

More improvements for MLOps practitioners include Vizier, now in beta, which auto-tunes your model’s hyperparameters to get the best output, and the AI Platform Notebooks service, which is now GA.

We are working hard to bring ML tools and frameworks to citizen data scientists and the talent you have today, to accelerate your time to results. Cloud AI Building Blocks provide access to commonly used models (for vision, translation, speech, etc.) via APIs. And by the end of September, AI Platform will include AutoML as an integrated function in the workflow. This combines the best of no-code and code-based options to build custom ML models faster and with higher quality.

We’re also focused on building industry-specific solutions for application developers that are easy to integrate into existing workflows through partners and are supported with SLAs. The latest of these includes Dialogflow CX, our virtual agent, and Document AI Procure-to-Pay, now in beta (more on both below). 

For many customers, an all-in approach to using cloud services is not an option, which is why we’re extending our AI capabilities to run on-prem. Last week we announced Speech-to-Text On-Prem, the first of our hybrid AI offerings, now generally available. 

Improved experience for your customers

Our expertise and leadership in AI are among the reasons many organizations choose Google Cloud. We are steadily transferring advancements from Google AI research into cloud solutions that help you create better experiences for your customers.

An area that best demonstrates this path from Google AI research to cloud solution to customer success is the work we are doing to advance contact centers. Contact Center AI (CCAI) helps speed up customer requests using virtual agents, assists live agents, and offers insights on all your contact center data to improve your customer interactions.

Telecommunications leader Verizon chose CCAI to create intuitive, consistent customer experiences across all its channels.

“A platform that can handle our scale was critical—in all these areas we saw Google Cloud CCAI excel,” said Shankar Arumugavelu, SVP & CIO at Verizon. 

We’re continuing to invest in CCAI, and today we’re launching more intuitive conversations with Dialogflow CX. This new version of Dialogflow, which is quickly becoming the industry standard for virtual agents, is ideal for companies with large contact centers. It’s designed to support complex (multi-turn) conversations and is truly omnichannel: you build it once and deploy it everywhere, both in your contact center and in digital channels.

“Dialogflow CX brings conversation state management to a whole new level,” said Lukasz Rewerenda, Principal Solutions Architect, Randstad (Netherlands). 

Further improvements to CCAI include: Agent Assist for Chat, a new module for Agent Assist that provides agents with continuous support over “chat” in addition to voice calls, by identifying intent and providing real-time, step-by-step assistance. Read more about the latest updates to CCAI in: Conversational AI drives better customer experiences.

Providing the most value with Deployed AI

Deployed AI is about bridging the expertise gap, which is why we’re investing in a technology stack that takes the risk out of an AI strategy—sparing you the complexity of implementation. We are focused on building functional solutions like Contact Center AI as well as industry-specific solutions like Lending DocAI, now in alpha.

Lending DocAI is a new, specialized solution powered by Document AI for the mortgage industry that processes borrowers’ income and asset documents to speed up loan applications, a notoriously slow and complex process. It automates many of the routine document reviews so mortgage borrowers can focus on the more important decisions. Mortgage service provider Mr. Cooper is a Document AI customer:

“Google Cloud’s Document AI solution helps us make better decisions across our massive library of mortgage-specific documents. We’ve trained over 130 critical mortgage document labels, classified over 100 million pages and achieved over 92% accuracy on our journey so far,” said Madhavi Vellore, VP Product Management, Mr. Cooper. 

Similarly, Procure-to-Pay DocAI, now in beta, helps companies automate one of their highest-volume, highest-value business processes: the procurement cycle. We provide a group of AI-powered parsers, starting with Invoices and Receipts, that take documents in a variety of formats and return cleanly structured data. We’re working closely with customers and partners such as Workday to address their procure-to-pay cycle.

One of the advantages of Google Cloud’s industry-specific solutions is you get great business results from AI without having to hire an army of AI experts to get there.

Another example of industry-specific capabilities you can integrate into your workflows is the Media Translation API, which provides real-time speech translation from streaming audio data. Chinese smartphone manufacturer OnePlus is using the API in its video chat app across countries, time zones and even languages:

“With Google Cloud’s Media Translation API, we are now able to provide real-time streaming translation for video chat with a simple API integration and ensure our customers feel effortlessly connected with minimal latency,” said Gary Chen, Head of Software Product at OnePlus.

In July, we announced the public beta of Recommendations AI, and we have a deep roadmap of these industry-specific solutions coming soon, including Retail forecasting, Anti-money laundering, Know your customer, Healthcare NLP, Media asset management, Industrial adaptive controls, and Visual inspection.

And there’s more to come

Whether you have a team of ML engineers and data scientists looking for tools and frameworks to operationalize and scale their work, or you want to integrate an AI-powered industry or functional solution to solve a particular business problem, or, you want to apply AI to better serve your customers, we have the breadth and depth in AI and machine learning to fit your needs.

For more info, join us at Google Cloud Next OnAir for Cloud AI week, when Principal Software Engineer Ting Lu and VP of Product Management Rajen Sheth take to the stage to talk about generating value with Cloud AI. All the content is available today!

https://news.google.com/foryou?hl=en-CA&gl=CA&ceid=CA%3Aen

Cannabis store to open this week

First comes accessories, then comes cannabis

By Brian Thompson
Published on: September 1, 2020 | Last Updated: September 1, 2020 3:34 PM EDT

Jason Kane, manager of Capturing Eden Cannabis on Queensway West in Simcoe, says the store will open Friday, September 4, to sell accessories only until it gets the go-ahead from the government to order its cannabis stock.

A cannabis store is set to open in Simcoe on Friday, but it will be a bit longer before customers will be able to purchase cannabis.

Jason Kane, manager of Capturing Eden Cannabis at 421 Queensway West, said the store would open as an accessory shop first while awaiting the go-ahead to order their cannabis products.

“We have passed all the compliance stages,” Kane stated. “After a few more inspections, (the government) will give us a date when we can order all our product in.”

The Alcohol and Gaming Commission of Ontario website shows the status for Capturing Eden as “Application in Progress”.

The company will open two stores simultaneously in Haliburton and Owen Sound this month. Kane said he is helping the owners secure a location in Tillsonburg, where he resides, with the aim of opening that store by the end of the year.

The 4,500-square-foot Simcoe store will serve recreational cannabis consumers, selling products only from licensed producers in sealed packages with expiry dates.

Cannabis will be stored in a steel vault room, while 25 cameras in the store and outside will monitor activity. A security guard will check identification upon entry, and anyone who appears to be under the influence will be denied entry.

Kane said more people are getting into edibles and vaporisers because they don’t want to do the traditional rolling and smoking of a “joint,” which often produces an unpleasant odour.

He added that CBD products can be used without producing a high, allowing people to function throughout the day.

“It’s just getting past the stigma,” Kane said. “It’s all about education, and that’s what we’re trying to do here with information people can see and take with them.”

Posters in the store outline the differences in strains of cannabis, the effects when consumed with other substances such as alcohol, and ingesting versus inhaling cannabis.

The store will initially employ three to four people, but once it becomes an official cannabis store, the staff size will increase to between eight and 10.

“We want to make sure our staff – called bud tenders – are well versed in the benefits, so they can ask more questions, get feedback, and make proper recommendations,” Kane said.

When the store opens on September 4, accessories available for purchase will range from bongs, grinders and papers to candles and vaporisers.

“We decided that it made more sense to utilize this time to get people familiar with the fact we are coming, to sell accessories to help offset some of the cost, and to promote and educate about the products we carry,” said the store manager.

He said people are surprised when they start reading about the benefits of cannabis use, and are willing to pay a little more for a safe, legal product that has been regulated and produced with quality control.

“Everyone is used to going to the doctor and pharmacy to get painkillers and opiates that people are getting addicted to,” Kane observed. “Now they have something that is more natural, and may be better for you.”

Another cannabis store is set to open in Simcoe though the exact date is unknown. Flowertown Cannabis, a Brampton-based business, plans to open at the Whitehorse Plaza at the south end of town.

https://towardsdatascience.com/labse-language-agnostic-bert-sentence-embedding-by-google-ai-531f677d775f

LaBSE: Language-Agnostic BERT Sentence Embedding by Google AI

How Google AI Pushed the Limits of Multi-lingual Sentence Embeddings to 109 Languages

Rohan Jagtap

Aug 26 · 5 min read

[Figure: Multilingual Embedding Space, via Google AI Blog]

Multilingual embedding models map text from multiple languages into a shared vector space (or embedding space). In this space, related or similar sentences lie close to each other, while unrelated ones are distant (refer to the figure above).

In this article, we will discuss LaBSE (Language-Agnostic BERT Sentence Embedding), recently proposed in Feng et al., which is the state of the art in sentence embedding.

Existing Approaches

The existing approaches mostly involve training the model on a large amount of parallel data. Models like LASER (Language-Agnostic SEntence Representations) and m-USE (Multilingual Universal Sentence Encoder) essentially map parallel sentences directly from one language to another to obtain the embeddings. They perform pretty well across a number of languages. However, they do not perform as well as dedicated bilingual modeling approaches such as translation ranking (which we are about to discuss). Moreover, due to limited training data (especially for low-resource languages) and limited model capacity, these models struggle to support more languages.

Recent advances in NLP suggest training a language model on masked language modeling (MLM) or a similar pre-training objective and then fine-tuning it on downstream tasks. Models like XLM extend the MLM objective to a cross-lingual setting. These work great on downstream tasks but produce poor sentence-level embeddings due to the lack of a sentence-level objective.

Rather, the production of sentence embeddings from MLMs must be learned via fine-tuning, similar to other downstream tasks.

— LaBSE Paper

Language-Agnostic BERT Sentence Embedding

[Figure: Bidirectional Dual Encoder with Additive Margin Softmax and Shared Parameters, via the LaBSE paper]

The proposed architecture is based on a bidirectional dual encoder (Guo et al.) with additive margin softmax (Yang et al.), with improvements. In the next few sub-sections, we will unpack the model in depth:

Translation Ranking Task

[Figure: Translation Ranking Task, via Google AI Blog]

First things first, Guo et al. use a translation ranking task, which essentially ranks all the target sentences in order of their compatibility with the source. In practice this is not ‘all’ the sentences but some ‘K – 1’ sentences. The objective is to maximize the compatibility between the source sentence and its true translation and minimize it with the others (negative sampling).

Bidirectional Dual Encoder

[Figure: Dual-Encoder Architecture, via Guo et al.]

The dual-encoder architecture essentially uses parallel encoders to encode two sequences and then obtains a compatibility score between the two encodings using a dot product. The model in Guo et al. was trained on a parallel corpus for the translation ranking task discussed in the previous section.

As far as ‘bidirectional’ is concerned, it basically takes the compatibility scores in both ‘directions’, i.e. from source to target as well as from target to source. For example, if the compatibility from source x_i to target y_i is denoted by ɸ(x_i, y_i), then the score ɸ(y_i, x_i) is also taken into account, and the individual losses are summed:

Loss = L + L′
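In code, bidirectionality amounts to applying the same ranking loss to the score matrix and to its transpose (a minimal NumPy sketch under the same in-batch setup; the function names are illustrative):

```python
import numpy as np

def softmax_nll(scores):
    """Mean negative log-likelihood of the diagonal under a row-wise softmax."""
    scores = scores - scores.max(axis=1, keepdims=True)
    log_probs = scores - np.log(np.exp(scores).sum(axis=1, keepdims=True))
    return -np.mean(np.diag(log_probs))

def bidirectional_loss(src_emb, tgt_emb):
    """Loss = L + L': the ranking loss applied in both directions."""
    phi = src_emb @ tgt_emb.T      # phi[i, j] = compatibility of x_i with y_j
    L = softmax_nll(phi)           # source -> target
    L_prime = softmax_nll(phi.T)   # target -> source
    return L + L_prime

rng = np.random.default_rng(0)
x = rng.normal(size=(8, 16))
y = x + 0.05 * rng.normal(size=(8, 16))  # y_i as a noisy stand-in for x_i's translation
total = bidirectional_loss(x, y)
```

Because ɸ(y_i, x_i) is just the transpose of ɸ(x_i, y_i) for dot-product scoring, swapping the arguments leaves the summed loss unchanged.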

Additive Margin Softmax (AMS)

Embedding Space with and without Additive Margin Softmax via Yang et al.

In vector spaces, classification boundaries can be quite narrow, which makes the vectors difficult to separate. AMS introduces a margin parameter m into the original softmax loss to increase the separability of the vectors.

AMS via LaBSE Paper

Notice how the parameter m is subtracted only from the positive sample's score and not from the negatives; this is what widens the classification boundary.

You can refer to this blog if you’re interested in getting a better understanding of AMS.
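A hedged sketch of the AMS modification: relative to a plain softmax ranking loss, the only change is subtracting m from the diagonal (positive) scores, which forces true pairs to score at least m higher than negatives to achieve the same loss (illustrative NumPy, not the paper's code):

```python
import numpy as np

def ranking_loss(phi, margin=0.0):
    """Softmax ranking loss; with margin > 0 this is additive margin softmax:
    the margin is subtracted from the positive (diagonal) scores only."""
    scores = phi - margin * np.eye(phi.shape[0])  # penalize only the true pairs
    scores = scores - scores.max(axis=1, keepdims=True)
    log_probs = scores - np.log(np.exp(scores).sum(axis=1, keepdims=True))
    return -np.mean(np.diag(log_probs))

rng = np.random.default_rng(0)
phi = rng.normal(size=(8, 8)) + 3.0 * np.eye(8)  # positives already score higher
plain = ranking_loss(phi, margin=0.0)
with_margin = ranking_loss(phi, margin=0.3)      # same scores, stricter loss
```

For any fixed score matrix the margin strictly increases the loss, so minimizing it pushes positives further from the boundary than plain softmax would.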

Cross-Accelerator Negative Sampling

Cross-Accelerator Negative Sampling via LaBSE Paper

The translation ranking task requires negative samples: K − 1 sentences that are not compatible translations of the source. These are usually drawn from the rest of the batch; this in-batch negative sampling is depicted in the figure above (left). However, LaBSE uses BERT as its encoder network, and for heavy networks like this it is infeasible to run batch sizes large enough to supply sufficient negative samples. The proposed approach therefore leverages distributed training: batches are encoded on different accelerators (e.g., GPUs) and the resulting embeddings are broadcast across all of them. All the shared batches then serve as negative samples, while only the sentences in the local batch are used as positives. This is depicted in the figure above (right).
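The scheme can be simulated on a single machine: reshape the per-device target batches into one gathered matrix (standing in for an all-gather collective) and index each device's positives by its offset (an illustrative NumPy sketch; real implementations use a distributed all-gather primitive):

```python
import numpy as np

def local_ranking_loss(local_src, gathered_tgt, offset):
    """Score local sources against targets gathered from *all* accelerators.
    The positive for local source i sits at global index offset + i; every
    other gathered target acts as a negative sample."""
    scores = local_src @ gathered_tgt.T              # (b, n_dev * b)
    scores = scores - scores.max(axis=1, keepdims=True)
    log_probs = scores - np.log(np.exp(scores).sum(axis=1, keepdims=True))
    b = local_src.shape[0]
    return -log_probs[np.arange(b), offset + np.arange(b)].mean()

# Simulate 4 "accelerators", each holding a local batch of 8 sentence pairs.
rng = np.random.default_rng(0)
n_dev, b, d = 4, 8, 16
src = rng.normal(size=(n_dev, b, d))
tgt = src + 0.05 * rng.normal(size=(n_dev, b, d))  # noisy stand-ins for translations
gathered = tgt.reshape(n_dev * b, d)               # the simulated all-gather step
losses = [local_ranking_loss(src[k], gathered, k * b) for k in range(n_dev)]
```

Each device now ranks its positives against n_dev × b − 1 negatives instead of b − 1, without any single device having to hold the full global batch through its encoder.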

Pre-Training and Parameter Sharing

Finally, as mentioned earlier, the proposed architecture uses BERT encoders, pre-trained on the Masked Language Model (MLM) objective as in Devlin et al. and the Translation Language Model (TLM) objective as in XLM (Conneau and Lample). Moreover, the encoders are trained using a 3-stage progressive stacking algorithm: an L-layer encoder is first trained with L/4 layers, then L/2 layers, and finally all L layers.
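The progressive stacking schedule can be sketched in a few lines (a toy illustration; the real algorithm transfers the trained weights of the shallower model when deepening, and the layer dictionaries here are placeholders for actual Transformer layers):

```python
def progressive_stacking(num_layers):
    """3-stage schedule: train with L/4 layers, then L/2, then all L."""
    return [num_layers // 4, num_layers // 2, num_layers]

def grow_encoder(layers):
    """Deepen a trained encoder by duplicating its layer stack, so the new
    top half starts from the already-trained weights of the bottom half."""
    return layers + [dict(layer) for layer in layers]

stages = progressive_stacking(12)                # e.g., a 12-layer BERT-base encoder
encoder = [{"id": i} for i in range(stages[0])]  # stage 1: train 3 layers
encoder = grow_encoder(encoder)                  # stage 2: 6 layers, warm-started
encoder = grow_encoder(encoder)                  # stage 3: 12 layers, warm-started
```

The payoff is that most optimization steps are spent on a much shallower (cheaper) model, with the full-depth encoder only trained in the final stage.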

For more on BERT pre-training, you can refer to my blog.

Putting it All Together

LaBSE:

  1. combines the existing approaches, i.e., pre-training and fine-tuning strategies, with a bidirectional dual-encoder translation ranking model.
  2. is a massive model that supports 109 languages.

Results

Average Accuracy (%) on Tatoeba Datasets via Google AI Blog

LaBSE clearly outperforms its competitors with a state-of-the-art average accuracy of 83.7% across all languages.

Zero-Shot Setting via Google AI Blog

LaBSE was also able to produce decent results on the languages for which training data was not available (zero-shot).

Fun Fact: The model uses a 500k-token vocabulary to support 109 languages, and provides cross-lingual support even in zero-shot cases.

Conclusion

We discussed the Language-Agnostic BERT Sentence Embedding model and how pre-training approaches can be incorporated to obtain state-of-the-art sentence embeddings.

The model is open-sourced at TFHub here.

References

LaBSE Paper: https://arxiv.org/abs/2007.01852

Dual Encoders with AMS: https://www.ijcai.org/Proceedings/2019/746

Dual Encoders and Translation Ranking Task: https://www.aclweb.org/anthology/W18-6317/

XLM Paper: https://arxiv.org/abs/1901.07291

BERT: Pre-Training of Transformers for Language Understanding (medium.com)

Additive Margin Softmax Loss (AM-Softmax): Understanding L-Softmax, A-Softmax, and AM-Softmax (towardsdatascience.com)


https://www.marthastewart.com/7981140/how-much-sleep-needed-aging

ge. Ultimately, you likely won’t experience the same extended rest as you did in your 20s, notes Dr. Boris Dubrovsky of NYMetroSleep. Here, he explains what to expect from your slumber over time.



The idea that older folks need less sleep is false.

“There is no good evidence that older individuals need less sleep, but people over about 60 to 65 seem to produce less sleep,” Dr. Dubrovsky explains, adding that a slightly shorter total sleep time is to be expected during this time. “Another thing that happens is that the continuity of sleep starts to decline and sleep becomes more spread out, with more frequent and longer awakenings. Therefore, it may take longer time in bed to get the same, or slightly lesser amount of sleep.”

You might wake up earlier.

Dr. Dubrovsky notes that while amount of sleep is one quantifier, the timing of rest is another. “With older age, the circadian cycle usually shifts earlier and people wake up early naturally, which may create an impression that older people ‘need’ less sleep,” he continues. “On the other hand, some sleep from the night may be displaced into the day in the form of napping, or going to bed very early, which may create an impression of ‘needing’ more sleep. In fact, both things are two sides of the same coin.”

Bouncing back from a sleep debt becomes more challenging.

A 30-year-old can pull an all-nighter and function passably the next day—or jet travel across the world and be ready for the next adventure upon landing. For a person in their 60s, however, this is virtually impossible. “On the other hand, a 60-year-old can sleep about six hours a night on a regular schedule, perform well during the day, and be up at 6 a.m. on Saturday morning as usual, ready for a weekend trip,” Dr. Dubrovsky counters. “But a 30-year-old—after sleeping six hours during the work week—will most likely crash for nine to 10 hours over the weekend.”

Aim for at least six to seven hours of sleep per night.

You’ll notice that the goal nightly tally isn’t much different from the general recommendation. “The ideal amount is roughly the same for older people, maybe slightly lower, at about six-and-a-half to seven hours, instead of about seven-and-a-half to eight for young adults,” he says. “It is best to think of it as two greatly overlapping normal curves, with the ‘younger’ average a bit higher than the ‘older’ one.”

Your activity level greatly impacts the length and quality of your sleep.

“Consistent physical activity, as well as spending sufficient time outdoors, are two very important, controllable factors that help maintain quality and quantity of sleep with advancing age,” says Dr. Dubrovsky, noting that physical activity does not need to be strenuous, but consistent. And pay attention to when you decide to get moving, as well. “For older individuals, especially those who develop an uncomfortably early pattern (falling asleep at about 8 to 9 p.m. and waking up at 3 or 4 a.m.), physical activity and outdoor sunlight may work better in the afternoon,” he explains.

https://medicalxpress.com/news/2020-09-circadian-rhythms-brain.html

Circadian rhythms help guide waste from brain

by University of Rochester Medical Center


New research details how the complex set of molecular and fluid dynamics that comprise the glymphatic system—the brain’s unique process of waste removal—are synchronized with the master internal clock that regulates the sleep-wake cycle. These findings suggest that people who rely on sleeping during daytime hours are at greater risk for developing neurological disorders.

“These findings show that glymphatic system function is not solely based on sleep or wakefulness, but by the daily rhythms dictated by our biological clock,” said neuroscientist Maiken Nedergaard, M.D., D.M.Sc., co-director of the Center for Translational Neuromedicine at the University of Rochester Medical Center (URMC) and senior author of the study, which appears in the journal Nature Communications.

The findings add to a growing understanding of the operation and function of the glymphatic system, the brain’s self-contained waste removal process, which was first discovered in 2012 by researchers in Nedergaard’s lab. The system consists of a network of plumbing that follows the path of blood vessels and pumps cerebrospinal fluid (CSF) through brain tissue, washing away waste. Research a few years later showed that the glymphatic system primarily functions while we sleep.

Since those initial discoveries, Nedergaard’s lab and others have shown the role that blood pressure, heart rate, circadian timing, and depth of sleep play in the glymphatic system’s function and the chemical signaling that occurs in the brain to turn the system on and off. They have also shown how disrupted sleep or trauma can cause the system to break down and allow toxic proteins to accumulate in the brain, potentially giving rise to a number of neurodegenerative diseases, such as Alzheimer’s.

The link between circadian rhythms and the glymphatic system is the subject of the new paper. Circadian rhythms—a 24-hour internal clock that regulates several important functions, including the sleep-wake cycle—are maintained in a small area of the brain called the suprachiasmatic nucleus.

In the new study, which was conducted in mice, the researchers showed that when the animals were anesthetized all day long, their glymphatic system still functioned only during their typical rest period (mice are nocturnal, so their sleep-wake cycle is the opposite of humans’).

“Circadian rhythms in humans are tuned to a day-wake, night-sleep cycle,” said Lauren Hablitz, Ph.D., first author of the new study and a research assistant professor in the URMC Center for Translational Neuromedicine. “Because this timing also influences the glymphatic system, these findings suggest that people who rely on cat naps during the day to catch up on sleep or work the night shift may be at risk for developing neurological disorders. In fact, clinical research shows that individuals who rely on sleeping during daytime hours are at much greater risk for Alzheimer’s and dementia along with other health problems.”

The study singles out cells called astrocytes that play multiple functions in the brain. It is believed that astrocytes in the suprachiasmatic nucleus help regulate circadian rhythms. Astrocytes also serve as gatekeepers that control the flow of CSF throughout the central nervous system. The results of the study suggest that communication between astrocytes in different parts of the brain may share the common goal of optimizing the glymphatic system’s function during sleep.

The researchers also found that during wakefulness, the glymphatic system diverts CSF to lymph nodes in the neck. Because the lymph nodes are key waystations in the regulation of the immune system, the research suggests that CSF may represent a “fluid clock” that helps wake up the body’s infection fighting capabilities during the day.

“Establishing a role for communication between astrocytes and the significant impacts of circadian timing on glymphatic clearance dynamics represent a major step in understanding the fundamental process of waste clearance regulation in the brain,” said Frederick Gregory, Ph.D., program manager for the Army Research Office, which helped fund the research and is an element of the U.S. Army Combat Capabilities Development Command’s Army Research Laboratory. “This knowledge is crucial to developing future countermeasures that offset the deleterious effects of sleep deprivation and addresses future multi-domain military operation requirements for Soldiers to sustain performance over longer periods without the ability to rest.”




Journal information: Nature Communications. Provided by University of Rochester Medical Center.

http://people.idsia.ch/~juergen/2010-breakthrough-supervised-deep-learning.html

2010: Breakthrough of supervised deep learning. No unsupervised pre-training. The rest is history.

Jürgen Schmidhuber (9/2/2020)


In 2020, we are celebrating the 10-year anniversary of our publication [MLP1] in Neural Computation (2010) on deep multilayer perceptrons trained by plain gradient descent on GPU. Surprisingly, our simple but unusually deep supervised artificial neural network (NN) outperformed all previous methods on the (back then famous) machine learning benchmark MNIST. That is, by 2010, when compute was 100 times more expensive than today, both our feedforward NNs and our earlier recurrent NNs (e.g., CTC-LSTM for connected handwriting recognition) were able to beat all competing algorithms on important problems of that time. In the 2010s, this deep learning revolution quickly spread from Europe to America and Asia.


Just one decade ago, many thought that deep NNs cannot learn much without unsupervised pre-training, a technique introduced by myself in 1991 [UN0-UN3] and later also championed by others, e.g., [UN4-5] [VID1] [T20]. In fact, it was claimed [VID1] that “nobody in their right mind would ever suggest” to use plain gradient descent through backpropagation [BP1] (see also [BPA-C] [BP2-6] [R7]) to train feedforward NNs (FNNs) with many layers of neurons.

However, in March 2010, our team with my outstanding Romanian postdoc Dan Ciresan [MLP1] showed that deep FNNs can indeed be trained by plain backpropagation for important applications. This neither required unsupervised pre-training nor Ivakhnenko’s incremental layer-wise training of 1965 [DEEP1-2]. By the standards of 2010, our supervised NN had many layers. It set a new performance record [MLP1] on the back then famous and widely used image recognition benchmark called MNIST [MNI]. This was achieved by greatly accelerating traditional multilayer perceptrons on highly parallel graphics processing units called GPUs, going beyond the important GPU work of Jung & Oh (2004) [GPUNN]. A reviewer called this a “wake-up call to the machine learning community.”

Our results set the stage for the recent decade of deep learning [DEC]. In February 2011, our team extended the approach to deep Convolutional NNs (CNNs) [GPUCNN1]. This greatly improved earlier work [GPUCNN]. The so-called DanNet [GPUCNN1] [R6] broke several benchmark records. In May 2011, DanNet was the first deep CNN to win a computer vision competition [GPUCNN5] [GPUCNN3]. In August 2011, it was the first to win a vision contest with superhuman performance [GPUCNN5]. Our team kept winning vision contests in 2012 [GPUCNN5]. Subsequently, many researchers adopted this technique. By May 2015, we had the first extremely deep FNNs with more than 100 layers [HW1] (compare [HW2] [HW3]).

The original successes required a precise understanding of the inner workings of GPUs [MLP1] [GPUCNN1]. Today, convenient software packages shield the user from such details. Compute is roughly 100 times cheaper than a decade ago, and many commercial NN applications are based on what started in 2010 [MLP1] [DL1-4] [DEC].

In this context it should be mentioned that right before the 2010s, our team had already achieved another breakthrough in supervised deep learning with the more powerful recurrent NNs (RNNs) whose basic architectures were introduced over half a century earlier [MC43] [K56]. My PhD student Alex Graves won three connected handwriting competitions (French, Farsi, Arabic) at ICDAR 2009, the famous conference on document analysis and recognition. He used a combination of two methods developed in my research groups at TU Munich and the Swiss AI Lab IDSIA: Supervised LSTM RNNs (1990s-2005) [LSTM0-6] (which overcome the famous vanishing gradient problem analyzed by my PhD student Sepp Hochreiter [VAN1] in 1991) and Connectionist Temporal Classification [CTC] (2006). CTC-trained LSTM was the first RNN to win international contests. Compare Sec. 4 of [MIR] and Sec. A & B of [T20].

That is, by 2010, both our supervised FNNs and our supervised RNNs were able to outperform all other methods on important problems. In the 2010s, this supervised deep learning revolution quickly spread from Europe to North America and Asia, with enormous impact on industry and daily life [DL4] [DEC]. However, it should be mentioned that the conceptual roots of deep learning reach back deep into the previous millennium [DEEP1-2] [DL1-2] [MIR] (Sec. 21 & Sec. 19) [T20] (e.g., Sec. II & D).

Finally let me emphasize that the supervised deep learning revolution of the 2010s did not really kill all variants of unsupervised learning. Many are still important. For example, pre-trained language models are now heavily used in the context of transfer learning, e.g., [TR2]. And our active & generative unsupervised NNs since 1990 [AC90-AC20] are still used to endow agents with artificial curiosity [MIR] (Sec. 5 & Sec. 6)—see also a special case of our adversarial NNs [AC90b] called GANs [AC20] [R2] [T20] (Sec. XVII). Unsupervised learning still has a bright future!