https://phys.org/news/2020-06-janus-nanorods-pollutants.html

‘Janus’ nanorods convert light to heat that can destroy pollutants in water

by William Weir, Rice University

Engineers from Yale University and Rice University collaborated on the creation of “Janus” gold nanorods, a new type of nanoparticle that can purify water by converting light to heat. Credit: NEWT/Yale University/Rice University

With a new nanoparticle that converts light to heat, a team of researchers has found a promising technology for clearing water of pollutants.

Trace amounts of contaminants such as pesticides, pharmaceuticals and perfluorooctanoic acid in drinking water sources have posed significant health risks to humans in recent years. These micropollutants have eluded conventional treatment processes, but certain chemical processes that typically involve ozone, hydrogen peroxide or UV light have proven effective. These processes, however, can be expensive and energy-intensive.

A new nanoparticle created by Yale University engineers as part of an effort for the Rice-based Nanosystems Engineering Research Center for Nanotechnology-Enabled Water Treatment (NEWT) could lead to technologies that get around those limitations. The particle is described in a study published this week in the Proceedings of the National Academy of Sciences.

NEWT is a national research center established by Rice, Yale and others in 2015, and Yale’s Jaehong Kim, the lead researcher and creator of the new nanoparticle, collaborated on the project with Rice’s Naomi Halas, NEWT’s nanophotonics research leader.

Researchers in several fields have shown interest in gold nanoparticles for their photothermal and photocatalytic properties, which have proven an effective tool for such uses as cancer therapy. They haven’t, though, figured heavily in water purification efforts, partly because of the difficulty of dispersing nanoparticles in water without stabilizing agents that aren’t good for water treatment applications. The NEWT researchers found a way to fix that by designing and synthesizing “Janus” gold nanorods. These nanoparticles, each hundreds of times smaller than the width of a human hair, are half-coated with silica. This design element is critical, since the silica-coated half allows each nanorod to remain separate from the others and suspended in the water.

“We started with gold nanoparticles and then explored a way to stabilize them through various ways,” said Kim, the Henry P. Becton Sr. Professor and Chair of Yale’s Department of Chemical and Environmental Engineering. “So we came up with the Janus design, where we only cover part with the silica. With this partial coating, they get dispersed in water really well, and that’s very useful for this kind of application.”

The nanorods absorb intense levels of light and convert it to heat localized on the surfaces—a process far more efficient than heating the entire volume of water. And because it uses sunlight, the method is low-cost and sustainable. The same part of the nanorod also acts as an electron-transfer catalyst to promote destruction of micropollutants.

“It achieves various functions—in particular by using the solar radiation to produce highly localized heat,” Kim said. “This is the first demonstration of using that particular phenomenon for pollutant destruction.”

Halas, Rice’s Stanley C. Moore Professor of Electrical and Computer Engineering and director of Rice’s Laboratory for Nanophotonics, played the key role of elucidating the complex mechanisms of how the photothermal and photocatalytic reactions occur on this unique nanoparticle.

“This is really nanoengineering at its best, a novel nanoparticle designed to solve an important problem in what would otherwise be an impossible environment,” Halas said.

Kim noted that the research is still in its early phase, and more work is needed to scale it up for real-world application, including finding a material less expensive than gold.

NEWT Director Pedro Alvarez, Rice’s George R. Brown Professor of Civil and Environmental Engineering, called the study “a great example of how forefront advances in nanotechnology can pave a new way to solve water challenges.”

“It is also a great example of how researchers in two different fields of study come together under the roof of NEWT to develop highly unconventional ideas to solve difficult problems,” he added.




More information: Haoran Wei et al., “Plasmon-enabled degradation of organic micropollutants in water by visible-light illumination of Janus gold nanorods,” Proceedings of the National Academy of Sciences (2020). DOI: 10.1073/pnas.2003362117

Provided by Rice University

https://venturebeat.com/2020/06/23/squeezebert-promises-faster-mobile-nlp-while-maintaining-bert-levels-of-accuracy/

SqueezeBERT promises faster mobile NLP while maintaining BERT levels of accuracy

Khari Johnson (@kharijohnson), June 23, 2020, 9:09 AM
Google Pixel 3a / 3a XL. Image Credit: Andro News

Former DeepScale CEO Forrest Iandola left Tesla to focus on NLP research, he told VentureBeat in a phone interview. Computer vision startup DeepScale was acquired by Tesla in fall 2019 for an undisclosed amount. Iandola said he left Tesla because he wants to explore questions beyond autonomous driving and engage with the kind of accidental discovery that comes with broader forms of AI research.

In research circles, Iandola is perhaps best known for his work in computer vision and as lead author of a 2016 paper on SqueezeNet, a model that achieved AlexNet-like levels of image classification accuracy with 50 times fewer parameters.

In his first piece of NLP research since leaving Tesla, he worked with a team that included DeepScale cofounder and UC Berkeley professor Kurt Keutzer and Tesla senior machine learning engineer Albert Shaw. On Monday, they published a paper detailing SqueezeBERT, a mobile NLP neural network architecture that they say is 4.3 times faster than BERT on a Pixel 3 smartphone while achieving accuracy similar to MobileBERT in GLUE benchmark tasks. A key difference between MobileBERT and SqueezeBERT, Iandola told VentureBeat in an interview, is the use of grouped convolutions to increase speed and efficiency, a technique first introduced in 2012.

“[W]e didn’t really change the size of the layers or how many of them there are, but we sort of grouped convolutions. It’s not really sparsity in the sense that you just delete random parameters, but there are blocks of parameters intentionally missing from the beginning of training, and that’s where the speed-up in our case came from,” he said.
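For readers unfamiliar with the technique, here is a minimal sketch, assuming PyTorch and illustrative sizes (a 768-dimensional hidden layer, 4 groups, both chosen for this example rather than taken from the paper), of why grouped convolutions shrink a layer: each group only connects to its own slice of channels, so the dense cross-group weights are the “blocks of parameters intentionally missing” that Iandola describes.

import torch.nn as nn

d = 768  # hidden size; BERT-base-like, chosen for illustration

# A 1x1 convolution over a token sequence acts like a fully connected
# layer applied independently at every position.
dense = nn.Conv1d(d, d, kernel_size=1, groups=1)    # dense channel mixing
grouped = nn.Conv1d(d, d, kernel_size=1, groups=4)  # 4 groups of 192 channels

count = lambda m: sum(p.numel() for p in m.parameters())
print(count(dense), count(grouped))  # 590592 vs. 148224: roughly 4x fewer weights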

SqueezeBERT also relies on techniques derived from SqueezeNAS, a neural architecture search (NAS) model developed last year by former DeepScale employees, including Shaw and Iandola.

Iandola said he chose to commit to NLP research because of advances enabled by Transformer-based networks in recent years. He’s also interested in mobile and edge use cases of NLP that can run locally without data leaving a device.

“I guess I’m not completely backing away from doing vision, but I think NLP feels like where computer vision was in maybe 2013, where AlexNet had just happened, and people are going ‘OK, so what are all the things we want to do over again using this new technology?’ And I feel like in some sense, self-attention networks are that big of a disruption to NLP and people are kind of starting over in designing NLP algorithms,” he said.

Since the open source release of BERT in 2018, Transformer-based models and variations of BERT like Facebook’s RoBERTa, Baidu’s ERNIE, and Google’s XLNet have achieved state-of-the-art results for language models. A group of experts VentureBeat spoke with last year called advances in NLP a major trend in machine learning in 2019.

SqueezeBERT is the latest piece of research at the convergence of computer vision and NLP. Last week, Facebook and UC Berkeley researchers including Keutzer introduced Visual Transformers for finding relationships between visual concepts. Last month, Facebook AI Research released DETR, the first object detection system created using the Transformer neural network architecture that has been at the forefront of advances in NLP.

One potential next step for SqueezeBERT is to attempt downsampling, shortening sentence representations in the same way that computer vision models like EfficientNet or AlexNet reduce the height and width of images for speed improvements.

“The notion of treating a sentence like an image that you can upsample or downsample is something that I think could become a popular thing in NLP — we’ll have to see,” Iandola said.
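As a rough illustration of that idea (a sketch of the concept only, not anything from the SqueezeBERT paper), pooling along the token axis halves a sentence representation the way vision models halve an image’s height and width; the tensor sizes below are illustrative assumptions.

import torch
import torch.nn as nn

seq = torch.randn(1, 768, 128)  # (batch, channels, tokens): a 128-token sentence
downsample = nn.AvgPool1d(kernel_size=2, stride=2)  # pool over the token axis
print(downsample(seq).shape)    # torch.Size([1, 768, 64]): half as many tokens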

He said SqueezeBERT code will be released for review this summer.

https://singularityhub.com/2020/06/23/in-a-hybrid-neural-circuit-artificial-and-biological-neurons-used-dopamine-to-communicate/

Scientists Used Dopamine to Seamlessly Merge Artificial and Biological Neurons

By Shelly Fan, Jun 23, 2020

In just half a decade, neuromorphic devices—or brain-inspired computing—already seem quaint. The current darling? Artificial-biological hybrid computing, uniting both man-made computer chips and biological neurons seamlessly into semi-living circuits.

It sounds crazy, but a new study in Nature Materials shows that it’s possible to get an artificial neuron to communicate directly with a biological one using not just electricity, but dopamine—a chemical the brain naturally uses to change how neural circuits behave, most known for signaling reward.

Because these chemicals, known as “neurotransmitters,” are how biological neurons functionally link up in the brain, the study is a dramatic demonstration that it’s possible to connect artificial components with biological brain cells into a functional circuit.

The team isn’t the first to pursue hybrid neural circuits. Previously, a different team hooked up two silicon-based artificial neurons with a biological one into a circuit using electrical protocols alone. Although a powerful demonstration of hybrid computing, the study relied on only one-half of the brain’s computational ability: electrical computing.

The new study now tackles the other half: chemical computing. It adds a layer of compatibility that lays the groundwork not just for brain-inspired computers, but also for brain-machine interfaces and—perhaps—a sort of “cyborg” future. After all, if your brain can’t tell the difference between an artificial neuron and your own, could you? And even if you did, would you care?

Of course, that scenario is far in the future—if ever. For now, the team, led by Dr. Alberto Salleo, professor of materials science and engineering at Stanford University, collectively breathed a sigh of relief that the hybrid circuit worked.

“It’s a demonstration that this communication melding chemistry and electricity is possible,” said Salleo. “You could say it’s a first step toward a brain-machine interface, but it’s a tiny, tiny very first step.”

Neuromorphic Computing

The study grew from years of work into neuromorphic computing, or data processing inspired by the brain.

The blue-sky idea was inspired by the brain’s massive parallel computing capabilities, along with vast energy savings. By mimicking these properties, scientists reasoned, we could potentially turbo-charge computing. Neuromorphic devices basically embody artificial neural networks in physical form—wouldn’t hardware that mimics how the brain processes information be even more efficient and powerful?

These explorations led to novel neuromorphic chips, or artificial neurons that “fire” like biological ones. Additional work found that it’s possible to link these chips up into powerful circuits that run deep learning with ease, with bioengineered communication nodes called artificial synapses.

As a potential computing hardware replacement, these systems have proven to be incredibly promising. Yet scientists soon wondered: given their similarity to biological brains, can we use them as “replacement parts” for brains that suffer from traumatic injuries, aging, or degeneration? Can we hook up neuromorphic components to the brain to restore its capabilities?

Buzz & Chemistry

Theoretically, the answer’s yes.

But there’s a huge problem: current brain-machine interfaces only use electrical signals to mimic neural computation. The brain, in contrast, has two tricks up its sleeve: electricity and chemicals. It is electrochemical.

Within a neuron, electricity travels up its incoming branches, through the bulbous body, then down the output branches. When electrical signals reach the neuron’s outgoing “piers,” dotted along the output branch, however, they hit a snag. A small gap exists between neurons, so to get to the other side, the electrical signals generally need to be converted into little bubble ships, packed with chemicals, and set sail to the other neuronal shore.

In other words, without chemical signals, the brain can’t function normally. These neurotransmitters don’t just passively carry information. Dopamine, for example, can dramatically change how a neural circuit functions. For an artificial-biological hybrid neural system, the absence of chemistry is like nixing international cargo vessels and only sticking with land-based trains and highways.

“To emulate biological synaptic behavior, the connectivity of the neuromorphic device must be dynamically regulated by the local neurotransmitter activity,” the team said.

Let’s Get Electro-Chemical

The new study started with two neurons: the upstream, an immortalized biological cell that releases dopamine; and the downstream, an artificial neuron that the team previously introduced in 2017, made of a mix of biocompatible and electrical-conducting materials.

Rather than the classic neuron shape, picture more of a sandwich with a chunk bitten out in the middle (yup, I’m totally serious). Each of the remaining parts of the sandwich is a soft electrode, made of biological polymers. The “bitten out” part has a conductive solution that can pass on electrical signals.

The biological cell sits close to the first electrode. When activated, it dumps out boats of dopamine, which drift to the electrode and chemically react with it—mimicking the process of dopamine docking onto a biological neuron. This, in turn, generates a current that’s passed on to the second electrode through the conductive solution channel. When this current reaches the second electrode, it changes the electrode’s conductance—that is, how well it can pass on electrical information. This second step is analogous to docked dopamine “ships” changing how likely it is that a biological neuron will fire in the future.

In other words, dopamine release from the biological neuron interacts with the artificial one, so that the chemicals change how the downstream neuron behaves in a somewhat lasting way—a loose mimic of what happens inside the brain during learning.

But that’s not all. Chemical signaling is especially powerful in the brain because it’s flexible. Dopamine, for example, only grabs onto the downstream neurons for a bit before it returns to its upstream neuron—that is, it’s recycled or destroyed. This means that its effect is temporary, giving the neural circuit breathing room to readjust its activity.

The Stanford team also tried reconstructing this quirk in their hybrid circuit. They crafted a microfluidic channel that shuttles both dopamine and its byproduct away from the artificial neurons for recycling after they’ve done their job.

Putting It All Together

After confirming that biological cells can survive happily on top of the artificial one, the team performed a few tests to see if the hybrid circuit could “learn.”

They used electrical methods to first activate the biological dopamine neuron, and watched the artificial one. Before the experiment, the team wasn’t quite sure what to expect. Theoretically, it made sense that dopamine would change the artificial neuron’s conductance, similar to learning. But “it was hard to know whether we’d achieve the outcome we predicted on paper until we saw it happen in the lab,” said study author Scott Keene.

On the first try, however, the team found that the burst of chemical signaling was able to change the artificial neuron’s conductance long-term, similar to the neuroscience dogma “neurons that fire together, wire together.” Activating the upstream biological neuron with chemicals also changed the artificial neuron’s conductance in a way that mimicked learning.

“That’s when we realized the potential this has for emulating the long-term learning process of a synapse,” said Keene.

Imaging the system under an electron microscope, the team found that, similar to its biological counterpart, the hybrid synapse was able to efficiently recycle dopamine, with timescales similar to the brain’s after some calibration. By playing with how much dopamine accumulates at the artificial neuron, the team found that they could loosely mimic a learning rule called spike learning—a darling of machine learning inspired by the brain’s computation.

A Hybrid Future?

Unfortunately for cyborg enthusiasts, the work is still in its infancy.

For one, the artificial neurons are still rather bulky compared to biological ones. This means that they can’t capture and translate information from a single “boat” of dopamine. It’s also unclear if, and how, a hybrid synapse can work inside a living brain. Given the billions of synapses firing away in our heads, it’ll be a challenge to find-and-replace those that need replacement, and be able to control our memories and behaviors similar to natural ones.

That said, we’re inching ever closer to full-capability artificial-biological hybrid circuits.

“The neurotransmitter-mediated neuromorphic device presented in this work constitutes a fundamental building block for artificial neural networks that can be directly modulated based on biological feedback from live neurons,” the authors concluded. “[It] is a crucial first step in realizing next-generation adaptive biohybrid interfaces.”

Image Credit: Gerd Altmann from Pixabay

Shelly Xuelai Fan is a neuroscientist-turned-science writer. She completed her PhD in neuroscience at the University of British Columbia, where she developed novel treatments for neurodegeneration. While studying biological brains, she became fascinated with AI and all things biotech. Following graduation, she moved to UCSF to study blood-based factors that rejuvenate aged brains.

https://www.psypost.org/2020/06/cannabis-study-suggests-women-may-need-less-thc-to-get-to-the-same-effects-as-men-57117

Cannabis study suggests women may need less THC to get to the same effects as men

By Eric W. Dolan

Women tend to experience the same acute effects of cannabis as men at a lower dose of THC, according to new research published in Psychopharmacology that sought to mimic real-world smoking practices.

“We know from population survey data that men are more likely to use cannabis than women, but it seems like women experience more severe cannabis-related harms,” said study author Justin Matheson, a PhD candidate at the University of Toronto.

“Research in animals suggests that this is because females are more sensitive to the effects of THC, the primary psychoactive compound in cannabis, and that this might be due to differences in the way THC is metabolized in females. However, there has been relatively little human laboratory evidence to suggest sex differences in the acute effects of THC.”

In the double-blind study, 91 healthy cannabis users smoked a single cannabis cigarette (12.5% THC or placebo) before completing subjective effect scales and cognitive tests. The researchers also monitored their vital signs, such as blood pressure and body temperature. The participants used cannabis about 1 to 4 times per week and were 19 to 25 years old.

The researchers found that female participants tended to smoke for just as long a duration as males. However, women tended to smoke less of the cannabis cigarette.

Despite the differences in cannabis consumption, there were no differences in peak subjective drug effects, mood or cognitive effects between men and women.

“We found that women smoked less of a cannabis joint, had lower levels of THC in blood, yet experienced the same acute effects as men. So, I think the main take-away is that women may need a lower dose of THC to get to the same degree of intoxication as men,” Matheson explained.

“What I want to stress here though is that, in our study, participants were able to smoke the amount of cannabis they wanted to. When participants smoke to their desired high, we call this ‘titrating to effect.’ Titrating to effect is possible when smoking cannabis because THC delivery to the brain is very rapid with this route of administration, so users can feel the high as they are still smoking.

“However, with other cannabis products like edibles or beverages that have a delayed onset of action, it is not possible to titrate to effect. In these cases, women are likely at higher risk of experiencing acute harms,” Matheson told PsyPost.

Like all research, the study includes some limitations.

“The major caveat here is that we considered sex as a binary biological variable (male vs. female) and we had no measure of gender. Sex is a biological construct that represents things like sex chromosomes, hormones, anatomy, and physiology, while gender is a social and cultural construct that represents things like our gender identity (male, female, or gender-diverse) and the expectations that our societies have for us based on these identities,” Matheson said.

“Studies like ours represent sort of the first step to observing that there are sex differences in the acute effects of cannabis. But the next step is to see why that is the case, and the answer likely involves both sex and gender. For example, there’s evidence that estrogen (a sex hormone) influences the metabolism of THC, which could explain some of the sex differences in the metabolism of THC we see. But we also know that gender identity influences drug use behaviors, which could relate to why we saw that women smoked less of the cannabis joint.”

“Something important that I think not a lot of people are aware of is that women and female animals have been excluded from biomedical research for much of the history of science. As a result, our understanding of human health and disease is biased towards males. Thankfully, most major funding agencies have adopted policies requiring females to be included in research, and I hope as a scientific community we can further improve on these policies to incorporate more comprehensive measures of sex and gender,” Matheson added.

The study, “Sex differences in the acute effects of smoked cannabis: evidence from a human laboratory study of young adults“, was authored by Justin Matheson, Beth Sproule, Patricia Di Ciano, Andrew Fares, Bernard Le Foll, Robert E. Mann, and Bruna Brands.

https://www.theverge.com/2020/6/23/21300097/fugaku-supercomputer-worlds-fastest-top500-riken-fujitsu-arm

ARM-based Japanese supercomputer is now the fastest in the world


Fugaku is being used in COVID-19 research. By Sam Byford (@345triangle), Jun 23, 2020, 1:34am EDT


A Japanese supercomputer has taken the top spot in the biannual Top500 supercomputer speed ranking. Fugaku, a computer in Kobe co-developed by Riken and Fujitsu, makes use of Fujitsu’s 48-core A64FX system-on-chip. It’s the first time a computer based on ARM processors has topped the list.

Fugaku turned in a Top500 HPL result of 415.5 petaflops, 2.8 times as fast as IBM’s Summit, the nearest competitor. Fugaku also attained top spots in other rankings that test computers on different workloads, including Graph 500, HPL-AI, and HPCG. No previous supercomputer has ever led all four rankings at once.

While the top supercomputer rankings normally bounce between American- and Chinese-made systems, Fugaku is the first Japanese system to lead the Top500 since its predecessor, Riken’s K computer, did so nine years ago. Overall there are 226 Chinese supercomputers on the list, 114 from the United States, and 30 from Japan. US-based systems contribute the most aggregate performance, with 644 petaflops.

Satoshi Matsuoka (@ProfMatsuoka): “I installed the contact tracing app for #COVID-19 on my Smartphone. Simulation on #Fugaku indicates we need 60% distribution for effectiveness. I encourage people in Japan to install to protect yourself & save lives. It was pro reviewed to be privacy safe. https://itunes.apple.com/jp/app/id1516764458?mt=8”

Fugaku is set to go into full operation next fiscal year. So far it has been used on an experimental basis to research COVID-19, including diagnostics, simulating the spread of the SARS-CoV-2 virus, and the effectiveness of Japan’s new contact tracing app.

https://www.cnbc.com/2020/06/23/impossible-foods-business-booms-as-demand-for-plant-based-meat-grows.html

CNBC DISRUPTOR 50

Starbucks launches the Impossible Foods Breakfast Sandwich as America’s appetite for plant-based meat grows

Published Tue, Jun 23 2020, 5:01 AM EDT. Updated Tue, Jun 23 2020, 8:24 AM EDT. Jennifer Elias (@jenn_elias)

Key points:

  • Starbucks added the Impossible Breakfast Sandwich, made with Impossible plant-based sausage, to its U.S menu on Tuesday to meet the growing customer interest in plant-based options.
  • Impossible Foods CFO David Lee told CNBC that 9 out of 10 Impossible buyers are traditional meat eaters, bolstered by recent events like the meat shortages from the Covid-19 pandemic and a recent expansion into 1,000 grocery stores.
  • Plant-based food and beverage sales were about $5 billion in 2019 and are expected to have double-digit growth through 2020, according to The Good Food Institute and the Plant-Based Foods Association.
Starbucks Impossible sausage breakfast sandwich. Source: Starbucks

Impossible Foods is known for its plant-based meat alternatives, but it’s expanding its breadth of products, buoyed by a cultural movement, an unforeseen pandemic and — in true Silicon Valley fashion — science. On Tuesday it announced that its Impossible Breakfast Sandwich has been added to Starbucks’ menu at most of the chain’s locations in the U.S.

With 22g of protein, the Impossible Breakfast Sandwich features savory Impossible sausage made from plants, combined with a cage-free fried egg and aged cheddar cheese and served on artisanal ciabatta bread.

“Starbucks’ commitment to add more plant-based ingredients to its menu is a new benchmark for large corporations,” said Dr. Patrick O. Brown, founder and CEO of Impossible Foods.

Michael Kobori, chief sustainability officer at Starbucks, said this is part of the company’s sustainability initiatives and an effort to meet increasing customer demand for plant-based options.

Impossible Foods, which ranks No. 49 on CNBC’s 2020 Disruptor 50 list, makes meat, dairy and fish products from its patented plant-based ingredients and has been backed by celebrities like Katy Perry and Serena Williams. But Impossible Foods CFO David Lee told CNBC it’s trying to position itself away from the plant-based movement so often associated with a niche food industry.

“We don’t think of it as an alternative (meat) industry. We think we’re making better meat consumed by the meat eater — competing on the level playing field with a better product,” Lee said when asked about the association to a young plant-based market. “That’s how we define our focus.”

Lee said 9 out of 10 Impossible buyers are traditional meat eaters, bolstered by recent events like the meat shortages from the Covid-19 pandemic and a recent expansion into 1,000 grocery stores. The company also closed a fresh $500 million funding round, bringing its total funding to approximately $1.3 billion.

The company’s Impossible Whoppers are in 7,500 Burger Kings nationwide, but it declined to provide sales numbers. Burger King parent company Restaurant Brands provided an update on sales in its May earnings report. In the U.S., comparable sales growth at Burger King for the first quarter was negative 6.5%. During the precrisis period, in January, February and the first two weeks of March, it posted positive comparable sales growth in the U.S. in the low single digits, and it said sales were driven by “continued strong contribution from the Impossible Whopper and improved performance in the value layer of our menu.”

Earlier in 2020, Burger King added the Impossible Whopper to its value menu to reach more diners.

Burger King says there was a big bump in foot traffic after it tested the Impossible Whopper in St. Louis in April 2019. Source: Impossible Foods

While its restaurant business has been hurt, the impact hasn’t been as bad as on the restaurant industry overall, Impossible Foods said. It also launched a direct-to-consumer channel for direct shipment amid the pandemic.

As it increases its reach to retailers and consumers, it’s aiming for a similar placement as it has with its restaurant partners.

“With Burger King we began to discuss the importance of making sure the Impossible brand was right next to their core brand — the Whopper,” Lee said. “To be able to give meat eaters another great option for breakfast is an important milestone,” he said about a recent new Impossible sausage option for a croissant sandwich on Burger King’s menu. 

Lee said the company’s grocery store footprint has grown 18-fold since March, and he expects it to reach 50-fold by the end of 2020. And it’s depending on that scale — particularly of converted meat eaters — before it can lower prices to be more comparable to traditional meat. “All we need is more and more meat eaters to love our product,” Lee said.


A report by The Good Food Institute and the Plant-Based Foods Association said that plant-based food and beverage sales were about $5 billion in 2019, an 11% increase from 2018, with meat alternatives showing the largest growth among plant-based foods. Analysts say they expect to see continued double-digit growth through 2020.

Boosting exposure to consumers

To this point, Impossible has been focusing on the food-service sector, and Beyond Meat has been focusing on consumers and the grocery sector, analysts told CNBC. But a new direct-to-consumer channel and new retail partnerships such as with Kroger are likely to boost its exposure to consumers — especially those in rural areas — according to food research analyst Cara Rasch. “A lot of people have tried to do online shopping during the pandemic, and some people are finding they like it and might not have tried it before,” she said.

“The pandemic made it clear we needed to launch our direct-to-consumer offering much sooner than we thought,” said Khosla Ventures founding partner and early Impossible Foods investor Samir Kaul. “It’s only been a couple weeks, but I think that will be very meaningful.”


Lee and Kaul said the company plans to compete with its science-based ingredients and R&D investment. Much of the recent funding will go toward R&D and supply chain, areas where larger food conglomerates have a leg up.

When asked whether it can compete with larger companies that have supply chain power, such as Nestle, which is bringing its Sweet Earth alternative-protein brand to the U.S., or Kellogg, which launched Incogmeato, Lee said it’s betting on those meat-conversion stats. “We believe that if we bet on the core consumer — the meat eater — then our ability to grow with the supply chain will grow with that demand.”

Kaul said such options by larger manufacturers are still validation. “They can’t say, ‘Oh, this plant-based stuff is niche’ and then announce their own products,” he said. “We’re Silicon Valley tech investors taking on large industries. This is kind of where our bread and butter is,” Kaul said. “I can guarantee you the top Ph.D. scientists are going to want to work at Impossible and not Nestle or Kraft.”

Impossible wouldn’t comment on whether an IPO is in store, as happened with its competing brand Beyond Meat, which went public last year. Rising traditional beef prices have also positioned these companies to capitalize.

But as the market matures, there’s going to be consolidation, according to analysts.


“A lot of the larger food companies are not only trying to introduce their own brands but they’re trying to acquire these smaller companies,” Rasch said. “Some larger food companies might be interested in trying to purchase Impossible Foods.”

Lee declined to comment on future funding expectations or potential acquisitions, only saying, “We are open to any partner that has our aligned mission, and are open to anyone who can help us achieve our mission and business needs as fast as possible.”

While alternative meat is still considered a young market sector, speediness is a necessary ingredient, according to analysts.

There’s pressure to get wider distribution and grow “as fast as possible,” especially after Beyond Meat went public last year and has already moved into global markets, according to Rob Dickerson, managing director of equity research for food producers at Jefferies. It’s still too early to say whether the broader meat market will embrace brands like Impossible beyond trying them once or twice. “The key is to have consumers try it, like it and then buy more,” he said.

https://phys.org/news/2020-06-genetic-arabian-horses-common-beliefs.html

Genetic study of Arabian horses challenges some common beliefs about the ancient breed

by Cornell University

Credit: Samantha Brooks

A study involving Arabian horses from 12 countries found that some populations maintained a larger degree of genetic diversity and that the breed did not contribute genetically to the modern-day Thoroughbred, contrary to popular thought.

An international team of scientists was led by the University of Florida’s Samantha Brooks, a UF/IFAS assistant professor of animal sciences; Cornell University’s Doug Antczak, the Dorothy Havemeyer McConville Professor of Equine Medicine at the Baker Institute for Animal Health; and Andy Clark, the Jacob Gould Schurman Professor in Cornell’s department of molecular biology and genetics.

The group collected and examined DNA samples from 378 Arabian horses from Qatar, Iran, UAE, Poland, USA, Egypt, Jordan, Kuwait, United Kingdom, Australia, Denmark and Canada. The research, published June 16 in the journal Scientific Reports, was conducted over a six-year period beginning in 2014, before Brooks made the move from Cornell to UF. The project took considerable effort, she said, in part due to traveling to collect the Arabians’ blood and hair samples, as well as natural delays in working with international colleagues to collect and ship other samples.

The samples were anonymized for data analysis purposes, except to note each horse’s location and to categorize it as an endurance competition, flat course racing or show horse. The data set was also expanded using information from past studies on other breeds, including Thoroughbreds, Persian Arabian, Turkemen and Straight Egyptians.

“The Arabian horse has a special mystique due to the long recorded history of the breed,” Brooks said. “Arabian horse breeders, in particular, know their horse’s bloodlines many generations back. What we found was that in the area where this breed originates—likely the near East region, but we don’t know exactly—there’s a healthy level of diversity. This is particularly evident in populations from Bahrain and Syria, which suggests these are some pretty old populations.”

The horse is prized for characteristics like heat tolerance and endurance, as well as its unique appearance, with a dish-shaped facial profile, wide-set eyes, an arched neck and a high tail carriage. It has been exported from its ancestral homeland for centuries, with some modern lineages drawn strictly from these smaller genetic pools, giving the breed a reputation for inbred disorders. While this was true for some groups they tested, Brooks noted, they also found remarkable diversity when considering the breed as a whole.

Brooks contrasted the discovery of more diverse populations with the samples they received from racing Arabians. Another longstanding myth says that the Arabian contributed genetically to the modern Thoroughbred, but the racing Arabians’ DNA told a different story.

“What we found in these samples was not that much Arabian ancestry was part of the Thoroughbred line, but the opposite: that Thoroughbred DNA exists in most of the modern racing Arabian lines, indicating a more recent interbreeding within this group,” Brooks said. “I can’t speculate on the how or why, but this is clearly the story the DNA is telling us.”

Another implication of this study, Brooks said, is the potential to identify the genetic regions that determine some of the Arabian’s unique traits, like their facial profile. This could be expanded to identify the marker for other horse breeds’ head shapes, for example.

The study has a long list of co-authors, with contributors from the University of Tehran, Iran; Weill Cornell Medical College in Qatar; the University of Kentucky; the University of Agriculture in Kraków, Poland; the Hong Kong Jockey Club; the Equine Veterinary Medical Center in Doha, Qatar; and the University of Veterinary Medicine Vienna, Austria. Elissa Cosgrove from the Clark lab and Raheleh Sadeghi, a visiting scientist from Iran in the Antczak lab, shared first co-authorship of the study.

“An exceptional aspect of this project was the wonderful level of open collaboration and sharing of resources by veterinary geneticists, equine scientists, and horsemen from around the world,” Antczak said. “It was a great pleasure to conduct this global study for the benefit of the horse.”




More information: Cosgrove, E.J., Sadeghi, R., Schlamp, F. et al., “Genome Diversity and the Origin of the Arabian Horse,” Scientific Reports 10, 9702 (2020). DOI: 10.1038/s41598-020-66232-1

Provided by Cornell University

https://www.forbes.com/sites/johncumbers/2020/06/23/a-new-way-of-making-dna-is-about-to-revolutionize-the-biotech-industry/#2258a95849ff

A New Way Of Making DNA Is About To Revolutionize The Biotech Industry

John Cumbers, Senior Contributor, Manufacturing. Synthetic biology & space settlement connector, founder and investor.

An illustration of DNA molecules. Has the next biotech unicorn just been minted? Two California firms are focused on a radical new way … Credit: Getty

Has the next biotech unicorn been minted?

A partnership was just announced between two California firms, Codexis (CDXS) and Molecular Assemblies, focused on a radical new way of writing DNA. The partnership comes at a booming time for the synthetic biology industry, which seeks to use DNA to create everything from COVID-19 antibodies to new options for high-density data storage.

DNA synthesis is already a hot market. Twist Bioscience, a gene maker, has seen its stock almost triple since its IPO in late 2018. In the same year, Integrated DNA Technologies was acquired by Danaher for a rumored $1.8 billion. All of this recent success, however, is based on a decades-old method for printing DNA that insiders admit is quite limited.

READ MORE: Why This Synthetic Biology Stock Has Doubled Since IPO

Today’s DNA makers still create their products using chemistry. To form a new double helix, the individual letters of DNA — nicknamed A, T, C and G — are linked together using a process called phosphoramidite synthesis. Though the process has been refined over the years, it still requires harsh solvents that limit the quality of the final product. These solvents also become noxious waste.

A quick glance at biology proves that there is clearly a better way. Your own cells make DNA around the clock, and they do so without harsh chemicals. The DNA copies they produce are millions of times longer and considerably more accurate than what any business can currently obtain through chemical synthesis. This power to create massive amounts of high-quality DNA enables the complexity of life, and if harnessed would transform global manufacturing by opening the floodgates to synthetic biology innovations in the pharmaceutical, textile, agriculture and advanced materials sectors.


For years, biotechnologists have yearned to make DNA the same way that biology does — with enzymes. These nano-scale machines, which are the products of genes, specialize in corralling smaller chemicals around them. One type of enzyme in particular, called a polymerase, has an unparalleled ability to stitch DNA letters into long chains.

Businesses like San Diego-based Molecular Assemblies have been adapting polymerase enzymes to create custom DNA molecules. They hold 24 patents on the matter, and say they already have the potential to produce DNA chains that are up to 50 times longer than the competition. They have even begun applying their technology to the challenge of DNA data storage.

Codexis, based in the Bay Area, specializes in improving enzymes for industrial use. As I have written about before, they apply advanced computer software to come up with improved enzymes that can aid in the creation of an astonishing array of products, including cannabinoids, bioplastics, biofuels, and even pharmaceutical drugs.

READ MORE: How To Spot A Synthetic Biology Unicorn

Under the new agreement, Codexis will purchase $1 million in stock from Molecular Assemblies. This is a clear win for both companies: The core business of Molecular Assemblies is based on enzymes, and in Codexis they gain a partner who specializes in enzyme engineering. Codexis benefits as well, as they get a stake in the lucrative and burgeoning field of enzymatic DNA synthesis.

Ultimately, moves like these also benefit the entire synthetic biology industry. DNA made by enzymes would be a boon to any company — big or small — who wishes to take part in the 21st-century effort to build a better, greener and more efficient economy with biology.

Molecular Assemblies CEO Michael Kamdar, Codexis CEO John Nicols, and SynBioBeta CEO John Cumbers. Credit: SynBioBeta

For the inside scoop, watch my interview with Molecular Assemblies’ CEO Michael Kamdar and Codexis CEO John Nicols about the impact of enzymatic DNA synthesis, and what this deal means to the industry: https://www.youtube.com/watch?v=0PEOIksxocU

Follow me on Twitter at @johncumbers and @synbiobeta. Subscribe to my weekly newsletter in synthetic biology. Thank you to Ian Haydon for additional research and reporting in this article. I’m the founder of SynBioBeta, and some of the companies that I write about—including Molecular Assemblies and Codexis—are sponsors of the SynBioBeta conference and weekly digest. I am also an operating partner at DCVC, which has invested in Molecular Assemblies. Here’s the full list of SynBioBeta sponsors.

https://www.infoq.com/news/2020/06/facebook-ai-transpiler/

Facebook Announces TransCoder AI to Translate Code across Programming Languages

JUN 23, 2020 3 MIN READ


Facebook AI Research has announced TransCoder, a system that uses unsupervised deep learning to convert code from one programming language to another. TransCoder was trained on more than 2.8 million open source projects and outperforms existing code translation systems that use rule-based methods.

The team described the system in a paper published on arXiv. TransCoder is inspired by neural machine translation (NMT) systems that use deep learning to translate text from one natural language to another, and it is trained only on monolingual source data. To compare the performance of the model, the Facebook team collected a validation set of 852 functions and associated unit tests in each of the system’s target languages: Java, Python, and C++. TransCoder performed better on this validation set than existing commercial solutions, by up to 33 percentage points against j2py, a Java-to-Python translator. Although the team restricted its work to only those three languages, they claim it can “easily be extended to most programming languages.”

Automated tools for translating source code from one language to another, also known as source-to-source compilers, transcompilers, or transpilers, have existed since the 1970s. Most of these tools work similarly to a standard code compiler: they parse the source code into an abstract syntax tree (AST). The AST is then converted back into source code in a different language, usually by applying re-write rules. Transpilers are useful in several scenarios. For example, some languages, such as CoffeeScript and TypeScript, are intentionally designed to use a transpiler to convert from a more developer-friendly language into a more broadly supported one. Sometimes it is helpful to transpile entire codebases from source languages that are obsolete or deprecated; for example, the 2to3 tool is used to port Python code from the deprecated version 2 to version 3. However, transpilers are far from perfect, and creating one requires significant development effort (and often customization). A minimal sketch of this parse-rewrite-unparse pattern appears below.
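The sketch uses Python’s built-in ast module (ast.unparse requires Python 3.9+). The single 2to3-style rule it applies, turning xrange() calls into range() calls, is an illustrative choice, not the rule set of any real transpiler.

import ast

class XrangeToRange(ast.NodeTransformer):
    # One rewrite rule: calls to xrange(...) become calls to range(...).
    def visit_Call(self, node):
        self.generic_visit(node)  # rewrite any nested calls first
        if isinstance(node.func, ast.Name) and node.func.id == "xrange":
            node.func = ast.Name(id="range", ctx=ast.Load())
        return node

source = "for i in xrange(n): total += i"
tree = ast.parse(source)            # source code -> abstract syntax tree
tree = XrangeToRange().visit(tree)  # apply the rewrite rule
print(ast.unparse(tree))            # prints the loop rewritten to use range()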

TransCoder builds on advances in natural-language processing (NLP), in particular unsupervised NMT. The model uses a Transformer-based sequence-to-sequence architecture which consists of an attention-based encoder and decoder. Since obtaining a dataset for supervised learning would be difficult—it would require many pairs of equivalent code samples in both the source and target languages—the team opted to use monolingual datasets to do unsupervised learning, using three strategies. First, the model is trained on input sequences that have random tokens masked; the model must learn to predict the correct value for the masked tokens. Next, the model is trained on sequences that have been corrupted by randomly masking, shuffling, or removing tokens; the model must learn to output the corrected sequence. Finally, two versions of these models are trained in parallel to do back-translation; one model learns to translate from the source to target language, and the other learns to translate back to the source. A sketch of the first objective follows.
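As a rough illustration of the first objective (a minimal sketch, not Facebook’s training code), the helper below corrupts a token sequence by masking random positions; the [MASK] token name and the 15% masking rate are assumptions made for the example.

import random

MASK = "[MASK]"

def mask_tokens(tokens, mask_prob=0.15, seed=None):
    # Returns (corrupted, targets): targets holds the original token at
    # masked positions (where the model's loss applies) and None elsewhere.
    rng = random.Random(seed)
    corrupted, targets = [], []
    for tok in tokens:
        if rng.random() < mask_prob:
            corrupted.append(MASK)
            targets.append(tok)
        else:
            corrupted.append(tok)
            targets.append(None)
    return corrupted, targets

tokens = "def add ( a , b ) : return a + b".split()
corrupted, _ = mask_tokens(tokens, seed=1)
print(" ".join(corrupted))  # [MASK] add ( a , b ) : [MASK] [MASK] + b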

TransCoder Pre-Training

Image Source: https://arxiv.org/abs/2006.03511

To train the models, the team mined samples from over 2.8 million open-source repositories on GitHub. From these they selected files in their languages of choice (Java, C++, and Python) and extracted individual functions. They chose to work at the function level for two reasons: function definitions are small enough to be contained in a single training input batch, and translating functions allows the model to be evaluated using unit tests.

Although many NLP systems use the BLEU score to evaluate their translation results, the Facebook researchers note that this metric can be a bad fit for evaluating transpilers: results that are syntactically similar may have a high BLEU score but “could lead to very different compilation and computation outputs,” while programs with differing implementations that produce the same results may have a low BLEU score. Thus, the team chose to evaluate their transpiler results using a suite of unit tests. The tests were obtained from the GeeksforGeeks site by collecting problems with solutions written in all three target languages; this resulted in a set of 852 functions. The team compared TransCoder’s performance on this test set with two existing transpiler solutions: the j2py Java-to-Python converter and the Tangible Software Solutions C++-to-Java converter. TransCoder “significantly” outperformed both, scoring 74.8% and 68.7% on C++-to-Java and Java-to-Python respectively, compared to 61% and 38.3% for the commercial solutions.
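To make the unit-test metric concrete, here is a minimal sketch of scoring translations by functional equivalence, an illustration rather than the paper’s actual evaluation harness; the reference function and test cases are invented for the example.

def passes_unit_tests(candidate_fn, reference_fn, test_inputs):
    # A translation "passes" only if it matches the reference output on
    # every test input; exceptions also count as failures.
    for args in test_inputs:
        try:
            if candidate_fn(*args) != reference_fn(*args):
                return False
        except Exception:
            return False
    return True

def reference_abs(x):
    return x if x >= 0 else -x

translations = [abs, lambda x: x]  # the second "translation" is wrong
tests = [(3,), (-4,), (0,)]
score = sum(passes_unit_tests(t, reference_abs, tests)
            for t in translations) / len(translations)
print(f"fraction of translations passing all tests: {score:.1%}")  # 50.0%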

In a discussion on Reddit, one commenter contrasted this idea with GraalVM’s strategy of providing a single runtime that supports multiple languages. Another commenter opined:

[TransCoder] is a fun idea, but I think translating syntax is the easy part. What about memory management, runtime differences, library differences, etc?

In the TransCoder paper, the Facebook AI Research team notes that they intend to release the “code and pretrained models,” but have not done so at this time.
 
