https://www.slashgear.com/artificial-fog-could-allow-lasers-to-one-day-replace-light-bulbs-07615772/

Artificial fog could allow lasers to one day replace light bulbs

Shane McGlaun – Apr 7, 2020, 6:55 am CDT

Researchers at Imperial College London have created something they describe as an artificial fog. The so-called artificial fog can scatter laser light, producing high brightness at low power. The team believes the discovery could eventually replace light bulbs thanks to the fog’s light-scattering properties.

One key aspect is that the fog can produce high brightness with low power requirements. Light bulbs based on the new laser system would be more energy-efficient than both regular light bulbs and LED light bulbs. The diffuser the team created also makes lasers more useful for lighting larger areas.

The team has also been able to tune the light to different colors, including white, which has been a difficult color to achieve with a laser. Laser-based lights are called laser diodes, and previously white light was created by shining a laser onto phosphor materials. The problem was that the process wasn’t efficient and could only create one color of light.

The research team was able to create white light by shining red, blue, and green lasers into a diffuser made of hexagonal boron nitride, an ultrathin material related to graphene. The diffuser the team developed, called aero-BN, is a semi-transparent network of randomly arranged, interconnected hollow microtubes of hexagonal boron nitride.

The material is made up of 99.99 percent air. The beams from the three lasers penetrate deep into the diffuser, where they are strongly and randomly scattered multiple times by the nanoscopic walls of the microtubes. The team says the diffuser acts like an artificial fog, making the light far more diffuse, and at an optimum intensity of all three lasers, white light is emitted. The entire palette of colors is available by varying the ratio of the intensities of the three colored lasers.
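
To make that last point concrete, here is a minimal Python sketch of additive color mixing, assuming an idealized linear combination of the three laser intensities; it illustrates the principle only, not the team’s optical model:

```python
# Illustrative only: map three laser powers to a normalized RGB triple,
# assuming ideal additive linear mixing (a simplification of the physics).

def mix_lasers(red_w: float, green_w: float, blue_w: float) -> tuple:
    """Normalize relative laser powers (watts) into an RGB color."""
    peak = max(red_w, green_w, blue_w)
    if peak == 0:
        return (0.0, 0.0, 0.0)
    return (red_w / peak, green_w / peak, blue_w / peak)

# Equal intensities from all three lasers approximate white light...
print(mix_lasers(1.0, 1.0, 1.0))  # (1.0, 1.0, 1.0) -> white
# ...while skewing the ratio shifts the hue, here toward orange.
print(mix_lasers(1.0, 0.5, 0.1))  # (1.0, 0.5, 0.1)
```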

https://www.psychologytoday.com/us/blog/the-future-brain/202004/ai-translates-human-brain-signals-text

AI Translates Human Brain Signals to Text

UCSF neuroscientists’ brain-computer interface decodes at natural-speech rates.

Posted Apr 05, 2020

 


Can technology and neuroscience one day enable people to silently type using only the mind? Scientists are now one step closer to a computer interface driven by human thought. Neuroscientists at the University of California, San Francisco (UCSF) published a study last week in Nature Neuroscience showing how their brain-computer interface (BCI) can translate human brain activity into text with relatively high accuracy, and at natural-speech rates, using artificial intelligence (AI) machine learning.

Neuroscience researchers Edward Chang, David Moses, and Joseph Makin at the UCSF Center for Integrative Neuroscience and Department of Neurological Surgery conducted their breakthrough study with funding in part from Facebook Reality Labs. Three years ago, at F8, its annual developer event focused on the future of technology, Facebook announced its initiatives in developing brain-computer interfaces by supporting a team of UCSF researchers who aim to help patients with brain damage communicate. Ultimately, Facebook’s vision is to create a wearable device that non-invasively enables people to type by imagining themselves talking.

To achieve their recent breakthrough, the UCSF researchers decoded a sentence at a time, similar to how modern machine-translation algorithms work. To test their hypothesis, they trained a model on brain signals from electrocorticograms (ECoGs) recorded during speech production, paired with transcriptions of the corresponding spoken sentences. They used a restricted language limited to 30-50 unique sentences.

The participants in the study were four consenting patients at UCSF Medical Center who were already undergoing treatment for epilepsy and under clinical monitoring for seizures. The participants read out loud sentences displayed on a computer screen. Two participants read from a set of 30 picture-description sentences containing around 125 unique words; the remaining two read sentences in blocks of 50 (or 60 in the final block) from the MOCHA-TIMIT dataset, which contains 460 sentences and 1,800 unique words.

As the participants read out loud, their brain activity was recorded using ECoG arrays of 120–250 electrodes that were surgically implanted on each patient’s cortical surface. Specifically, three participants were implanted with 256-channel grids over perisylvian cortices, and one participant with a 128-channel grid located dorsal to the Sylvian fissure.

The ECoG array provided input data to the encoder-decoder style artificial neural network (ANN). The artificial neural network processed the sequences in three phases.

In the first phase, the ANN learns temporal convolutional filters that downsample the signals from the ECoG data. This addresses a limitation of feed-forward networks: similar features can occur at different points in an ECoG sequence. The filters produce one hundred feature sequences.
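
As an illustration of this first phase, here is a minimal PyTorch sketch of a temporal convolution that downsamples a multichannel signal into one hundred feature sequences; the electrode count, kernel size, and stride are assumptions for illustration, not the paper’s exact hyperparameters:

```python
import torch
import torch.nn as nn

n_electrodes = 256  # assumed channel count of the implanted ECoG grid
n_features = 100    # "one hundred feature sequences," per the article
downsample = 12     # assumed temporal downsampling factor

# Stride equal to kernel size gives non-overlapping windows, so the
# output sequence is 12x shorter than the raw ECoG signal.
temporal_conv = nn.Conv1d(
    in_channels=n_electrodes,
    out_channels=n_features,
    kernel_size=downsample,
    stride=downsample,
)

ecog = torch.randn(1, n_electrodes, 1200)  # (batch, electrodes, time steps)
features = temporal_conv(ecog)
print(features.shape)  # torch.Size([1, 100, 100]): 100 shorter feature sequences
```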

In the next phase, these sequences are passed to an encoder recurrent neural network (RNN), which learns to summarize them in a final hidden state, providing a high-dimensional encoding of the entire sequence.

In the last phase, the high-dimensional state produced by the encoder RNN is transformed by a decoder recurrent neural network, which learns to predict the next word in the sequence.

Overall, the neural network is trained so that the encoder’s outputs stay close to the target mel-frequency cepstral coefficients (MFCCs) while, at the same time, the decoder assigns high probability to each target word. Training is done using stochastic gradient descent through backpropagation.
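
A rough, self-contained PyTorch sketch of phases two and three, together with the joint training objective, might look as follows; the GRU cells, layer sizes, and vocabulary size are illustrative assumptions, not the authors’ exact architecture:

```python
import torch
import torch.nn as nn

N_FEATURES, HIDDEN, N_MFCC, VOCAB = 100, 400, 13, 250  # assumed sizes

class Seq2Seq(nn.Module):
    def __init__(self):
        super().__init__()
        self.encoder = nn.GRU(N_FEATURES, HIDDEN, batch_first=True)
        self.to_mfcc = nn.Linear(HIDDEN, N_MFCC)   # auxiliary MFCC targets
        self.embed = nn.Embedding(VOCAB, HIDDEN)
        self.decoder = nn.GRU(HIDDEN, HIDDEN, batch_first=True)
        self.to_word = nn.Linear(HIDDEN, VOCAB)

    def forward(self, feats, words_in):
        enc_out, state = self.encoder(feats)       # summarize the sequence
        mfcc_pred = self.to_mfcc(enc_out)          # per-step MFCC estimates
        # The decoder is seeded with the encoder's final hidden state and
        # learns to predict the next word from the previous words.
        dec_out, _ = self.decoder(self.embed(words_in), state)
        return mfcc_pred, self.to_word(dec_out)

model = Seq2Seq()
feats = torch.randn(1, 80, N_FEATURES)             # 80 downsampled time steps
mfcc_target = torch.randn(1, 80, N_MFCC)
words = torch.randint(0, VOCAB, (1, 8))            # an 8-word target sentence

mfcc_pred, logits = model(feats, words[:, :-1])
# Joint objective: keep encoder outputs near the MFCC targets while the
# decoder assigns high probability to each target word.
loss = nn.functional.mse_loss(mfcc_pred, mfcc_target) + nn.functional.cross_entropy(
    logits.reshape(-1, VOCAB), words[:, 1:].reshape(-1)
)
loss.backward()  # trained with stochastic gradient descent via backpropagation
```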

The researchers reported that their system achieved higher accuracy than other existing brain-machine interfaces. With their technique, speech could be decoded from ECoG data with word error rates as low as three percent on datasets with 250-word vocabularies. According to the UCSF researchers, other existing brain-machine interfaces have been limited to “decoding correctly less than 40% of words.” What sets this solution apart, the researchers say, is that their neural network has learned to “identify words, not just sentences, from ECoG data, and therefore that generalization to decoding of novel sentences is possible.”

Today, information is transmitted to computing devices by speech, touch screens, and keyboards. Will smartphones and other computing devices one day be guided by thinking rather than typing, finger touches, or speaking? Through the interdisciplinary combination of neuroscience and artificial intelligence machine learning, scientists are further along in developing technologies that may not only help those with locked-in syndrome and speech disabilities, but also transform how we all interact and engage with smartphones and computing devices in the not-so-distant future.


https://www.inc.com/marcel-schwantes/bill-gates-explains-what-separates-successful-leaders-from-everyone-else-in-2-words.html

Bill Gates Explains What Separates Successful Leaders From Everyone Else in 2 Words

If you call yourself a leader or aspire to be one, the Microsoft co-founder has some plain advice for you.

By Marcel Schwantes, Founder and Chief Human Officer, Leadership From the Core (@MarcelSchwantes)
Bill Gates, co-founder of Microsoft and of the Bill and Melinda Gates Foundation

If you call yourself a leader or aspire to be one, Bill Gates said something years ago that should resonate deep within the collective conscience of leaders everywhere. The co-founder of Microsoft pointed out:

“As we look ahead into the next century, leaders will be those who empower others.”

In two words, Gates nailed a defining characteristic of true leaders years in advance: empower others.

Let’s now reframe his quote to match the surreal circumstances in which we find ourselves today. When you think of what great leaders may be doing to pivot and meet the demands of a stay-at-home economy, what comes to mind?

 

For starters, more leaders are waking up to the stark reality that virtual collaboration must become the norm. And to Gates’ point, whatever you thought about leaders before the outbreak, one thing still remains true in both office and remote settings: Great leaders set themselves apart by effectively influencing and empowering other human beings.

Empowerment in crisis

Empowerment of people doesn’t change just because circumstances around you do — especially in a time of crisis.

 

To demonstrate what I have been witnessing in my interviews and observations of great leaders recently, plus my own personal experiences adjusting to the new work climate, here are four ways to empower the people around you — your employees — and take your leadership to the next level:

1. Empower your employees by putting them first

Every leader’s role right now involves proactively responding daily to the challenges facing their people. Whether it’s meeting daily to discuss what steps to take to protect employees or safeguarding the business, good leaders show empathy in meeting people’s needs. They are mindful of the mental health needs of team members and their families, as social isolation, potential ill health, economic hardship, and other uncertainties of life can weigh on people in unique ways.

2. Empower your employees through fun and flexibility

Empowering people in virtual settings shouldn’t be limited to work hours. To develop a spirit of community within remote tribes, many leaders are using video for virtual coffee or lunch breaks together. Saar Yoskovitz, CEO of Augury, an AI-based machine health solutions company, keeps his people connected by holding digital happy hours over video hangouts with his team. On Saint Patrick’s Day, people logged in, cameras on of course, and had their favorite drinks together from the comfort of their own homes. “Many of our people are now working from home while taking care of their kids — so we’re making sure to give them more flexibility, understanding, and fun distractions,” he shared over email.

3. Empower your employees by investing in digital solutions

According to research from PwC’s Digital IQ report, the top 5 percent of digital leaders invest a third more than other companies in the digital infrastructure of their business, leading to 77 percent of them seeing increased employee satisfaction. However, during times of crisis, many firms “pull back on investments and conserve cash but really [they] should be focusing on their digital investments,” says David Clarke, PwC global chief experience officer who spoke to me about the research. Clarke continues, “Companies that weren’t working this way before are going to quickly wake up to the need to make virtual collaboration and cross-functional work the norm.”

4. Empower your employees by seeking answers from them

Historically, leadership has been a practice performed by the few in places of higher status and positional authority. But in reality, when seeking solutions to problems, people with great ideas are everywhere. Don’t assume you need to look up to the ivory tower or out to external consultants for solutions to questions that can be answered from within your tribe. You’ll be surprised to find that the best resources are likely already residing in a nearby Slack channel or just a Zoom call away.

https://www.tomsguide.com/news/iphone-12-design-just-leaked-for-all-four-models

iPhone 12 design just leaked for all four models

iPhone 12 reverse wireless charging (Image credit: Donel Bagrov)

Apple’s design for the iPhone 12 is pretty much done and dusted, according to a leak from within Cupertino.

Front Page Tech’s Jon Prosser tweeted an image showing there’ll be four iPhone 12 models: a 5.4-inch and a 6.1-inch vanilla iPhone 12, plus a 6.1-inch iPhone 12 Pro and a 6.7-inch iPhone 12 Pro Max. The image confirms a lot of information gleaned from previous leaks and rumors, especially those of analyst and Apple oracle Ming-Chi Kuo, and shows what to expect when the iPhone 12 launches this fall.

Jon Prosser

@jon_prosser

Prototyping for iPhone 12 devices is just about finalized!

Final details line up pretty well with what Kuo said last year! 🤯

Expect to see CAD renders of the devices within the next month or two from your favorite leakers! 👀

Now let’s see if Apple can get them out by EOY!


Prosser noted that he added the model names to the leaked image, but said that the rest of the information comes from inside Apple. The two iPhone 12 models come with a dual-camera array and aluminium bodies, while the larger iPhone 12 Pro models have a trio of rear cameras and a stainless steel construction.

As such, the Pro models are rather similar to the current iPhone 11 Pro but also come with a LiDAR sensor, as seen on the iPad Pro 2020.

All four models share a shrunken notch housing the front camera and Face ID tech, along with an Apple A14 CPU and 5G compatibility.

Prosser also said that we should see more accurate fan-made designs of these phones coming in the next few months, as the locked-off Apple designs are leaked and handed off to render artists.

As it stands, we still anticipate an iPhone 12 launch in September this year. Apple may have to delay either the reveal or the retail launch due to coronavirus-related disruptions, as several rumors have suggested. But for now, we hope Apple can find a way around the potential problems that lie between building the phones and sending them out to stores worldwide.

https://venturebeat.com/2020/04/07/miso-robotics-launches-fundraising-campaign-to-develop-autonomous-kitchen-robots/

Miso Robotics launches crowdfunding campaign for autonomous kitchen robots

Miso Robotics ROAR (Image Credit: Miso Robotics)

Today marks the official start of Miso Robotics‘ series C equity crowdfunding campaign, which the startup announced in November 2019 in collaboration with Circle’s SeedInvest and Wavemaker Labs. After securing U.S. Securities and Exchange Commission approval, Miso raised $2.6 million in reservations (with over $860,000 more underway) at the $80 million pre-money valuation set last year, with the goal of raising $30 million over the next few months.

Miso says the fresh capital will enable it to further develop its suite of industrial kitchen automation robots. As declines in business resulting from the COVID-19 pandemic place strains on the hospitality segment, it’s the company’s belief that robots working alongside human workers can cut costs while improving efficiency.

Equity crowdfunding — sometimes referred to as crowd-investing, investment crowdfunding, or crowd equity — enables backers to contribute funding in return for ownership of a small piece of a company. The value of shares goes up or down with the company’s fortunes.

Miso has long claimed that its flagship robot Flippy — and Flippy’s successor, Miso Robot on a Rail (ROAR) — can boost productivity by working with humans as opposed to replacing them. ROAR, which is expected to begin shipping commercially by the end of 2020 for around $30,000, or half the cost of a single Flippy unit, can be installed on a floor or under a standard kitchen hood, allowing it to work two stations and interact with a cold storage hopper. On the software side, it benefits from improvements to Miso AI (Miso’s cloud-based platform) that expand the number of cookable food categories to over a dozen, including chicken tenders, chicken wings, tater tots, french fries and waffle fries, cheese sticks, potato wedges, corn dogs, popcorn shrimp and chicken, and onion rings.

ROAR can prep hundreds of orders an hour thanks to a combination of cameras and safety scanners, obtaining frozen food and cooking it without assistance from a human team member. It alerts nearby workers when orders are ready to be served, and it takes on tasks like scraping grills, draining excess fry oil, and skimming oil between frying as it recognizes and monitors items like baskets and burger patties in real time. Plus, it integrates with point-of-sale systems (via Miso AI) to route orders automatically and optimize which tasks to perform.

Miso says it saw “tremendous success” last year, serving up more than 15,000 burgers and more than 31,000 pounds of chicken tenders and tots. Flippy will soon flip burgers at more than 50 CaliBurger locations globally, and so far it’s been deployed at Dodger Stadium in Los Angeles, Chase Field in Phoenix, and CaliBurger locations in Pasadena.

More recently, Miso said it would deploy new tools to its platform in CaliBurger restaurants as part of a pilot with CaliGroup intended to improve safety and health standards. In the coming weeks, in partnership with payment provider PopID, the company will install a thermal screening device at a CaliBurger location in Pasadena that attaches to doors and measures the body temperature of people attempting to enter the restaurant. Miso says it will also install physical PopID terminals so that guests can transact without touching a panel, using cash, or swiping a credit card — all of which can transfer pathogens.

To date, Pasadena, California-based Miso Robotics — whose team hails from Caltech, Cornell, MIT, Carnegie Mellon, Art Center, and the University of North Carolina at Chapel Hill — has raised around $15 million in earlier funding rounds from investors that include parent company CaliGroup, MAG Ventures, Acacia Research Corporation, Levy, and OpenTable CTO Joseph Essas. More recently, it nabbed an $11 million investment from CaliGroup and launched the equity crowdfunding round targeting $30 million in collaboration with Circle’s SeedInvest and Wavemaker Labs.

https://www.theguardian.com/lifeandstyle/2020/apr/07/sleep-and-exercise-down-back-pain-and-tv-up-in-uk-lockdown

Sleep and exercise down, back pain and TV up in UK lockdown

Survey finds a third eating less healthily, but TV figures suggest daily surge for PE With Joe

 

https://www.psypost.org/2020/04/cardiovascular-exercise-associated-with-reduced-levels-of-sexual-dysfunction-56404

Cardiovascular exercise associated with reduced levels of sexual dysfunction

Sexual dysfunction is common among adults and takes a toll on quality of life for both men and women. At least one form of sexual dysfunction affects as many as 40-50% of women, and among men the probability of dysfunction increases with age, rising from 10% at age 40 to 50% at age 70. A study published in The Journal of Sexual Medicine finds these problems are less common among physically active adults who engage in cardiovascular training on a weekly basis.

Researchers from the University of California aimed to determine whether increased levels of exercise are protective against sexual dysfunction (SD). More precisely, they looked into cardiovascular activities like swimming, biking, and running, and the effect they have on erectile dysfunction in men and on orgasm satisfaction and arousal in women.

To study this, researchers used data from an international online survey recruiting 3,906 men and 2,264 women who were active cyclists, runners, and swimmers. Participants gave detailed information on the time they spent doing these activities, average speed, and distance, as well as the number of days in a week they exercised. This was compared to information on their reported sexual activity.

The results showed a positive effect of physical activity on SD levels for both men and women. For example, men who cycled approximately 10 hours per week had 22% lower odds of erectile dysfunction than those who cycled less than 2 hours per week, and more active women reported less orgasm dissatisfaction and arousal difficulty. The women who took part in this study had lower SD scores than the general population to begin with, which further supports the hypothesis that physical activity plays a role in lower levels of SD, as this sample of women was more physically active than average.

There are various explanations for these findings. Possible mechanisms include better blood supply to the penis and clitoris, greater pelvic floor muscle strength, or the fact that physical activity helps prevent cardiovascular disease, diabetes, hypertension, and obesity, conditions that are themselves associated with increased risk of erectile dysfunction. So in men, exercise may improve sexual function simply by reducing the risk of these medical conditions.

The authors explained the importance of these results: “Men and women at risk for sexual dysfunction regardless of physical activity level may benefit by exercising more rigorously.”

“In addition to encouraging sedentary populations to begin exercising as previous studies suggest, it also might prove useful to encourage active patients to exercise more rigorously to improve their sexual functioning”.

Future studies are encouraged to include less active reference groups, factors like depression and mood, menopausal status and hormone therapy, as well as known psychological benefits of exercise to shed more light on the relationship between physical activity and sexual dysfunction.

The study, “Exercise Improves Self-Reported Sexual Function Among Physically Active Adults”, was authored by Kirkpatrick B. Fergus, Thomas W. Gaither, Nima Baradaran, David V. Glidden, Andrew J. Cohen, and Benjamin N. Breyer.

https://mobilesyrup.com/2020/04/07/vizio-smartcast-tv-30-free-streaming-channels/

Vizio SmartCast TVs to get 30 free streaming channels, including CBC News

Free channels cover news, sports, entertainment, lifestyle and more

By Jonathan Lamont (@Jon_Lamont) – Apr 7, 2020 8:02 AM EDT

As TV viewership grows amid the COVID-19 pandemic, Vizio will bring 30 new free channels to its smart TV lineup. Vizio TVs with the company’s SmartCast software will now have a new ‘Free Channels’ row on the home screen where users can find the new channels. The 24-hour streaming TV channels cover content including news, entertainment, lifestyle and more.

Starting today, Vizio TV owners will be able to find the following free channels on their SmartCast home screen: USA Today, CBC News, USA Today Sportswire, Fubo Sports Network, TMZ, Hollywire, Hungry, Food52, The Design Network, Dust, Magellan TVNow, Docurama, CONtv, and more.

Along with the free TV channels, Vizio offers access to its ‘WatchFree’ service, powered by Pluto. It has over 150 free streaming TV channels with news, movies, sports and more. Plus, Vizio TV owners can also access Netflix, Disney+, Amazon Prime Video, YouTube and other streaming services.

https://www.nextplatform.com/2020/04/07/changing-conditions-for-neural-network-processing/

CHANGING CONDITIONS FOR NEURAL NETWORK PROCESSING

Over the last few years the idea of “conditional computation” has been key to making neural network processing more efficient, even though much of the hardware ecosystem has focused on general purpose approaches that rely on matrix math operations to brute-force the problem instead of selectively operating on only the required pieces. In turn, given that this is the processor world we live in, innovation on the frameworks side has kept time with the devices readily available.

Conditional computation, which was proposed (among other places) in 2016, was at first relegated to model development conversations. Now, however, there is a processor corollary that blends the best of those concepts with a novel packet processing method to displace traditional matrix math units. The team behind this effort combines engineering expertise from building high-end CPU cores at ARM and graphics processors at AMD and Nvidia with software and compiler know-how from Intel/Altera, and it might have something capable of snatching us out of those “yet another AI chip startup” doldrums.

At this point in the chip startup space it takes actual differentiation to make waves in the neural network processor pond. What’s interesting about this effort, led by the startup in question, Tenstorrent, is that what they’ve developed is not only unique, it also answers the calls from some of the most recognizable names in model development (LeCun, Bengio, and Hinton, among others) for a reprieve from the matrix-math-unit-stuffing approach, which is ill-suited to neural network model sizes that are growing exponentially. The calls from this community have been for a number of things, including the all-important notion of conditional computation, freedom from matrix operations and batching, the ability to deal with sparsity, and of course, scalability.

In short, relying on matrix operations is so 2019. Taking big matrices and brute-forcing the math, instead of doing fine-grained conditional computation, runs into walls. The real value is in being able to cut into the matrix and put an “if” statement in, removing the direct tie between the amount of required computation and the model size. Trying to do all things well (or just select things like inference or training) on a general purpose device (and even, in some cases, on specialty devices focused on just inference for specific workloads) leaves much on the table, especially with what Tenstorrent CEO Ljubisa Bajic calls the “brute force” approach.

But how to get around brute force when it’s… well, the most forceful? The answer is surprisingly simple, at least from the outside, and it’s a wonder this hasn’t been done before, especially given all the talk about conditional computation for (increasingly vast) neural networks and what they do, and don’t, need. For nearly every task in deep learning, some aspects are simple and others complex, but current hardware is built without recognition of that diversity. Carving out the easy stuff is not conceptually difficult; it’s just not simple with current architectures based on fixed-size inputs with heavy padding.

Tenstorrent has taken an approach that dynamically eliminates unnecessary computation, breaking the direct link between model size growth and compute/memory bandwidth requirements. Conditional computation lets both inference and training adapt to the exact input presented, for example by adjusting an NLP model’s computation to the exact length of the input text, or by dynamically pruning portions of the model based on input characteristics.

Tenstorrent’s idea is that for easier examples, depending on the input, it is unlikely that all the neurons need to run; instead, one should be able to feed in an image and run a different kind of neural network that is determined at runtime. Just doing this can in theory double speed without hurting the quality of results much, and even better, if the model is redesigned entirely for conditional computation, the gains can go far beyond that. That is where the solution gets quite a bit more complicated.
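
The general technique being described here is often called early exit. The PyTorch sketch below shows the idea in its simplest form: a cheap branch handles easy inputs, and the expensive layers run only when the cheap prediction is not confident. The sizes and threshold are invented for illustration; this is the generic technique, not Tenstorrent’s design.

```python
import torch
import torch.nn as nn

class EarlyExitNet(nn.Module):
    """Spend compute conditionally: easy inputs exit after a cheap branch."""

    def __init__(self, dim=512, n_classes=10, confidence=0.9):
        super().__init__()
        self.cheap = nn.Sequential(
            nn.Linear(dim, 64), nn.ReLU(), nn.Linear(64, n_classes)
        )
        self.expensive = nn.Sequential(
            nn.Linear(dim, 2048), nn.ReLU(),
            nn.Linear(2048, 2048), nn.ReLU(),
            nn.Linear(2048, n_classes),
        )
        self.confidence = confidence

    def forward(self, x):
        early = self.cheap(x).softmax(-1)
        if early.max() >= self.confidence:    # easy input: exit early
            return early
        return self.expensive(x).softmax(-1)  # hard input: pay the full cost

net = EarlyExitNet()
probs = net(torch.randn(1, 512))  # compute spent now depends on the input
```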

Each of the company’s “Grayskull” processors is essentially a packet processor that takes a neural network and factors the groups of numbers that make it up into “packets” from the beginning.

Imagine a neural network layer with two matrices that need to be multiplied. With this approach, from the very beginning these are broken into “Ethernet-sized chunks,” as Bajic describes them. Everything is done on the basis of packets from that point forward, with the packets scheduled onto a grid of these processors connected via a custom network, partly a network-on-chip, partly an off-chip interconnect. From the beginning to the end result, everything happens on packets, without the usual memory and interconnect back and forth. The network is, in this case, the computer.

The blue square at the top of the figure is basically a tensor of numbers, such as the input to a neural network layer. That gets broken into sub-tensors (those “Ethernet-sized chunks” Bajic referred to), which are then framed into a collection of packets. The compiler then schedules the movement of those packets between cores, on one or multiple chips, and DRAM. When the configuration is fed into it, the compiler can target one chip or several to create a right-sized, scalable machine.
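
To illustrate the packetization step in the abstract, here is a small NumPy sketch that tiles a tensor into fixed-size sub-tensors and frames each with a routing header. The tile size and header fields are assumptions for illustration, not Tenstorrent’s actual packet format.

```python
import numpy as np

def packetize(tensor: np.ndarray, tile: int = 32):
    """Break a 2D tensor into fixed-size chunks, each framed with a header."""
    rows, cols = tensor.shape
    packets = []
    for r in range(0, rows, tile):
        for c in range(0, cols, tile):
            packets.append({
                "header": {"row": r, "col": c, "shape": tensor.shape},
                "payload": tensor[r:r + tile, c:c + tile].copy(),
            })
    return packets  # a scheduler could now route these across cores/chips

def reassemble(packets):
    """Invert packetize() using each packet's header."""
    out = np.zeros(packets[0]["header"]["shape"])
    for p in packets:
        r, c = p["header"]["row"], p["header"]["col"]
        h, w = p["payload"].shape
        out[r:r + h, c:c + w] = p["payload"]
    return out

layer_input = np.random.rand(128, 96)
pkts = packetize(layer_input)  # 4 x 3 = 12 packets
assert np.allclose(reassemble(pkts), layer_input)
```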

All of this enables changing the way inference runs at runtime based on what’s been entered, a marked difference from the normal compile-first approaches. In the compile-first world, we have to live with whatever the compiler produces: the model can accept anything (including varying input sizes, an increasingly important concern), but at the cost of many wasted cycles on padding. Bajic uses a BERT example, noting that input can vary from 3 to 33 words. Currently, everyone has to run the superset size, and the brute force of one-size-fits-all leaves quite a bit of performance and efficiency on the table.
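
A quick back-of-the-envelope calculation shows how much that padding can cost, assuming, purely for illustration, that input lengths are uniformly distributed:

```python
# Illustrative arithmetic for Bajic's BERT example: inputs of 3-33 words
# are all padded to the 33-word superset. Uniform lengths are assumed.
lengths = range(3, 34)                # possible input lengths, in words
max_len = 33
useful = sum(lengths) / len(lengths)  # average count of real tokens
print(f"average useful tokens: {useful:.1f} of {max_len}")
print(f"work wasted on padding: {1 - useful / max_len:.0%}")
# -> roughly 45% of the work goes to padding, before even counting the
#    quadratic cost of attention, which makes the waste larger still.
```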

From left, “Grayskull” chip (12 nm, Global Foundries), characterization, and “production” board running in-house.

The Tenstorrent architecture is based on “Tensix” cores, each comprising a high utilization packet processor and a powerful, programmable SIMD and dense math computational block, along with five efficient and flexible single-issue RISC cores. The array of Tensix cores is stitched together with a custom, double 2D torus network on a chip (NoC), at the core of the company’s emphasis on minimal software burden for scheduling coarse-grain data transfers. Bajic says flexible parallelization and complete programmability enable runtime adaptation and workload balancing for the touted power, runtime, and cost benefits.

“Grayskull” has 120 Tensix cores with 120 MB of local SRAM, eight channels of LPDDR4 supporting up to 16 GB of external DRAM, and 16 lanes of PCI-E Gen 4. At the chip thermal design power set-point required for a 75-watt bus-powered PCI-E card, Grayskull achieves 368 TOPS. The training device is 300 watts. As clarification, each chip has 120 cores, each with a single megabyte of SRAM; some workloads can fit comfortably in that, while others can spill out to the 16 GB of LPDDR4 DRAM.

 

“In addition to doing machine learning efficiently, we want to build a system that can break the links between model size growth and memory and compute limits, which requires something that is full stack, that’s not just about the model but the compiler for that model. With current hardware, even if you manage to reduce the amount of computation you still need huge machines, so what is needed is an architecture that can enable more scaled-out machines than what we have today. Anyone serious about this needs to scale like a TPU pod or even better. The quick changes in models people are running now versus a few years ago demonstrate that zooming in on a particular workload is a risky strategy, just ask anyone who built a ResNet-oriented machine,” Bajic says.

“The past several years in parallel computer architecture were all about increasing TOPS, TOPS per watt and TOPS per cost, and the ability to utilize provisioned TOPS well. As machine learning model complexity continues to explode, and the ability to improve TOPS oriented metrics rapidly diminishes, the future of the growing computational demand naturally leads to stepping away from brute force computation and enabling solutions with more scale than ever before.”

There is still plenty to watch in the AI chip startup world, but differences are often minute and nuanced. This too is nuanced, but the concept is valid and has been demanded of hardware makers for years. What is also notable here, in addition to the team (Bajic himself worked on early HyperTransport, on early Tegra efforts for autonomous vehicles on the memory subsystem side at Nvidia, and more), is that they have brought two chips to bear on $34 million in current funding (from Eclipse and Real Ventures, with private backing from what Bajic says is a famous name in the industry and in hardware startups), with “quite a bit” left over to keep rolling toward their first production chips in fall 2020.

Definitely one to watch. Bajic says he knows the hyperscale companies are ready to try anything that might be promising, and he knows the AI chip startup space is crowded, but he thinks this approach is different enough, given what lies ahead for model size growth and complexity, to be in demand when general purpose or matrix-based processors aren’t up to the task.

https://www.psypost.org/2020/04/a-single-high-dose-of-psilocybin-alters-brain-function-up-to-one-month-later-56399

A single high dose of psilocybin alters brain function up to one month later

New research provides evidence that the active ingredient in so-called magic mushrooms can affect brain processes related to emotional functioning long after the substance has left one’s body. The findings, published in Scientific Reports, shed new light on the long-term effects of psilocybin.

Rather than examining the brain while it’s under the influence of psilocybin, the researchers from Johns Hopkins University School of Medicine were interested in the enduring impact of the substance.

“Nearly all psychedelic imaging studies have been conducted during acute effects of psychedelic drugs. While acute effects of psychedelics on the brain are of course incredibly interesting, the enduring effects of psychedelic drugs on brain function have great untapped value in helping us to understand more about the brain, affect, and the treatment of psychiatric disorders,” said Frederick S. Barrett (@FredBarrettPhD), an assistant professor and the corresponding author of the study.

In the study, 12 volunteers received a single administration of a high dose of psilocybin. One day before, one week after, and one month after psilocybin administration, the volunteers completed three different tasks to assess the processing of emotional information (specifically, facial expressions) while the researchers used magnetic resonance imaging to record their brain activity. During these three sessions, the volunteers also completed various surveys about their emotional functioning.

The researchers found that self-reported emotional distress was reduced one week after psilocybin administration, but returned to baseline levels at one month after psilocybin administration. Barrett and his colleagues also observed decreases in amygdala responses to emotional information one week after psilocybin administration, but this also returned to normal at one month post-psilocybin.

In addition, the researchers found increases in resting-state functional connectivity, which measures how blood oxygen level-dependent signals are coordinated across the brain, at both one week and one month after psilocybin administration.
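
For readers unfamiliar with the measure, resting-state functional connectivity is commonly computed as the correlation between BOLD signal time series recorded from different brain regions. The NumPy sketch below shows that calculation on synthetic data; it illustrates the measure only, not the study’s actual analysis pipeline.

```python
import numpy as np

rng = np.random.default_rng(0)
n_regions, n_timepoints = 4, 200
bold = rng.standard_normal((n_regions, n_timepoints))  # fake BOLD series
bold[1] += 0.8 * bold[0]  # make regions 0 and 1 co-fluctuate

# Functional connectivity: region-by-region correlation of time series.
connectivity = np.corrcoef(bold)
print(connectivity.round(2))  # higher values = stronger coupling
```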

“A single high dose of psilocybin, administered to properly screened individuals in a carefully controlled setting, can have lasting positive effects on emotional functioning in healthy individuals. These effects were reflected in transient changes in the function of brain regions that support emotional processing,” Barrett told PsyPost.

Because of the small sample size and lack of a control group, however, the findings should be considered preliminary.

“This study needs to be replicated in a larger sample with proper experimental controls, and we need to determine whether psilocybin exerts the observed effects by directly acting on emotional brain circuits, or by acting on brain circuits that control attention and cognition that may have down-stream effects on emotional brain circuits,” Barrett explained.

The study, “Emotions and brain function are altered up to one month after a single high dose of psilocybin“, was authored by Frederick S. Barrett, Manoj K. Doss, Nathan D. Sepeda, James J. Pekar, and Roland R. Griffiths.