https://medicalxpress.com/news/2020-04-neural-circuits-vision.html

Neural circuits mapped: Now we understand vision better


Researchers from Aarhus University have discovered the function of a special group of nerve cells which are found in the eye and which sense visual movement. The findings give us a completely new understanding of how conscious sensory impressions occur in the brain. This knowledge is necessary to be able to develop targeted and specific forms of treatment in the future for diseases which impact the nervous system and its sensory apparatus, such as dementia and schizophrenia.

According to the researchers, the study contributes new and specific knowledge in a number of areas. It has been known for decades that the eye contains nerve cells that sense and signal the direction of movement when an object moves in our field of vision. However, how these nerve cells contribute to the nerve cells found in the cerebral cortex remained a mystery.

The brain is without doubt the most complicated organ in our body, and knowledge about how the brain carries out all of its functions is still inadequate. One of the brain’s most important functions is its ability to track and perceive sensory impressions from our surroundings.

In a recently published study, researchers from Aarhus University have mapped the function of a special group of nerve cells that are found in the eye. The results—which have been presented in Nature Communications—may in the longer term enable researchers to understand and treat diseases where the sensory perceptions of the brain are dysfunctional, as is the case for people with dementia or schizophrenia who experience hallucinations.

“We’ve described a specialised neural circuit which sends information about visual movement from the nerve cells in the eye and up to the nerve cells in the cerebral cortex. This is important so we can begin to understand the mechanisms for how conscious sensory impressions arise in the brain,” explains Rune Nguyen Rasmussen, who is an author of the study.

Perceiving the world as still images

“Without the ability to perceive visual movement, we’d perceive the world as still images, leading to major behavioural consequences such as those seen in people who have the disease akinetopsia and have lost the ability to sense the movement of objects.”

By combining a range of advanced experimental methods and using a mouse as an animal model in which special nerve cells in the eye are affected, the researchers were able to answer the question of how the eye’s nerve cells contribute to the cerebral cortex’s nerve cells.

In the study, the researchers demonstrated that a special group of nerve cells in the eye ensures that the nerve cells in the visual cerebral cortex are capable of sensing and responding to visual movement which moves at high speed. According to the researchers, this is interesting because it indicates that what has previously been believed to arise in the cerebral cortex actually already arises in the earliest stage of vision, i.e., in the eye.

Vision mitigates hazards

Of all of our senses, vision is particularly important because it enables us to discover and avoid hazards. For example, vision ensures that we can quickly and precisely determine where cars and bicycles are approaching from and how quickly they are moving when we cross a busy road during the rush hour.

“One important remaining question after our study is how and when this neural circuit is involved in different aspects of behaviour. So the next step in our research will be to begin a research project with the aim of understanding whether this neural circuit is involved in sensing visual movement when mice move around and need to navigate in their surroundings,” says Nguyen Rasmussen.


More information: Rune Rasmussen et al, A segregated cortical stream for retinal direction selectivity, Nature Communications (2020). DOI: 10.1038/s41467-020-14643-z

Provided by Aarhus University

https://appleinsider.com/articles/20/04/16/apples-smart-glasses-could-use-holography-to-keep-weight-down-and-image-quality-up

Apple researching smart glasses with holography to keep weight down, image quality up

Apple is researching the use of reflected holograms to keep the size of the augmented reality “Apple Glasses” to a minimum, yet keep the image quality for the user as high as possible.

The Magic Leap One Lightwear AR goggles, an example of an AR headset

Apple has been working on a set of augmented reality wearable peripherals for some time. A constant question about any future “Apple Glasses” peripheral has been over how Apple can maintain its preference for sleek, high-performance systems instead of the bulk in some present headsets. A new patent provides part of the answer.

“Optical System with Dispersion Compensation” is a patent application describing a method that pairs a small “holographic optical element” with a reflective surface it can project onto. So the holographic projector could sit in the rim of Apple Glasses, and the fully or semi-reflective surface could be the lenses in front of the user’s eyes.

“Head-mounted displays typically involve near-eye optics to create ‘virtual’ images,” says the patent. “In the past, HMDs have dealt with a variety of technical limitations that reduced image quality and increased weight and size. Additionally, conventional mirrors and grating structures have inherent limitations.”

The patent says that the angle of reflection between the optical element and the conventional mirror has a “suboptimal” impact on performance. This gets worse if there are many different projectors with “multiple reflective axes that covary unacceptably with incidence angle and/or wavelength.”

Apple’s answer is to use a type of reflective surface called a “skew mirror.” This is a material that reflects light, but may reflect only certain frequencies or colors, and sends them out at an angle that differs from a conventional mirror’s. The glasses could use different skew mirrors to route colors from different projectors and have them all directed into the wearer’s eyes.

The advantage of a smaller system is clear, but if it didn’t come with problems, everyone would be doing it. The chief problem with this kind of projection and reflection is known as dispersion.

“Dispersion may cause chromatic aberrations in optical devices,” says the patent. “These chromatic aberrations can have a degrading effect on an image of an optical reflective device.”
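
To see why this matters, consider a simple diffraction grating, where the outgoing angle depends on wavelength and colors therefore separate. Below is a minimal sketch of that effect using the standard grating equation; the grating pitch and wavelengths are illustrative assumptions, not values from Apple’s patent.

```python
import numpy as np

d = 1.0e-6  # grating pitch in meters (assumed: 1 micron)
m = 1       # first diffraction order
wavelengths = {"blue": 450e-9, "green": 550e-9, "red": 650e-9}

# grating equation at normal incidence: sin(theta) = m * wavelength / d
for name, lam in wavelengths.items():
    theta = np.degrees(np.arcsin(m * lam / d))
    print(f"{name}: {theta:.1f} deg")
# blue: 26.7 deg, green: 33.4 deg, red: 40.5 deg -- a ~14 degree color spread
```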

What the wearer could see without a system like Apple’s is perhaps similar to when binocular lenses are not quite aligned and the edges of the image look hazy or have a shifting rainbow-like color effect. Unlike binoculars, these types of augmented reality images could also appear distractingly low-resolution.

Detail from the patent showing how an optical element can be reflected into the wearer’s eyes

The lenses in a device like Apple Glasses need to let the wearer see the real world as well as display the computer-generated images. What Apple is proposing is that the material also be able to reflect multiple images into the wearer’s eyes, without the wearer noticing that the image is created from more than one source.

Apple proposes that the optical element projecting these images be holographic. Much of this patent is concerned with how one or more holographic projections could be recorded.

“Holographic optical elements may be used in head mounted devices or other systems and may be constructed from a recording medium,” it says. “The recording medium may sometimes be referred to herein as a grating medium. The grating medium may be disposed between waveguide substrates. An input coupler such as a prism may couple light into the waveguide.”

To replay or relay these holographically recorded images accurately, Apple’s proposal describes a system where the lenses in Apple Glasses “have reflective axis angles that vary by less than 1.0 degree.”

The invention is credited to five inventors: Jonathan B. Pfeiffer, Adam C. Urness, Friso Schlottau, Mark R. Ayres, and Vikrant Bhakta. Between them, they have dozens of related previous patents, including Bhakta’s “Projection device with field splitting element,” and Ayres’s “Process for holographic multiplexing.”

This patent follows previous ones that are also concerned with optimizing images to be displayed in AR glasses.

https://www.notebookcheck.net/Raspberry-Pi-Popular-single-board-computer-harnessed-for-open-source-ventilator.461802.0.html

Raspberry Pi: Popular single-board computer harnessed for open-source ventilator

The Mascobot is built around a Raspberry Pi. (Image source: Marco Mascorro)
Posted to GitHub, the Mascobot utilises a Raspberry Pi as part of an open-source ventilator. The project uses an Arduino and off-the-shelf parts, too. Testing is already underway, with human trials planned for the beginning of May.

Brought to our attention by the BBC and CNX Software, the Mascobot is not your ordinary Raspberry Pi project. Designed in response to the COVID-19 pandemic, the Mascobot is the work of Marco Mascorro, a robotics engineer from California. Utilising a Raspberry Pi, an Arduino and off-the-shelf parts, Mascorro has published the files for the Mascobot on GitHub.

However, it is unlikely to be ready to treat patients during the current pandemic. Nonetheless, Colombian authorities have fast-tracked tests, with the Pontifical Xavierian University and Los Andes University hoping to start human trials by the beginning of May.

Mascorro has published a video on YouTube explaining the ventilator in detail. The video also offers a look at the Mascobot in action, along with a look at its software.

https://www.sciencedaily.com/releases/2020/04/200415133654.htm

When damaged, the adult brain repairs itself by going back to the beginning

Date: April 15, 2020
Source: University of California – San Diego
Summary: When adult brain cells are injured, they revert to an embryonic state, say researchers. In their newly adopted immature state, the cells become capable of re-growing new connections that, under the right conditions, can help to restore lost function.

When adult brain cells are injured, they revert to an embryonic state, according to new findings published in the April 15, 2020 issue of Nature by researchers at University of California San Diego School of Medicine, with colleagues elsewhere. The scientists report that in their newly adopted immature state, the cells become capable of re-growing new connections that, under the right conditions, can help to restore lost function.

Repairing damage to the brain and spinal cord may be medical science’s most daunting challenge. Until relatively recently, it seemed an impossible task. The new study lays out a “transcriptional roadmap of regeneration in the adult brain.”

“Using the incredible tools of modern neuroscience, molecular genetics, virology and computational power, we were able for the first time to identify how the entire set of genes in an adult brain cell resets itself in order to regenerate. This gives us fundamental insight into how, at a transcriptional level, regeneration happens,” said senior author Mark Tuszynski, MD, PhD, professor of neuroscience and director of the Translational Neuroscience Institute at UC San Diego School of Medicine.

Using a mouse model, Tuszynski and colleagues discovered that after injury, mature neurons in adult brains revert back to an embryonic state. “Who would have thought,” said Tuszynski. “Only 20 years ago, we were thinking of the adult brain as static, terminally differentiated, fully established and immutable.”

But work by Fred “Rusty” Gage, PhD, president and a professor at the Salk Institute for Biological Studies and an adjunct professor at UC San Diego, and others found that new brain cells are continually produced in the hippocampus and subventricular zone, replenishing these brain regions throughout life.

“Our work further radicalizes this concept,” Tuszynski said. “The brain’s ability to repair or replace itself is not limited to just two areas. Instead, when an adult brain cell of the cortex is injured, it reverts (at a transcriptional level) to an embryonic cortical neuron. And in this reverted, far less mature state, it can now regrow axons if it is provided an environment to grow into. In my view, this is the most notable feature of the study and is downright shocking.”

To provide an “encouraging environment for regrowth,” Tuszynski and colleagues investigated how damaged neurons respond after a spinal cord injury. In recent years, researchers have significantly advanced the possibility of using grafted neural stem cells to spur spinal cord injury repairs and restore lost function, essentially by inducing neurons to extend axons through and across an injury site, reconnecting severed nerves.

Last year, for example, a multi-disciplinary team led by Kobi Koffler, PhD, assistant professor of neuroscience, Tuszynski, and Shaochen Chen, PhD, professor of nanoengineering and a faculty member in the Institute of Engineering in Medicine at UC San Diego, described using 3D printed implants to promote nerve cell growth in spinal cord injuries in rats, restoring connections and lost functions.

The latest study produced a second surprise: In promoting neuronal growth and repair, one of the essential genetic pathways involves the gene Huntingtin (HTT), which, when mutated, causes Huntington’s disease, a devastating disorder characterized by the progressive breakdown of nerve cells in the brain.

Tuszynski’s team found that the “regenerative transcriptome” — the collection of messenger RNA molecules used by corticospinal neurons — is sustained by the HTT gene. In mice genetically engineered to lack the HTT gene, spinal cord injuries showed significantly less neuronal sprouting and regeneration.

“While a lot of work has been done on trying to understand why Huntingtin mutations cause disease, far less is understood about the normal role of Huntingtin,” Tuszynski said. “Our work shows that Huntingtin is essential for promoting repair of brain neurons. Thus, mutations in this gene would be predicted to result in a loss of the adult neuron’s ability to repair itself. This, in turn, might result in the slow neuronal degeneration that results in Huntington’s disease.”


Story Source:

Materials provided by University of California – San Diego. Original written by Scott LaFee. Note: Content may be edited for style and length.


Journal Reference:

  1. Gunnar H. D. Poplawski, Riki Kawaguchi, Erna Van Niekerk, Paul Lu, Neil Mehta, Philip Canete, Richard Lie, Ioannis Dragatsis, Jessica M. Meves, Binhai Zheng, Giovanni Coppola, Mark H. Tuszynski. Injured adult neurons regress to an embryonic transcriptional growth state. Nature, 2020; DOI: 10.1038/s41586-020-2200-5


https://www.notebookcheck.net/BreadBee-A-tiny-alternative-to-the-Raspberry-Pi-Zero-that-supports-Linux-and-costs-just-US-10.461781.0.html

BreadBee: A tiny alternative to the Raspberry Pi Zero that supports Linux and costs just US$10

BreadBee: A tiny alternative to the Raspberry Pi Zero that supports Linux. (Image source: Daniel Palmer)
The BreadBee is a very compact single-board computer that runs Linux. Developed by Daniel Palmer, the BreadBee features 64 MB of RAM, an MStar MSC313E Cortex-A7 SoC, two multi-pin headers and an RJ45 Ethernet port.

The BreadBee is an ultra-compact board for developers. Measuring just 32 x 30 mm, the BreadBee is considerably smaller than other SBCs like the Raspberry Pi Zero. The BreadBee is rather tall, though, as developer Daniel Palmer has included an Ethernet port. The RJ45 port can transmit data at up to 100 MBit/s. The BreadBee does not support Wi-Fi, but a future model may have an Ampak Wi-Fi module in place of the Ethernet port.

The BreadBee is based on an MStar MSC313E processor, which integrates an ARM Cortex-A7 core with NEON and FPU that runs at 1.0 GHz. There is also 64 MB of DDR2 RAM and 16 MB of SPI NOR flash memory.

Additionally, Palmer has included two multi-pin headers. Specifically, there is a 24-pin dual-row header with a 2.54 mm pitch on one side, which supports SPI, I2C, UART and GPIO. On the reverse, Palmer has included a 21-pin header with a 1.27 mm pitch that supports SD/SDIO, USB 2.0 and GPIO.
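
As a taste of what driving those headers could look like, here is a minimal sketch that toggles a GPIO line from embedded Linux using the legacy sysfs interface. The line number and its mapping to a BreadBee header pin are assumptions, as is sysfs GPIO support in the board’s kernel build.

```python
import time

GPIO = "12"  # assumed line number; check the BreadBee pinout before use

# export the line and configure it as an output
with open("/sys/class/gpio/export", "w") as f:
    f.write(GPIO)
with open(f"/sys/class/gpio/gpio{GPIO}/direction", "w") as f:
    f.write("out")

# blink five times
for _ in range(5):
    for value in ("1", "0"):
        with open(f"/sys/class/gpio/gpio{GPIO}/value", "w") as f:
            f.write(value)
        time.sleep(0.5)
```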

Furthermore, the BreadBee has a micro USB port for power, which operates at 5 V. The board runs embedded Linux, too. Palmer is hoping to crowdfund the BreadBee on Crowd Supply, with units costing US$10. There is no fixed date on when the campaign will go live, but we shall keep you updated when it does.

(Image source: Daniel Palmer)

https://www.phonearena.com/news/some-believe-that-siri-sees-the-end-of-the-world-ahead_id123912

Why Siri is scaring the hell out of some iPhone users

For many people, what they see going on around them is a sure sign that the world is coming to an end. We’re in the middle of a global pandemic, we’ve recently seen tornadoes and vicious wind storms zip through several states, and Apple and Google are working together on a contact tracing tool. And if that isn’t enough proof that something strange is going on, Siri also sees the world coming to an end very soon according to some iPhone users.

Is Siri calling for the world to end soon?

Fast Company reports that some people have been asking Apple’s flawed digital assistant questions that they really don’t need answers to. For example, to kill time several iPhone users asked, “Hey Siri, how long until 2020 ends?” The proper answer today would be 260 days. But some are receiving a response from Siri that is scaring the hell out of them. The virtual digital assistant is telling some users that there are only minutes to hours left in the year which is being interpreted as the end of the world.

But unlike most of the bad answers that Siri might give you, there is actually a valid reason for this response. Since much of the world uses a 24-hour clock (like military time, where 2 pm is known as 1400 hours), when Siri was asked how long until 2020 ends, it calculated the number of hours between the current time and 8:20 pm (20:20 on the 24-hour clock).
We just asked Siri the question, “How long until 2020 ends?” and in typical Siri fashion, we were told that the last day of 2020 will be Thursday, December 31st, 2020. That really doesn’t answer the question. When we asked Google Assistant the same question, it also supplied us with the wrong answer. Google’s digital helper incorrectly said 261 days were left in the year, as its calculation was based on Fast Company’s article, which was published on April 15th. Counting today, April 16th, there are 260 days left in the year.
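
Both readings of the question are easy to reproduce directly. The sketch below computes them in Python; the reference time (noon on April 16, 2020) is an assumption for illustration.

```python
from datetime import date, datetime

now = datetime(2020, 4, 16, 12, 0)  # assumed: noon on the day of writing

# Reading 1: "2020" as the year -> days left in 2020, counting today
days_left = (date(2020, 12, 31) - now.date()).days + 1
print(days_left)  # 260, matching the figure above

# Reading 2: "2020" as 20:20 on a 24-hour clock -> hours until 8:20 pm
target = now.replace(hour=20, minute=20)
hours_left = (target - now).total_seconds() / 3600
print(f"{hours_left:.1f} hours")  # 8.3 -- hence "only hours left in the year"
```
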
As we said earlier this year, Apple definitely needs to work on Siri. Part of the problem is that the digital assistant often does not understand the question being asked of it. Earlier this month, Apple might have taken a huge step toward improving Siri by reportedly purchasing an Irish company called Voysis. The latter offers a platform that allows digital assistants to better understand human language. Voysis fits the MO of a company that Apple would be interested in because it is small, under the radar, and could deliver improvements to Siri very quickly.

A purchase of Voysis would join other similar transactions made by Apple in the past, including the 2010 acquisition of Siri Inc. (which resulted in Siri, of course), the 2012 purchase of biometric firm AuthenTec (which led to the development of Touch ID), and the 2014 purchase of Beats (from which Apple Music launched). Of course, the Beats purchase was a little out of the box for Apple since it cost the company $3 billion and remains its largest purchase of all time.
So now that we know Siri isn’t warning us about an abrupt end to the world (even though it still might feel like it), you can breathe a sigh of relief. If you own an iOS device and are generally unsatisfied with Siri, you can install the Google Assistant app from the App Store. Using the Shortcuts feature, you can activate Google Assistant by voice, although you might be embarrassed if someone hears you doing this. If you’ve correctly set up Shortcuts, you can say “Hey Siri, Okay Google” to activate Google’s superior digital helper.

https://thenextweb.com/syndication/2020/04/16/how-deep-learning-algorithms-can-be-used-to-measure-social-distancing/

How deep learning algorithms can be used to measure social distancing

How deep learning algorithms can be used to measure social distancing

Many countries have introduced social distancing measures to slow the spread of the COVID-19 pandemic. To understand if these recommendations are effective, we need to assess how far they are being followed.

To assist with this, our team has developed an urban data dashboard to help understand the impact of social distancing measures on people and vehicle movement within a metropolitan city in real time.

The Newcastle University Urban Observatory was established to better understand the dynamics of movement in a city. It makes use of thousands of sensors and data sharing agreements to monitor movement around the city, from traffic and pedestrian flow to congestion, car park occupancy and bus GPS trackers. It also monitors energy consumption, air quality, climate and many other variables.

Changing movement

We have analyzed over 1.8 billion individual pieces of observational data, as well as other data sources, with deep learning algorithms. These inform and update the dashboard in real time.

People Movement Monitoring Dashboard. The Newcastle Urban Observatory

In the graphic above, real-time data from pedestrian sensors is shown as solid lines. The shaded areas are the “normal” pre-lockdown pedestrian flows. Sensors usually monitor pedestrian flows in two directions every hour, and these counts are then compared against the same day from the previous year. Peaks in the graph represent an increased volume of people movement during rush hour. Since the lockdown, however, only very small peaks have been observed overall.

Our research has found that pedestrian movement has fallen by 95% when compared to the annual average. This shows that people have been following government guidelines closely. However, the most profound decrease in footfall only occurred following the strict regulations introduced late on March 23, suggesting that the stronger message had the desired effect.
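The headline figure itself is simple to derive once the sensor counts are in hand. A minimal sketch, assuming hourly pedestrian counts and a matching baseline series (both hypothetical arrays, not Urban Observatory data):

```python
import numpy as np

# hypothetical hourly pedestrian counts for one sensor over one day
current = np.array([12, 8, 5, 4, 6, 20, 35, 40, 30, 22, 18, 15,
                    14, 16, 18, 20, 25, 30, 22, 15, 10, 8, 6, 5])
baseline = current * 20  # stand-in for the same day in the previous year

reduction = 100 * (1 - current.sum() / baseline.sum())
print(f"pedestrian movement down {reduction:.0f}% vs baseline")  # 95%
```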

People Movement Indicator. The Newcastle Urban Observatory

In terms of vehicle movement, traffic fell at a much slower pace, reaching about 50% of the annual average early in the first week of lockdown. This is possibly due to people shifting to using cars rather than public transport. Overall, we estimate there have been 612,000 lost journeys on public transport in Tyne and Wear since March 1.

Traffic Movement Indicator. The Newcastle Urban Observatory

Public Health England has also suggested that people stay a minimum of two meters apart when out and about. This advice has been widely advertised, but it is difficult to assess whether it is being followed. Using computer vision and image processing, our team at the Urban Observatory has developed algorithms that can automatically measure social distancing in public areas.

We produced models which can measure the distance between pedestrians in public places. Using a traffic light indicator system, the algorithm is able to anonymously identify and label people who maintain safe distances, while flagging certain instances in red where social distancing measures are violated.
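
A minimal sketch of that traffic-light labeling step, assuming detections have already been mapped to ground-plane coordinates in meters (the detection and camera-calibration stages are omitted, and none of this is the Observatory’s actual code):

```python
import numpy as np

def label_pedestrians(positions_m, threshold_m=2.0):
    """Label a pedestrian 'red' if anyone else is within threshold_m meters."""
    diffs = positions_m[:, None, :] - positions_m[None, :, :]
    dists = np.linalg.norm(diffs, axis=-1)   # pairwise distance matrix
    np.fill_diagonal(dists, np.inf)          # ignore self-distances
    return np.where(dists.min(axis=1) < threshold_m, "red", "green")

people = np.array([[0.0, 0.0], [1.5, 0.0], [10.0, 5.0]])
print(label_pedestrians(people))  # ['red' 'red' 'green']
```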

Using this information, it is possible to identify bottlenecks where social distancing cannot be maintained, and to track how citizens adapt as restrictions are imposed or lifted.

This type of data not only shows how physical distancing is changing in real time, but will also provide detailed insight into long-term behavioral changes.

Tools for the future

A World Health Organization expert has claimed that the UK was ten days late in implementing strict social distancing measures. This was perhaps due to a lack of insight into widespread public behavior. Observational infrastructure developed through technology may lie at the heart of future crisis management responses.

The Newcastle Urban Observatory is part of a global movement to develop what are known as smart cities: where embedded sensors provide real-time data on city systems to optimize performance and enable evidence-based decision making.

Smart cities use information and communication technologies to streamline urban operations on a large scale. Technological ecosystems collect traffic, noise, air quality, energy consumption and movement data in order to make improved and sustainable decisions by authorities and enterprises. Citizens can engage with the smart city in a number of ways.

Data authority and governance will be an important point of discussion in future smart city development. The Urban Observatory is actively researching the governance of smart cities, and applies an ethos of openness and transparency by publishing all the data in real time.

Our analysis of the current situation presents an opportunity to be better prepared for the next crisis, or to quantify the impacts of large-scale social change.

This article is republished from The Conversation by Ronnie Das, Lecturer in Digital & Data Analytics, Newcastle University and Philip James, Professor of Urban Data, Newcastle University under a Creative Commons license. Read the original article.

https://www.androidauthority.com/worldgaze-1107546/

This wild video shows what could be the future of mobile voice assistants

Imagine a world where you can look at something, ask a question about it, and immediately get an answer. That’s exactly what researchers at the Human-Computer Interaction Institute at Carnegie Mellon University are developing. The project is called WorldGaze, and it’s pretty amazing.

According to the team, WorldGaze “enhances mobile voice interaction with real-world gaze location.” Holding a WorldGaze-equipped smartphone out in front of you lets you use various voice assistants to engage with your surroundings without providing any additional context.

The software works by simultaneously activating the front and rear cameras on a smartphone, taking in a combined 200-degree field of view. This allows WorldGaze to home in on where you are looking.

WorldGaze then passes that contextual info to Siri, Alexa, or Google Assistant to make voice-activated commands much more powerful. That means you can find out what time a business closes, how much something costs, or even control your smart home gadgets just by looking at something and initiating a “Hey Siri/Google/Alexa” command.
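
Conceptually, the pipeline reduces to estimating a gaze direction from the front camera and intersecting it with object detections from the rear camera. The sketch below illustrates only that idea; the function shape, the detection format, and the simple yaw-to-pixel mapping are all assumptions, not the CMU team’s implementation.

```python
def select_gazed_object(head_yaw_deg, rear_fov_deg, frame_width_px, detections):
    # assume yaw 0 means looking straight ahead, i.e. the rear-frame center
    px_per_deg = frame_width_px / rear_fov_deg
    gaze_x = frame_width_px / 2 + head_yaw_deg * px_per_deg

    def center_x(det):  # det["box"] = (x_min, y_min, x_max, y_max)
        x_min, _, x_max, _ = det["box"]
        return (x_min + x_max) / 2

    # pick the detection whose center is closest to the gaze column
    return min(detections, key=lambda d: abs(center_x(d) - gaze_x))

shops = [{"name": "cafe", "box": (0, 0, 400, 600)},
         {"name": "bank", "box": (900, 0, 1300, 600)}]
print(select_gazed_object(10.0, 120.0, 1920, shops)["name"])  # bank
```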

What’s even more interesting about WorldGaze is that it’s a software-only solution, meaning it doesn’t require any dedicated hardware. The team says WorldGaze could launch as a standalone application, but it’s more likely to be integrated as a background service.

Unfortunately, WorldGaze isn’t a product you can use just yet. So far, it’s just a proof of concept, and the team has tested it in streetscapes, retail environments, and smart home settings.

Plus, just the thought of holding a phone in front of me wherever I go makes my arm tired. Thankfully, the team will explore implementing WorldGaze into smart glasses in the future.

http://linuxgizmos.com/raspberry-pi-like-apollo-lake-edge-ai-sbc-launches-with-community-site/

Raspberry Pi like Apollo Lake edge-AI SBC launches with community site

Apr 14, 2020 — by Eric Brown

Adlink’s “Vizi-AI” dev kit for machine vision AI runs Linux, Intel OpenVino, and Adlink Edge middleware on an Apollo Lake based Adlink “LEC-AL” SMARC module running on an Adlink carrier equipped with an Intel Myriad-X VPU.

Adlink, Arrow, and Intel have teamed up on a development kit for entry-level AI industrial machine vision. The Vizi-AI Industrial Machine Vision AI Developer Kit is available on an Arrow shopping page for $199, but is currently sold out, which is sometimes another way of saying it’s on pre-order.

Developers can use the Intel Apollo Lake and Intel Myriad X equipped Vizi-AI board to connect image capture devices and then “deploy and improve machine learning models to harness insight from vision data to optimize operational decision-making,” says Adlink. The idea is that you can then move to a more robust production platform using the same software.

 
Vizi-AI board with LEC-AL module (left) and Adlink’s almost identical I-Pi board without its LEC-PX30 module
Vizi-AI is an x86-based variation on Adlink’s Arm-based Industrial-Pi SMARC Dev Kit. The I-Pi kit was announced in February, combining Adlink’s somewhat Raspberry Pi-like Industrial-Pi (I-Pi) carrier board with a new Adlink LEC-PX30 SMARC module built around a quad-core, Cortex-A35 Rockchip PX30 SoC. The I-Pi SMARC Development Kit, which also integrates an Intel Movidius Myriad X VPU, is now available for $125.

The new Vizi-AI dev kit uses a slightly modified I-Pi carrier with the same Myriad X VPU, but instead of the PX30 module, Vizi-AI taps Adlink’s Intel Apollo Lake based LEC-AL SMARC module.

 
I-Pi SMARC Development Kit detail view
Shortly after it unveiled the I-Pi kit, Adlink teased (PDF) the Vizi-AI kit and a robotics kit called the Neuron-Pi, similarly said to run on an unnamed SMARC module. Vizi-AI and the ROS-enabled Neuron-Pi kit, which has yet to be fully announced, were said to be part of an AI-on-Modules (AIoM) family of products, which also includes products that provide MXM slots for AI-enabled Nvidia Quadro GPU modules. Adlink has used MXM in other products including its recently announced Matrix MVP-5100-MXM and MVP-6100-MXM edge AI computers based on Intel’s Coffee Lake platform. 

The AI-on-Modules family does not include the PX30-based I-Pi kit, despite it using essentially the same I-Pi board with Myriad X and a SMARC module. The I-Pi kit has its own I-Pi community site, while the new Vizi-AI is backed by a similarly maker-oriented GOTO50.ai community site, which is currently focused only on the Vizi-AI. GOTO50.ai provides tech support, forums, how-to guides, and “pre-built scenarios,” says Adlink.

This split may reflect the Arm/x86 divide, as well as Vizi-AI’s use of Intel’s OpenVINO AI toolkit and the Adlink Edge edge-to-cloud middleware platform it announced last year. Neither OpenVINO nor Adlink Edge were mentioned in the I-Pi rollout.

 
Amazon Edge conceptual diagram (left) and Vizi-AI workflow
The example shown in the Vizi-AI setup and configuration video farther below uses AWS with Amazon Sagemaker. Last December, Adlink announced a collaboration with Amazon to produce an Adlink AI at the Edge software solution that combines Adlink Edge with an Amazon Sagemaker-built machine learning model optimized by and deployed with Intel’s OpenVINO. The stack is designed to run on Amazon’s AWS Greengrass for local IoT processing in coordination with AWS cloud services.

Adlink AI at the Edge also includes Adlink Data River software that is said to translate between devices and applications “to enable a vendor-neutral ecosystem to work seamlessly together.” Although it’s not in the Vizi-AI announcement, the video below shows AWS integration with Vizi-AI, as well as the Adlink Data River.

Vizi-AI hardware details

The LEC-AL module used on the Vizi-AI board was Adlink’s first SMARC module. The 82 x 50mm short SMARC module provides a quad-core Atom x5-E3940 from the Apollo Lake generation with 4GB to 8GB of LPDDR4.

 
LEC-AL and its block diagram
Although the module was announced with 4GB to 8GB of eMMC 5.0 storage, there’s no mention of eMMC on the somewhat sketchy spec lists on GOTO50.ai and the Arrow shopping page. Instead, the Vizi-AI board supplies a microSD slot. The Myriad X VPU is integrated on the carrier board, as well.

The Vizi-AI carrier provides 2x GbE ports instead of the 2x 10/100 Ethernet ports on the I-Pi. On the other hand, the Vizi-AI has a single HDMI port while the I-Pi has two. (The second one is positioned on a small acrylic plate that sits atop the I-Pi’s LEC-PX30 module.)

 
Vizi-AI (left) and detail view
Like the I-Pi, Vizi-AI supplies a pair each of USB 3.0 and USB 2.0 host ports and a micro-USB client port. There’s also a 40-pin GPIO connector, but unlike the I-Pi, there are no claims for Raspberry Pi HAT compatibility. 

Other Vizi-AI features include an audio codec and stereo headphone connector. There’s also an optional, ribbon connected single-channel LVDS or eDP interface, which is not found on the I-Pi. The board has a 12V input and adapter. We saw no mention of other I-Pi features such as mic, ADC, and CAN interfaces.

The Vizi-AI runs Debian Linux 9.9 and the Adlink Edge Vision Software Stack, which includes an AI model manager, frame streamer, AWS model streamer, and training streamer. There’s also the Adlink Edge Profile builder and an Intel OpenVINO engine with a range of pre-built OpenVINO compatible machine learning models. Despite the community site, there’s no indication this is an open-spec board.
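
To give a flavor of that software side, here is a minimal inference sketch using OpenVINO’s 2020-era Python API with the Myriad X as the target device. The model files and input image are placeholders, not assets shipped with the Vizi-AI.

```python
import cv2
import numpy as np
from openvino.inference_engine import IECore

ie = IECore()
net = ie.read_network(model="model.xml", weights="model.bin")  # placeholder IR
input_blob = next(iter(net.input_info))
exec_net = ie.load_network(network=net, device_name="MYRIAD")  # Myriad X VPU

# preprocess one frame to the network's NCHW input shape
frame = cv2.imread("frame.jpg")  # placeholder image
n, c, h, w = net.input_info[input_blob].input_data.shape
blob = cv2.resize(frame, (w, h)).transpose(2, 0, 1)[np.newaxis, ...]

result = exec_net.infer({input_blob: blob})  # dict of output blobs by name
```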

Further information

The Vizi-AI Industrial Machine Vision AI Developer Kit is available exclusively from Arrow for $199 but is currently sold out. More information may be found in Adlink’s announcement, as well as on the GOTO50.ai community site and Arrow’s shopping page.

 

http://www.sci-news.com/medicine/mediterranean-diet-cognitive-function-08329.html

Mediterranean Diet May Help Preserve Cognitive Function

Apr 15, 2020

Adherence to the Mediterranean diet correlates with higher cognitive function, according to a new study led by the National Eye Institute (NEI), part of the National Institutes of Health.

The Mediterranean diet emphasizes consumption of whole fruits, vegetables, whole grains, nuts, legumes, fish, and olive oil, as well as reduced consumption of red meat and alcohol. Image credit: Julia Pastel100.

“We do not always pay attention to our diets. We need to explore how nutrition affects the brain and the eye,” said Dr. Emily Chew, director of the NEI Division of Epidemiology and Clinical Applications.

Dr. Chew and colleagues examined the effects of nine components of the Mediterranean diet on cognition.

They analyzed data from the Age-Related Eye Disease Study (AREDS) and AREDS2, which assessed the effect of vitamins on age-related macular degeneration (AMD) over many years.

AREDS included about 4,000 participants with and without AMD, and AREDS2 included about 4,000 participants with AMD.

The researchers assessed AREDS and AREDS2 participants for diet at the start of the studies.

The AREDS study tested participants’ cognitive function at five years, while AREDS2 tested cognitive function in participants at baseline and again two, four, and 10 years later.

The scientists evaluated cognitive function using standardized tests, including ones based on the Modified Mini-Mental State Examination.

They assessed diet with a questionnaire that asked participants their average consumption of each Mediterranean diet component over the previous year.
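
Adherence to such a questionnaire is typically condensed into a simple component count. The sketch below follows the widely used nine-component, Trichopoulou-style index; the exact components, median cut-offs, and alcohol range are assumptions, not necessarily the scoring used in the AREDS analysis.

```python
def med_diet_score(intake, cohort_medians):
    """Return a 0-9 Mediterranean diet adherence score (higher = closer)."""
    beneficial = ["vegetables", "fruits", "legumes", "whole_grains",
                  "fish", "nuts", "olive_oil"]
    # one point for each beneficial component at or above the cohort median
    score = sum(intake[k] >= cohort_medians[k] for k in beneficial)
    # one point for red meat below the cohort median
    score += intake["red_meat"] < cohort_medians["red_meat"]
    # one point for moderate alcohol consumption (range is an assumption)
    score += 5 <= intake["alcohol_g_per_day"] <= 25
    return score
```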

Participants with the greatest adherence to the Mediterranean diet had the lowest risk of cognitive impairment.

High fish and vegetable consumption appeared to have the greatest protective effect.

At 10 years, AREDS2 participants with the highest fish consumption had the slowest rate of cognitive decline.

The numerical differences in cognitive function scores between participants with the highest versus lowest adherence to a Mediterranean diet were relatively small, meaning that individuals likely won’t see a difference in daily function.

But at a population level, the effects clearly show that cognition and neural health depend on diet.

The authors also found that participants carrying the high-risk ApoE ε4 variant of the ApoE gene, which puts them at high risk for Alzheimer’s disease, on average had lower cognitive function scores and greater decline than those without the variant.

The benefits of close adherence to a Mediterranean diet were similar for people with and without the high-risk variant, meaning that the effects of diet on cognition are independent of genetic risk for Alzheimer’s disease.

The study was published in the journal Alzheimer’s & Dementia.