http://bgr.com/2018/04/17/google-diy-ai-kits-vision-kit-voice-kit-aiy/

Google’s new DIY AI kits could help shape the future

The “technology of the future” is constantly changing. Remember when Google Glass was the future and venture capitalists launched huge funds dedicated to financing Glass app developers? Remember when chat bots were the future and every tech company on the planet burned time and money developing Facebook Messenger bots? Yeah, most “technologies of the future” end up being little more than silly trends that vanish almost as quickly as they arrived. This time around, however, things feel like they might be a bit different when it comes to the latest “technology of the future.” Why? Because artificial intelligence really is the tech of the future.

Tech companies large and small have hopped on the AI bandwagon, but this time for good reason. AI and machine learning are integral to technologies of the future, potentially giving computers capabilities that we can’t even imagine right now. And Google has found a new role to play by equipping budding engineers with the tools they need to learn about AI and build their own AI solutions. No, this isn’t the start of the robot uprising. It’s the start of a new “AIY” initiative at Google that will offer comprehensive DIY kits for people — mainly students — who want to experiment with and learn about different AI solutions.

Google just announced two new “AIY” (it’s like DIY, but for artificial intelligence) kits that build upon the ideas the company set forth with its first-generation kits. This time around, however, the new kits ship with everything a student might need to build AI solutions, including a Raspberry Pi Zero WH board.

“We’re taking the first of many steps to help educators integrate AIY into STEM lesson plans and help prepare students for the challenges of the future by launching a new version of our AIY kits,” Billy Rutledge, Director of AIY Projects at Google, wrote in a blog post. “The Voice Kit lets you build a voice controlled speaker, while the Vision Kit lets you build a camera that learns to recognize people and objects. The new kits make getting started a little easier with clearer instructions, a new app and all the parts in one box.”

He continued, “To make setup easier, both kits have been redesigned to work with the new Raspberry Pi Zero WH, which comes included in the box, along with the USB connector cable and pre-provisioned SD card. Now users no longer need to download the software image and can get running faster. The updated AIY Vision Kit v1.1 also includes the Raspberry Pi Camera v2.”
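
To give a sense of how the Vision Kit's on-device recognition is used once assembled, here is a condensed sketch modeled on the face-detection example in Google's AIY Python library (the aiyprojects-raspbian project). It assumes the kit's pre-provisioned SD card image; the module names and camera settings are taken from that library's examples and may differ between kit revisions, so treat this as an illustration rather than the exact shipping demo.

    from picamera import PiCamera
    from aiy.vision.inference import CameraInference
    from aiy.vision.models import face_detection

    # Stream frames from the kit's Raspberry Pi Camera and run the on-device
    # face-detection model on each frame, printing how many faces it sees.
    with PiCamera(sensor_mode=4, resolution=(1640, 1232), framerate=30) as camera:
        with CameraInference(face_detection.model()) as inference:
            for result in inference.run():
                faces = face_detection.get_faces(result)
                print('Faces in frame: %d' % len(faces))

The library ships similar examples for object classification that follow the same capture-and-infer loop.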

Here’s a video of the Vision Kit in action:

This is a very cool example of a tech company taking some initiative to help encourage communities to enhance their STEM programs in schools. Google’s new AIY Voice Kit and Vision Kit are already available online at Target.com and in Target stores across the country, and Google hopes to offer them in other regions in the coming months. The Voice Kit is available for $49.99, while the more complex Vision Kit costs $89.99.

http://www.iphoneincanada.ca/news/apple-pay-canada-vancity/

Apple Pay Expands in Canada: Vancity Credit Union and More

Apple Pay has expanded once again in Canada, this time reaching more credit unions, such as Vancity in British Columbia’s Lower Mainland, with support for Interac debit cards.

Other additions include Assiniboine Credit Union, Cambrian Credit Union Limited and Steinbach Credit Union in Manitoba.

Cambrian Credit Union has posted the Apple Pay announcement on its website, while the others have yet to acknowledge the launch officially online, despite Apple listing these credit unions on its website.

iPhone in Canada reader Glenn says he was able to verify and add his Vancity Credit Union debit card to Apple Pay, despite being told by the credit union’s call centre that this was a soft launch for employees.

Just under two weeks ago, Apple Pay similarly expanded in Canada to reach Island Savings and Valley First Credit Unions.

Let us know if you’re able to add your debit card to Apple Pay from one of these credit unions.

With Apple Pay readily available from Canada’s big banks and most major credit card issuers, Canadians are probably more interested in the day when Apple Pay Cash will make its way here.

http://www.kurzweilai.net/a-future-ultraminiature-computer-the-size-of-a-pinhead

A future ultraminiature computer the size of a pinhead?

Future ultrahigh-storage-density MRAM memory chip promises to outperform RAM and flash memory for AI, IoT, and 5G applications and reduce power needs in data centers
April 16, 2018

Thin-film MRAM surface structure comprising one-monolayer iron (Fe) deposited on a boron, gallium, aluminum, or indium nitride substrate. (credit: Jie-Xiang Yu and Jiadong Zang/Science Advances)

University of New Hampshire researchers have discovered a combination of materials that they say would allow for smaller, safer magnetic random access memory (MRAM) storage — ultimately leading to ultraminiature computers.

Unlike conventional RAM (random access memory) chip technologies such as SRAM and DRAM, MRAM stores data in magnetic storage elements instead of as energy-expending electric charge or current flows. MRAM is also nonvolatile memory (the data is preserved when the power is turned off). The elements are formed from two ferromagnetic plates, each of which can hold a magnetization, separated by a thin insulating layer.

In their study, published March 30, 2018 in the open-access journal Science Advances, the researchers describe a new design* comprising ultrathin films, known as Fe (iron) monolayers, grown on a substrate made of a non-magnetic nitride — boron, gallium, aluminum, or indium nitride.

Ultrahigh storage density

The new design has an estimated 10-year data retention at room temperature. It can “ultimately lead to nanomagnetism and promote revolutionary ultrahigh storage density in the future,” said Jiadong Zang, an assistant professor of physics and senior author. “It opens the door to possibilities for much smaller computers for everything from basic data storage to traveling on space missions. Imagine launching a rocket with a computer the size of a pin head — it not only saves space but also a lot of fuel.”
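
To put the 10-year figure in context, a standard back-of-the-envelope estimate (not from the paper) uses the Néel-Arrhenius relation between retention time and the magnetic energy barrier: with an assumed attempt time of about a nanosecond, ten years of retention at room temperature requires a barrier of roughly 40 kT, or about 1 eV. The sketch below runs that arithmetic and, purely as an illustration, divides the result by the per-unit-cell anisotropy values reported in the study to gauge how small a thermally stable bit could be.

    import math

    K_B_EV = 8.617e-5          # Boltzmann constant, eV/K
    T = 300.0                  # room temperature, K
    TAU0 = 1e-9                # assumed attempt time (Neel-Arrhenius prefactor), s
    TEN_YEARS = 10 * 365.25 * 24 * 3600.0   # retention target, s

    # Neel-Arrhenius: tau = tau0 * exp(E_b / kT)  =>  E_b = kT * ln(tau / tau0)
    barrier_ev = K_B_EV * T * math.log(TEN_YEARS / TAU0)
    print(f"Required energy barrier: ~{barrier_ev:.2f} eV (~{barrier_ev/(K_B_EV*T):.0f} kT)")

    # Per-unit-cell PMA values reported in the paper (meV/u.c.)
    for label, pma_mev in [("Fe/BN", 24.1), ("Fe/InN", 53.7)]:
        cells = barrier_ev * 1000 / pma_mev
        print(f"{label}: ~{cells:.0f} unit cells would supply that barrier")

Even with these crude assumptions, the point stands: anisotropy tens of times larger than the roughly 1 meV typical of transition-metal thin films shrinks, by about the same factor, the magnetic volume needed for a thermally stable bit.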

MRAM is already challenging flash memory in a number of applications where persistent or nonvolatile memory (such as flash) is currently being used, and it’s also taking on RAM chips “in applications such as AI, IoT, 5G, and data centers,” according to a recent article in Electronic Design.**

* A provisional patent has been filed by UNHInnovation. This work was supported by the U.S. Department of Energy, Office of Science, Basic Energy Sciences.

** More broadly, MRAM applications are in consumer electronics, robotics, automotive, enterprise storage, and aerospace & defense, according to a market analysis and 2018–2023 forecast by Market Desk.


Abstract of Giant perpendicular magnetic anisotropy in Fe/III-V nitride thin films

Large perpendicular magnetic anisotropy (PMA) in transition metal thin films provides a pathway for enabling the intriguing physics of nanomagnetism and developing broad spintronics applications. After decades of searches for promising materials, the energy scale of PMA of transition metal thin films, unfortunately, remains only about 1 meV. This limitation has become a major bottleneck in the development of ultradense storage and memory devices. We discovered unprecedented PMA in Fe thin films grown on the N-terminated surface of III-V nitrides from first-principles calculations. PMA ranges from 24.1 meV/u.c. in Fe/BN to 53.7 meV/u.c. in Fe/InN. Symmetry-protected degeneracy between the x² − y² and xy orbitals and its lifting by the spin-orbit coupling play a dominant role. As a consequence, PMA in Fe/III-V nitride thin films is dominated by first-order perturbation of the spin-orbit coupling, instead of second-order as in conventional transition metal/oxide thin films. This game-changing scenario would also open a new field of magnetism on transition metal/nitride interfaces.

 

References:

  • Jie-Xiang Yu and Jiadong Zang. Giant perpendicular magnetic anisotropy in Fe/III-V nitride thin films. Science Advances. 30 Mar 2018: Vol. 4, no. 3, eaar7814. DOI: 10.1126/sciadv.aar7814

https://www.forbes.com/sites/forbestechcouncil/2018/04/17/reaping-the-riches-of-the-coming-singularity/#4db62fe81e94

Reaping The Riches Of The Coming Singularity

There was a time when enterprises introduced technology as an afterthought into operations. However, the last two decades have seen technology morph into a formidable force equipped to assist companies with IT operations and turbocharge the way entire enterprises are run. In fact, we are now seeing an exponential increase in the level of disruption facing enterprises that are unable to adopt new tech into their core business offerings.

I believe this development confirms what experts have been predicting. Namely, the coming of the singularity. And by singularity, I don’t mean a definite point when the world will be transformed far more by machine-based intelligence than it is now — I mean singularity as a continuous process of exponential change driven by the unprecedented growth of technology. This trend calls for a more iterative approach to business, and I see it promising companies across domains several previously unforeseen opportunities.

If you look around, you will find several examples of enterprises leveraging this trend by introducing hitherto unheard-of capabilities. Take, for example, the AI-based personal assistants from Apple and Amazon — Siri and Alexa. Both are equipped to carry out a range of tasks in response to voice commands that can be applied to several front-, middle- and back-office functions. Similarly, narrow-scoped AI is being adopted by various other companies, including other tech giants like Facebook and Google, to monitor customer interfaces — with a continuity inconceivable for a human workforce — to help customize products in real time and enhance the customer experience.

Elsewhere, advances in machine learning have created deep neural networks that decipher, translate and respond to linguistic cues iteratively. In the financial sector, this has led to the emergence of chatbots: automated, AI-driven chat systems that stand in for humans by quickly learning and adapting to a user’s emotional cues and responding to their expressed requirements faster and more effectively than a human could.

The success of such bots in financial services has encouraged banks to step up their AI adoption. Bank of America, for example, has officially begun putting Erica, its virtual assistant, to use. The AI-powered assistant will provide tailor-made financial suggestions over mobile phones, offering customers advice on the go with the aim of helping improve their financial picture.

Banks are also leading from the front in embracing the transformational possibilities of technological development by deploying robo-advisory services as an extension of wealth management. Several major banks and financial firms have deployed these cognitive-enabled machine-learning tools to provide customers with up-to-date information when they need it, nearly instantly, while delivering valuable insights to banks on clients they have so far overlooked for services, including retirement plans, pension funds and health insurance.

Futurist visionary Ray Kurzweil, who is credited with giving the concept of the singularity much of its current currency, is reported to have said that the technological singularity he famously predicted in his book The Singularity Is Near will become a reality by 2045.

While Kurzweil’s predictions may seem outlandish and controversial, the core of his idea — the exponential growth of computing power — is already a reality and is transforming all walks of life.

This is taking place not only in Apple’s fusion of man and machine but also in the proliferation of sensors in smart homes, smart workplaces, hospitals and factory floors. At one level, this simply confirms Moore’s Law, which holds that as technology improves, smaller devices will hold greater capabilities, sophistication and power.

However, at another level, the advancements we are seeing around us are actualizing the potential of humanity to focus on what distinguishes us as creative, expressive and social beings. And this is where Kurzweil’s forecast, which is also our point of view, is of immediate relevance.

In that event, companies in all industries will need a roadmap for the technological transformation that lies ahead. With the fastidious millennial in the driving seat of nearly every customer-facing business, organizations will have to realign themselves to thrive in this new world that is moving steadily toward a technological singularity.

This will involve embracing change by handing over various tasks and functions to AI and cognitive computing to deliver two outcomes: 1) identify and fulfill customer requirements more iteratively and precisely and 2) free up the skilled workforce to deliver what Kurzweil would call “higher level” human functions — a key differentiator that would help businesses stand out from the competition.

These higher-level functions would include the inimitable human capacity to conduct in-person meetings and to deliver insightful, customized advice and individualized plans.

In my view, it is important to understand that the future of tech, including its possible evolution to a singularity, is all too human and hyper-personalized. What I mean by this is that irrespective of the speed of technological change and irrespective of how complex and powerful computing algorithms can get, new developments will be guided by human needs. This brings me to the second part of my view regarding the future of technology (hyper-personalization).

As is abundantly evident across industries, cutting-edge technology — including natural-language processing, deep-learning networks, AI and IoT — is helping companies get even closer to customers and improve their lives, rather than pushing organizations away from customers aboard some context-less, high-end technological craft.

Currently, Alexa is expanding our sense of what is possible by understanding human language, responding to queries and carrying out personal tasks discreetly. I believe the day is not far off when humans will leverage a technological singularity to reclaim what has always been their undisputed domain — consciousness. This will be a truly iterative type of learning based on millennia of painstaking human evolution and the distinct ability to relate to, empathize with and connect with fellow humans.

 

https://www.cnet.com/news/google-aiy-kits-come-to-target-with-a-raspberry-pi-included/

Google includes a Raspberry Pi in a DIY smart speaker kit

The updated kits are rolling out to Target and include everything you need to build your own smart speaker or smart camera.

The inside of Google’s updated Vision Kit (credit: Google)

Google wants to make it easier than ever to build your own smart home gadgets. The search giant’s latest kits hit Target stores this month and include everything you need to make either a smart speaker similar to a Google Home or a smart home camera that can recognize faces and expressions.

The kits are called the AIY Voice and AIY Vision kits respectively. AIY stands for “Artificial Intelligence Yourself.” It’s a take on Do It Yourself (DIY) for smart tech. The idea of both kits is that you can use the included pieces to build a fairly advanced piece of tech on your own.

Google might be hoping that with these tools, its community of developers will be able to find ways to make smart speakers and smart cameras even smarter. In Monday’s blog post announcing the updated kits, Google also expressed an interest in helping teach students computer science skills.

Google previously released both kits in 2017, but those limited releases did not include some of the necessary pieces, such as the actual processing unit. Both kits are meant to be used with the Raspberry Pi, a microcomputer popular with programmers, but you had to buy the Raspberry Pi separately.

The updated kits will include a Raspberry Pi processing unit (specifically the Raspberry Pi Zero WH) and the Vision Kit will include a Raspberry Pi camera as well. Clearer directions will also be included in the box. Plus, Google’s releasing a companion app to walk you through the process of making your own smart gadget, and Google’s AIY website will have updated documentation.
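
For a flavor of what the build leads to on the smart speaker side, here is a condensed sketch modeled on the cloud-speech demo in Google's AIY Python library (aiyprojects-raspbian). It assumes the kit's pre-provisioned SD card image and Google Cloud speech credentials already set up on the device; the module names and parameters follow that library's examples and may vary between kit versions.

    from aiy.board import Board, Led
    from aiy.cloudspeech import CloudSpeechClient
    from aiy.voice.tts import say

    # Listen for a spoken command through the kit's microphone, send the audio
    # to Google's speech recognizer, and toggle the button LED in response.
    client = CloudSpeechClient()
    with Board() as board:
        while True:
            text = client.recognize(language_code='en-US',
                                    hint_phrases=['turn on the light',
                                                  'turn off the light'])
            if text is None:
                continue
            if 'turn on the light' in text:
                board.led.state = Led.ON
                say('Light on')
            elif 'turn off the light' in text:
                board.led.state = Led.OFF
                say('Light off')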

Head here for more details on what exactly the kits include. If you want to build your own smart home gear, Google’s promising both will be available at Target stores and on Target’s website this month.

https://motherboard.vice.com/en_us/article/bjpb78/diy-computing-cluster-esp32-raspberry-pi

How to Build a Mini Supercomputer for Under $100

Wei Lin built a scalable computing cluster out of $7 chips.

Image: Wei Lin/Github

Supercomputers are used by governments and research institutions around the world to solve some of science’s most complex problems, such as hurricane forecasting and modeling atomic weapons. In most cases, a supercomputer is actually a computing cluster composed of hundreds or thousands of individual computers that are all linked together and controlled by software. Each individual computer is running similar processes in parallel, but when you combine all of their computing power you end up with a system that is far more powerful than any single computer by itself.

Supercomputers often take up the space of a basketball court and cost hundreds of millions of dollars, but as GitHub user Wei Lin has demonstrated, it’s possible to make a homebrew computing cluster that doesn’t break the bank.

As detailed in Wei Lin’s GitHub repository, they managed to make a computing cluster using six ESP32 chips. These chips are microcontrollers—small computers with minimal memory and processing power—similar in spirit to a Raspberry Pi, but far cheaper.

A single Raspberry Pi costs around $30, while an ESP32 costs only about $7 (the ESP32 is manufactured in China, while Arduino boards and Raspberry Pis are manufactured in Europe). So even though others have made computing clusters from Raspberry Pis—including a 750-node cluster made by Los Alamos National Lab—these can quickly become expensive projects for the casual maker. Lin’s six-node cluster, on the other hand, costs about the same as a single Pi and has three times as many cores.

The main challenge, according to Lin, was figuring out how to coordinate computing tasks across each of the chips. For this, they used Celery, a distributed task queue designed to coordinate computing tasks across many worker processes or machines.

In a video, Lin demonstrates a three-node cluster running a word count program. As Lin details, the coordinating computer dispatches a list of tasks to the cluster—in this case, word counts—and the nodes each retrieve a task from the list, execute it, and return a result before retrieving a new task. At the same time, the nodes communicate with one another to coordinate their efforts.
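
That dispatch-and-retrieve pattern maps directly onto Celery's task-queue model. Below is a minimal, illustrative Python version of the word-count loop running on ordinary machines rather than Lin's ESP32 firmware; the Redis broker address, module name and sample text are assumptions made for the example, not details from Lin's repository.

    # word_count_tasks.py -- illustrative sketch; start workers with:
    #   celery -A word_count_tasks worker
    from celery import Celery

    # A message broker (here Redis, address assumed) holds the task list that
    # each node pulls from, mirroring the dispatch/retrieve loop described above.
    app = Celery("word_count",
                 broker="redis://192.168.1.10:6379/0",
                 backend="redis://192.168.1.10:6379/0")

    @app.task
    def count_words(chunk: str) -> int:
        """Each worker node counts the words in one chunk of text."""
        return len(chunk.split())

    if __name__ == "__main__":
        # The coordinator splits the text into chunks and puts one task per chunk
        # on the queue; workers return partial counts, which are summed here.
        text = "the quick brown fox jumps over the lazy dog " * 100
        chunks = [text[i:i + 200] for i in range(0, len(text), 200)]
        results = [count_words.delay(c) for c in chunks]
        total = sum(r.get(timeout=30) for r in results)
        print(f"Total words: {total}")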

While you probably won’t solve the toughest problems in physics by scaling this computer cluster architecture, it is a pretty neat application of inexpensive hardware capable of quickly performing computations in parallel, and a nice way to learn how supercomputers actually work without breaking the bank.

https://parkinsonsnewstoday.com/2018/04/17/aan2018-neural-stem-cells-show-promise-parkinsons-therapy/

#AAN2018 – Neural Stem Cell-Based Therapy is Safe, May Benefit Patients, Early Test Results Suggest

Human neural stem cells show promise as a therapy for Parkinson’s disease, according to six-month interim results of a Phase 1 trial.

The therapy’s developer, Cyto Therapeutics, a subsidiary of International Stem Cell Corporation, will present the results at the American Academy of Neurology annual meeting in Los Angeles, April 21-27. Parkinson’s News Today will be covering the conference.

The presentation, taking place during the Scientific Platform Sessions, will be at 5:40 p.m. Pacific time on Tuesday, April 24. The title will be “Interim Clinical Assessment of a Neural Stem Cell Based Therapy for Parkinson’s Disease.”

Human parthenogenetic neural stem cells (ISC-hpNSC) are a cellular therapeutic that can not only differentiate into dopaminergic neurons but also release brain-protecting agents, offering a new approach to treating Parkinson’s disease.

According to the company, a one-time transplant of ISC-hpNSC into the brain of Parkinson’s patients replaces dead and dying dopaminergic neurons and can offer protection to the remaining neurons, alleviating disease symptoms and preventing further deterioration.

Preclinical studies have shown that administration of ISC-hpNSC is safe and can improve motor symptoms and increase dopamine levels, innervation, and the number of dopaminergic neurons in animal models of Parkinson’s.

The Phase 1 trial (NCT02452723), currently recruiting participants, is an open-label, single-arm study evaluating the safety, tolerability and preliminary effectiveness of transplanting ISC-hpNSC into Parkinson’s patients.

A total of 12 patients are divided into three groups, injected with 30, 50 or 70 million of the company’s ISC-hpNSC cells, respectively. The cells are injected directly into the striatum and substantia nigra, two brain regions known to be directly affected in Parkinson’s disease.

Patients will be evaluated for 12 months with a five-year, long-term follow-up. The study’s primary objective is to assess the therapy’s safety by measuring the incidence of treatment-related adverse events.

Additionally, researchers will evaluate potential effectiveness by assessing how the therapy affects patients’ scores on the Unified Parkinson’s Disease Rating Scale (UPDRS), which measures disease course, comparing scores after 12 months to baseline.

“Four patients of the first cohort and 2 patients of the second cohort have been successfully transplanted with 30 and 50 million ISC-hpNSC cells respectively,” researchers wrote.

Results from an analysis six months after cell transplantation revealed that the therapy is generally safe, with no signs of induced dyskinesia (impairment of voluntary movement), tumors, infection or other serious adverse events reported so far.

Also, patients in the group injected with the lowest number of human neural stem cells, 30 million, showed a median 53% improvement on the Questionnaire for Impulsive-Compulsive Disorders, an average increase of 35% on the Beck Depression Inventory, and a 16% increase on the 39-item Parkinson’s Disease Questionnaire (PDQ-39), a quality-of-life measure.

Importantly, the therapy resulted in a 25 percent reduction in patients’ off time — the period when levodopa therapy begins to fail and Parkinson’s symptoms return.

“Interim results of the world’s first pluripotent stem cell based therapy for PD show that transplantation of ISC-hpNSC is safe, well tolerated and can potentially benefit patients,” researchers wrote.

The study is ongoing at The Royal Melbourne Hospital, Australia.

https://news.ubc.ca/2018/04/16/will-the-vehicles-of-the-future-be-powered-by-electricity-or-hydrogen/

Will the vehicles of the future be powered by electricity or hydrogen?

The Globe and Mail interviewed Walter Merida, director of the Clean Energy Research Centre at UBC, for a story on battery-electric and hydrogen-powered cars.

“It’s not really a competition – they’ll both co-exist, and there will also be plug-in hydrogen hybrids. Battery-electric vehicles are better for an urban environment where you have time to recharge and fuel-cell electric vehicles are better-suited for long range and heavy duty,” said Merida.

https://news.ubc.ca/2018/04/16/zero-waste-mobile-phones-come-closer-to-reality/

Zero-waste mobile phones come closer to reality

Business Standard highlighted work by UBC engineers that brings the world closer to the goal of a zero-waste cellphone.

“Discarded cellphones are a huge, growing source of electronic waste,” said lead researcher Maria Holuszko, who worked to perfect a process to efficiently separate fibreglass and resin.

A similar story appeared on Science Daily.

https://arstechnica.com/gadgets/2018/04/the-apple-watch-may-support-third-party-watch-faces-in-the-future/

The Apple Watch may support third-party watch faces in the future

Apple currently sanctions all watch faces for its wearable, even the Disney ones.

Megan Geuss

Third-party watch faces for smartwatches allow users to express more of themselves while also letting them have a bit more fun with their tech. The Apple Watch already has a number of Apple-made watch faces, many of which are customizable, but third-party developers haven’t been able to make their own. A report from 9to5Mac suggests that might change soon, thanks to code found in watchOS 4.3.1 hinting at third-party watch face compatibility.

The log message in question states: “This is where the 3rd party face config bundle generation would happen.” It’s part of the NanoTimeKit framework in the wearable’s software beta, which handles watch face components. The feature doesn’t appear to be active yet; the surrounding code seems to refer to an inactive developer tool server that may allow communication with Xcode on macOS.

It’s unclear if Apple would make this feature active in watchOS 5, the next version of the Apple Watch’s software that’s expected to be announced at WWDC this June. Even if Apple doesn’t announce it as a feature in watchOS 5, the mere mention of it means it’s possible that the company would allow third-party developers to create clock faces for its wearable some time in the future.

Third-party watch faces are staples for most smartwatches, as they allow both developers and users to get creative with the default screen. Wearable operating systems including Garmin’s OS, Fitbit OS, and Wear OS all have numerous third-party watch faces to choose from. Apple, however, has never allowed third-party developers to create watch faces for the Apple Watch. While some existing options derive content from other sources, like the Photos app for the custom Photo face or a Disney collaboration for the Toy Story watch faces, those are all still Apple-created designs.

Apple prefers to control most user-facing design features of its software, which is likely why the company hasn’t allowed third-party watch faces yet. Considering the code found in watchOS 4.3.1 appears to be a placeholder, Apple may be figuring out how to best implement third-party watch faces to give developers the freedom they’ve been craving while also maintaining a level of clarity and usability in its flagship wearable.