MIT and Harvard create soft robotic muscles that can lift 1,000 times their weight
Researchers at the MIT Computer Science and Artificial Intelligence Laboratory and Harvard’s Wyss Institute have developed soft robotic muscles that can lift up to 1,000 times their own weight. The technology is inspired by the Japanese art of paper folding – it uses an origami-like skeleton encased in an air- or liquid-filled bag. To get the muscle to expand or contract like an arm, one need only reduce or increase the pressure inside the bag.
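As a rough back-of-envelope illustration (these are not figures from the paper), the lifting force available to a fluid-driven muscle like this scales with the pressure differential across the bag times the area it acts on:

```python
# Rough back-of-envelope: the contraction force of a fluid-driven
# origami muscle scales with the pressure differential across the
# skin times the area it acts on (F = dP * A).
# The numbers below are illustrative, not from the paper.

def muscle_force(pressure_drop_pa: float, area_m2: float) -> float:
    """Force (N) from a pressure differential acting over an area."""
    return pressure_drop_pa * area_m2

# e.g. a modest 50 kPa vacuum acting over a 10 cm x 10 cm face:
force = muscle_force(50_000, 0.10 * 0.10)
print(f"{force:.0f} N, enough to lift ~{force / 9.81:.0f} kg")  # ~500 N / ~51 kg
```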
The soft robot’s internal skeleton can be constructed from a variety of materials, and its range of flexibility and motion is determined by its folds. While this means that the soft robots cannot be reprogrammed once their “folds” have been put in place, as The Verge writes, it’s not really a major limitation. Indeed, algorithms can be used to find origami patterns that fold in near-infinite ways, including more complex motions such as twisting. The low cost (muscles can be built from a range of affordable, readily available materials) and speed of production also mean that they can be quickly fabricated and easily repaired or replaced to suit a given task.
The soft robot muscles can also be constructed in a range of sizes from a centimeter to a meter to increase strength—i.e. the bigger the muscle, the bigger the lift—and joined together to create more elaborate systems. In one demonstration, a combination of four muscles forms an arm and a grip that can pick up a tire. Additional muscles could be added to offer horizontal movement to the lift, allowing the tire to be placed in different locations.
Researchers see the soft robot technology being applied to medical assistance devices, space exploration, wearable exoskeletons, and of course, within warehouses and logistic operations, where they could handle fragile or unusually shaped objects. Researchers are also in the midst of building an elephant trunk “as flexible and powerful” as the real thing. As Professor Daniela Rus, CSAIL’s director, told Wired, “I like the elephant trunk because it’s such a sophisticated manipulation mechanism.”
CompuLab’s IOT-GATE-RPi mini-PC/gateway builds on the RPi CM3 with 2x GbE, RPi HAT expansion, and optional WiFi, BT, 3G, LTE, and -40 to 80°C support.
CompuLab has added another Linux-friendly member to its IOT-GATE family of ultra-compact mini-PCs, following the IOT-GATE-iMX7, which is built around its NXP i.MX7 based CL-SOM-iMX7 computer-on-module. For the new IOT-GATE-RPi, CompuLab opted for a third-party COM: the popular Raspberry Pi Compute Module 3, a COM version of the Raspberry Pi 3.
IOT-GATE-RPi front and back — left side of rear view shows RJ11 connectors on an optional HAT board featuring RS485 and CAN
Starting at $110 in volume, the 112 x 84 x 25mm IOT-GATE-RPi IoT gateway is only slightly larger than the IOT-GATE-iMX7. It follows several other compact computers based on the RPi CM3, such as Dek Italia’s Telegea Smart Hub DIN-rail computer and Distec’s POS-Line IoT panel PC. The fanless, 450-gram IOT-GATE-RPi is available in 0 to 60°C, -20 to 60°C, and -40 to 80°C temperature ranges. Its rugged metal housing is claimed to offer unspecified levels of shock, vibration, and dust resistance.
The IOT-GATE-RPi provides a wide 10-36V input range, DC-plug locking, and optional DIN-rail, wall, and VESA mounting support. There’s also an RTC with back-up battery, as well as hardware protection against unauthorized boot from external storage.
IOT-GATE-RPi front/rear panel closeups
Raspberry Pi Compute Module 3
The Raspberry Pi Compute Module 3 contributes its quad-core, Cortex-A53 Broadcom BCM2837 SoC with VideoCore IV GPU — the same as on the Raspberry Pi 3 — plus 1GB LPDDR2 RAM. The standard microSD slot can be replaced with up to 64GB of soldered eMMC storage. This would suggest that CompuLab is using the $25 Lite version of the CM3, which lacks the usual 4GB of onboard eMMC.
The IOT-GATE-RPi features dual 10/100 Ethernet ports, 4x USB 2.0 host ports, an audio out jack, and an HDMI 1.3 port with audio support and up to HD resolution. The system is further equipped with an ultra-mini RS232 port.
IOT-GATE-RPi exploded view (left), and bottom side view of system’s RPi carrier board showing its 40-pin HAT expansion connector and an optional mini-PCIe modem module
A 40-pin Raspberry Pi expansion connector that supports RPi HAT add-ons is available on the lower side of the RPi CM3 carrier board (pictured above). You can populate this connector with the optional EB-RPI-FCSD HAT board that offers RS485 and CAN ports, both of which have RJ11 connectors. This HAT board would also appear to include the optional 6x 5V tolerant DIO feature, which is said to be FCSD-HAT based. The carrier board also provides a mini-PCIe expansion connector for non-PCIe options such as a WiFi/Bluetooth 4.1 (Broadcom BCM43438) module, as well as 3G and 4G/LTE radios based on Simcom SIM5360 and SIM7100 modules, respectively. A micro-SIM card slot is also available.
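For readers curious what driving one of those DIO lines might look like in practice, here is a minimal sketch using the stock RPi.GPIO library. The BCM pin number is a placeholder, since the FCSD HAT’s DIO-to-GPIO mapping isn’t specified here:

```python
# Minimal sketch: toggling a digital I/O line exposed through the
# standard 40-pin Raspberry Pi header, using the stock RPi.GPIO library.
import time
import RPi.GPIO as GPIO

DIO_PIN = 17  # hypothetical BCM pin; check CompuLab's HAT documentation

GPIO.setmode(GPIO.BCM)
GPIO.setup(DIO_PIN, GPIO.OUT)

try:
    for _ in range(5):                    # blink the output line
        GPIO.output(DIO_PIN, GPIO.HIGH)
        time.sleep(0.5)
        GPIO.output(DIO_PIN, GPIO.LOW)
        time.sleep(0.5)
finally:
    GPIO.cleanup()                        # release the pin on exit
```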
The system’s modular I/O side-panel design enables “easy incorporation of HAT add-ons into the device,” says CompuLab. Options such as a HAT, mini-PCIe card, and micro-SIM can additionally be accessed via the system’s removable bottom cover.
In addition to the above-mentioned options, there’s an evaluation kit with a 45-day free trial and a year of tech support. This includes a variety of cables, adapters, and mounting brackets, as well as a WiFi antenna.
The IOT-GATE-RPi offers full compatibility with Raspberry Pi software, and runs standard Raspberry Pi OS images, says CompuLab. Supported OSes include Debian, Ubuntu Core, and Windows 10 IoT Core. The system also supports IoT frameworks like Microsoft Azure IoT and AWS Greengrass.
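CompuLab doesn’t detail the integration, but a gateway like this would typically push telemetry to a cloud backend over MQTT, the transport that AWS IoT and Greengrass build on. A minimal sketch using the paho-mqtt library, with the endpoint, certificate paths, and topic as placeholders:

```python
# Sketch: publishing one telemetry reading from the gateway to an
# AWS IoT endpoint over MQTT/TLS. All names below are illustrative.
import json
import time
import paho.mqtt.client as mqtt

ENDPOINT = "example-ats.iot.eu-west-1.amazonaws.com"  # hypothetical

client = mqtt.Client(client_id="iot-gate-rpi-01")
client.tls_set(ca_certs="root-ca.pem",
               certfile="device.pem.crt",
               keyfile="device.private.key")
client.connect(ENDPOINT, 8883)
client.loop_start()

payload = json.dumps({"ts": time.time(), "temp_c": 42.5})
client.publish("gateways/iot-gate-rpi-01/telemetry", payload, qos=1)

client.loop_stop()
client.disconnect()
```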
Further information
The IOT-GATE-RPi will be available in December starting from $110 for volume orders. More information may be found at CompuLab’s IOT-GATE-RPi announcement and product page.
MIT’s new thermal battery releases heat on demand — with light
Scientists commonly approach thermal storage with a phase change material (PCM): when heat melts the PCM, it changes from solid to liquid and stores energy, according to MIT. When it’s cooled and changes back into a solid, it releases the stored energy as heat. But all current PCMs need a lot of insulation, and MIT said they go through “that phase change temperature uncontrollably, losing their stored heat relatively rapidly.”
Researchers overcame challenges to thermal storage with a system drawing on molecular switches that alter shape in response to light. They integrated these molecules into traditional PCM materials to release heat on demand. MIT professor Jeffrey Grossman said in a statement, “By integrating a light-activated molecule into the traditional picture of latent heat, we add a new kind of control knob for properties such as melting, solidification, and supercooling.”
Their chemical heat battery could harness solar heat and potentially even waste heat from vehicles or industrial processes. With the system, heat could stay stable for at least 10 hours – and a device of around the same size storing heat directly would release it in just a few minutes. The MIT material can store around 200 joules per gram. Postdoctoral researcher Grace Han said there’s already been some interest in their thermal battery for use in cooking in rural India.
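Some quick, illustrative arithmetic shows what roughly 200 joules per gram buys (the heat-draw figure below is a made-up example load, not from MIT):

```python
# Quick arithmetic on the reported ~200 J/g storage density:
# how much heat a kilogram of material holds, and how long that
# could sustain a small, steady 10 W heat draw.
energy_per_gram = 200                 # J/g, the MIT figure
mass_g = 1_000                        # one kilogram of material
stored_j = energy_per_gram * mass_g   # 200 kJ total

draw_w = 10                           # hypothetical heat draw, watts
hours = stored_j / draw_w / 3600
print(f"{stored_j/1000:.0f} kJ stored; lasts {hours:.1f} h at {draw_w} W")
# -> 200 kJ stored; lasts 5.6 h at 10 W
```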
Alphabet’s DeepMind Is Trying to Transform Health Care — But Should an AI Company Have Your Health Records?
The London company’s work with hospitals has raised privacy concerns and threatened its ambitions.
By
Jeremy Kahn
DeepMind, the digital brain foundry owned by Google’s parent company, Alphabet, wants to use artificial intelligence to solve… well, everything. Last year, its software taught itself to play the strategy game Go better than any human on the planet. For its next trick, it wants to move beyond games to a very real-world problem: health care.
The London company has a fast-growing division—now 100 strong—dedicated to health. And while DeepMind’s research on Go may be years away from yielding practical applications, its health-care work is affecting people’s lives today through projects with the U.K.’s National Health Service. These include a mobile app to alert doctors and nurses to changes in a patient’s condition and efforts to research whether computers can analyze various kinds of medical imagery as well as experienced doctors. The company believes AI has the power to save lives. But DeepMind’s maiden voyage into the field has also run smack into an iceberg of privacy and ethical concerns—and the resulting controversy has threatened to sink its ambitions of using AI to transform health care.
In July, after a year-long investigation, U.K. regulators ruled that London’s Royal Free Hospital had illegally provided DeepMind access to 1.6 million patient records going back five years. DeepMind said it needed the records to conduct safety testing of its first product, a mobile app that gives doctors and nurses instant access to medical records and can alert them to patients at risk of deterioration. The first potentially fatal condition DeepMind built an alert for was acute kidney injury (AKI). The Royal Free said it accepts the decision but disagrees that it could have tested the mobile app, called Streams, in any other way.
Regulators took no action against DeepMind, ruling that it acted on the Royal Free’s instructions. But the company acknowledges it made mistakes. “In our determination to achieve quick impact when this work started in 2015, we underestimated the complexity of the NHS and of the rules around patient data, as well as the potential fears about a well-known tech company working in health,” Mustafa Suleyman, DeepMind’s co-founder and head of its health projects, and Dominic King, the former surgeon who serves as the company’s top clinician, said in a statement.
DeepMind’s stumble has implications far beyond one AI research firm. Tech companies are flooding into health care. IBM claims its Watson artificial intelligence software can help doctors find the best treatments for cancer. Genome pioneer J. Craig Venter’s latest startup, Human Longevity Inc., wants to customize treatments for each patient’s DNA.
Even DeepMind Health is just one of three big health-care bets Alphabet is making. It also owns Verily, which creates medical device software, and Calico, which is trying to stretch human lifespans. Success or failure matters to more than just corporate bottom lines: the U.K.’s NHS is counting on technology to cope with an aging population and shrinking budgets that threaten to bankrupt the entire system. But if AI is going to fulfill the optimists’ hopes, the companies behind it must prove they are trustworthy.
And when it comes to Big Tech, trust is in increasingly short supply. From revelations about Russian meddling in the U.S. presidential election to new disclosures about how Apple and Google have avoided paying taxes, tech companies are no longer seen as benign — or even neutral — entities. DeepMind, founded in 2012, may still see itself as a startup, but it is a part of Alphabet, a huge conglomerate. And while Silicon Valley’s clichéd “move fast, break stuff” ethos might have worked in the past, DeepMind is discovering that sometimes it really isn’t better to ask for forgiveness than permission.
Mustafa Suleyman
Photographer: John Phillips/Getty Images
The driving force behind DeepMind’s push into medicine is Suleyman, whom everyone at DeepMind calls “Moose” (an abbreviation of his first name). Suleyman’s mother was a nurse. After Google bought DeepMind in 2014 for a reported 400 million pounds, he quickly homed in on health care. “There is no other area where we invest so much money in technology and get so little back,” Suleyman said in an interview in mid-August.
Press coverage of DeepMind’s health-care efforts sometimes makes it seem as if the company is developing a software version of Hugh Laurie’s character in “House,” a diagnostic genius able to deduce the solution to any medical mystery. But Suleyman said this is “total nonsense.” “We are going to be solving all kinds of other magical problems in the world before we get to that sort of general diagnostician,” he said.
Hints of what DeepMind Health does want to do can be gleaned from a project at London’s Moorfields Eye Hospital. Here, in an office cluttered with thick medical tomes, Pearse Keane is staring at an image on his laptop. Keane is a senior ophthalmologist and clinical researcher. The picture is a patient’s retina imaged with optical coherence tomography, or OCT. “It allows us to see things like bleeding and leakage into your retina and diagnose the most common causes of blindness,” Keane said.
Moorfields and DeepMind are trying to see if a computer, using AI, can read OCT scans as well as Keane. The project has achieved “impressive” results, Keane said, but he wasn’t ready to talk about them yet. He and DeepMind hope to publish their research in the coming months. The company has also announced research projects with two London universities to see if AI software can learn to read head and neck scans and mammography scans as well as or better than doctors.
Still, DeepMind said a commercial product using AI is a ways off. Streams, the only product DeepMind has actually deployed, uses no AI. While DeepMind originally set out to use machine learning to improve an existing NHS algorithm to detect AKI, it said it never carried out that research. When DeepMind visited the Royal Free, it found the existing algorithm — which wasn’t half bad — was the least of the problem. Of far more concern were antiquated technology and Byzantine workflows that meant it took too long for doctors and nurses to act on blood test results. The real problems in medicine “are much more gritty and practical,” Suleyman said.
Those pragmatic concerns are front and center on the ninth floor of the Royal Free hospital in early August, when a patient’s kidneys suddenly start struggling after a liver transplant. Within seconds of a lab pathologist entering blood test results into a computer database, they are analyzed by a formula the NHS developed, and an alert sounds on nurse Sarah Stanley’s phone. Opening the Streams app, she sees a graph showing spiking indicators from the blood tests. Using the app, she messages a colleague to check on the patient.
“We have just triaged that patient in less than 30 seconds,” she said. In the past, the process would have taken up to four hours. A few hours’ delay, Stanley said, can be critical: patients with AKI can deteriorate rapidly.
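At its core, the NHS formula is a ratio test on serum creatinine against a patient’s baseline. Here is a much-simplified sketch of that idea; real deployments add baseline-selection rules, absolute-rise criteria, and clinical context checks omitted here:

```python
# Greatly simplified sketch of the ratio test at the heart of NHS-style
# acute kidney injury (AKI) alerting: compare a fresh serum creatinine
# result against the patient's baseline and stage the alert.

def aki_stage(creatinine_umol_l: float, baseline_umol_l: float) -> int:
    """Return 0 (no alert) or AKI stage 1-3 from the creatinine ratio."""
    ratio = creatinine_umol_l / baseline_umol_l
    if ratio >= 3.0:
        return 3
    if ratio >= 2.0:
        return 2
    if ratio >= 1.5:
        return 1
    return 0

stage = aki_stage(creatinine_umol_l=180, baseline_umol_l=80)  # ratio 2.25
if stage:
    print(f"AKI stage {stage}: push alert to the on-call nurse's phone")
```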
DeepMind said it wants to move on from the controversy over Streams’ development. In November 2016, it replaced its original information sharing agreement with the Royal Free with a new five-year contract designed to address the initial deal’s failings. Since then, the company has published, with a few redactions, copies of its contracts with hospitals. It set up and funded a panel of outside reviewers to investigate its work and report publicly each year. The company also announced it will create a digital ledger system—similar to the blockchain technology that underpins the cryptocurrency bitcoin—that would give NHS hospitals a tamper-proof audit trail of who has accessed patient data.
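DeepMind has not published the ledger’s design in detail, but the underlying principle is hash chaining: each access record carries the hash of the record before it, so any retroactive edit breaks the chain. A toy sketch of that principle, not DeepMind’s actual system:

```python
# Toy append-only audit log: every entry embeds the hash of the
# previous entry, so tampering with history is detectable.
import hashlib
import json
import time

def append_entry(log: list, who: str, record_id: str) -> None:
    prev_hash = log[-1]["hash"] if log else "0" * 64
    entry = {"ts": time.time(), "who": who, "record": record_id,
             "prev": prev_hash}
    digest = hashlib.sha256(json.dumps(entry, sort_keys=True).encode())
    entry["hash"] = digest.hexdigest()
    log.append(entry)

def verify(log: list) -> bool:
    """Recompute every hash; any upstream tampering breaks the chain."""
    for i, entry in enumerate(log):
        expect_prev = log[i - 1]["hash"] if i else "0" * 64
        body = {k: v for k, v in entry.items() if k != "hash"}
        digest = hashlib.sha256(json.dumps(body, sort_keys=True).encode())
        if entry["prev"] != expect_prev or entry["hash"] != digest.hexdigest():
            return False
    return True

log = []
append_entry(log, who="nurse_s.stanley", record_id="patient_1234")
append_entry(log, who="dr_j.smith", record_id="patient_1234")
print(verify(log))  # True; altering any stored field makes this False
```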
But none of this has assuaged the company’s detractors. Julia Powles is a law professor at the University of Cambridge who has written critically of DeepMind’s initial agreement with the Royal Free. She said DeepMind’s new contract does not explicitly prevent it from transferring patient data to sister company Google. DeepMind said it has not and never would give any data to Google. “If they can’t put that in writing I find it hard to believe them,” Powles said.
She questions whether DeepMind was the best choice given that Streams doesn’t use AI, and thinks that other companies should have been allowed to bid on the project. And while DeepMind has been offering Streams to hospitals for free, Powles wonders if the company has an unfair advantage because it is backed by Alphabet and can afford to lose money on Streams. She said there was a danger hospitals will get locked into technology that they won’t be able to afford if DeepMind eventually decides to charge a market rate.
The Royal Free is not entitled to any money DeepMind makes from Streams. But John Bell, a doctor who chairs the U.K.’s Office of Strategic Coordination of Health Research, recently recommended that the NHS retain an economic interest in any artificial intelligence developed using its data. “Our projects so far have assumed the right way to give value back to the public is through initially providing our resources and technologies to our NHS partners for free,” DeepMind said in an emailed response, adding that it was open to discussion of other ways of valuing its services. The Royal Free said in a statement that it was “happy with the terms of the agreement” with DeepMind.
Controversy hasn’t stopped DeepMind from signing agreements to deploy Streams to additional NHS hospitals. Suleyman said the company has received interest from U.S. doctors, too. But privacy concerns may have dented DeepMind’s hopes of integrating AI into its health-care offerings anytime soon. Taunton & Somerset NHS Foundation Trust, one of the hospitals now adopting Streams, explicitly ruled out doing anything with artificial intelligence, Tom Edwards, the hospital’s joint clinical information officer, said. Nicola Perrin, the head of Understanding Patient Data, a project run by the British health charity Wellcome Trust, worries that what happened to the Royal Free might deter U.K. hospitals from adopting potentially life-saving technology. “I think it is very important that we don’t get so hung up on the concerns and the risks that we miss some of the potential opportunities of having a company with such amazing expertise and resources wanting to be involved in health care,” she said.
Health care, Suleyman said, “is incredibly valuable, it is incredibly broken and there is a massive opportunity to transform it with AI at some point in the future.” But not today. Suleyman describes DeepMind as “getting in early” with products like Streams, so that it is well-positioned to use AI later.
DeepMind is “years away” from generating reliable revenue from health care, Suleyman said. The company earned just 40 million pounds of revenue in 2016—none of it from health work—and reported a loss of 94 million pounds, according to accounts filed with UK business registry Companies House. As DeepMind is learning, changing the way people experience health care—and turning a profit—makes beating the world’s best Go players look easy.
Cosmonaut says space station bacteria ‘come from outer space’
The bacteria turned up after swabbing of the space station’s exterior. The question is, how did they get there?
A Russian cosmonaut claims to have caught aliens. Cosmonaut Anton Shkaplerov says he found bacteria clinging to the external surface of the International Space Station that didn’t come from the surface of Earth.
Shkaplerov told the Russian news agency Tass that cosmonauts collected the bacteria by swabbing the outside of the space station during space walks years ago.
“And now it turns out that somehow these swabs reveal bacteria that were absent during the launch of the ISS module,” Shkaplerov told Tass. “That is, they have come from outer space and settled along the external surface. They are being studied so far and it seems that they pose no danger.”
The cosmonaut is preparing for his third trip to the space station next month. The collection of life forms from the outside of the ISS during one of his previous trips was something of a mini controversy a few years back. Russian scientists reported that spacewalk sample harvests yielded evidence of apparent sea plankton clinging to the station. The claims caught NASA by surprise at the time; the agency said it had heard nothing from the Russians about any space plankton.
STEPHEN HAWKING TO UNLOCK SECRETS OF BIG BANG AND BLACK HOLES WITH SUPERCOMPUTER
Stephen Hawking hopes to reveal secrets of black holes and better understand the origin of the universe with a new supercomputer capable of trawling through 14 billion years’ worth of data.
The eminent physicist’s Centre for Theoretical Cosmology (COSMOS) announced a partnership with Hewlett Packard Enterprise (HPE) that will leverage computing power to search for clues about the Big Bang hiding in massive data sets.
“The influx of new data about the most extreme events in our universe has led to dramatic progress in cosmology and relativity,” said Professor Paul Shellard, head of the COSMOS group, in an emailed statement to Newsweek.
“In a fast-moving field we have the two-fold challenge of analyzing larger data sets while matching their increasing precision with our theoretical models. In-memory computing allows us to ingest all of this data and act on it immediately… [equipping] us with a powerful tool to probe the big questions about the origin of the universe.”
Professor Stephen Hawking onstage during an event at One World Observatory on April 12, 2016 in New York City. Hawking wants to map the universe in order to better understand its origin. (Jemal Countess/Getty Images)
Hawking founded the COSMOS supercomputer facility in 1997 in order to support research in particle physics, astrophysics and cosmology. The Cambridge professor has previously stated his ambition of using vast computing power to find an “ultimate theory,” which in principle would enable scientists to predict “everything in the universe.”
Part of reaching this ultimate theory involves creating a detailed 3D map of the early universe, plotting the positions of galaxies, black holes and supernovas.
“Our COSMOS group is working to understand how space and time work, from before the first trillion trillionth of a second after the Big Bang up to today,” said Professor Hawking.
“The recent discovery of gravitational waves offers amazing insights about black holes and the whole Universe. With exciting new data like this, we need flexible and powerful computer systems to keep ahead so we can test our theories and explore new concepts in fundamental physics.”
Professor Stephen Hawking in his office. (Sarah Lee/Eyevine)
The HPE Superdome Flex in-memory computing platform will be used in tandem with an HPE Apollo supercomputer to enable COSMOS to address cosmological theory with data from the known universe. It will also draw from data from new sources, such as gravitational waves.
“The in-memory computing capability of HPE Superdome Flex is uniquely suited to meet the needs of the COSMOS research group,” said Randy Meyer, a vice president at HPE. “The platform will enable the research team to analyze huge data sets in real time. This means they will be able to find answers faster.”
‘Evolutionary dead end’: Extinct ‘stilt’ horse named for Canadian
This illustration depicts a family of stilt-legged horses (Haringtonhippus francisci) in Yukon, Canada, during the last ice age. (EurekAlert / Jorge Blanco)
A newly-discovered branch of the horse family has been named after the Canadian who first studied its remains in the Yukon, where it lived until the end of the last ice age.
Close study of the North American stilt-legged horse has revealed that the ice age-era mammal was an “evolutionary dead end” in the horse family, which developed through the Equus genus to spawn modern-day horses, asses and zebras. The taller, thinner stilt-legged horse lived up until approximately 17,000 years ago and died out entirely after the last ice age, according to the new study published in the journal eLife.
The study authors have officially classified the stilt-legged horse as a separate genus from the Equus, based on differences observed at the DNA level. The stilt-legged horse was first described in the 1970s by Canadian paleontologist Richard Harington, but was thought at the time to be related to the Asiatic wild ass or onager.
This image shows an example femur of H. francisci from Gypsum Cave, Nevada. (eLife)
Two crania assigned to Haringtonhippus francisci are shown. (eLife)
The new genus has been dubbed Haringtonhippus francisci, after Harington. Harington did not work on the new study, but the study’s authors say they named the new genus after him as a tribute to his groundbreaking work on the ancient animal.
“I am delighted to have this new genus named after me,” Harington, emeritus curator of quaternary paleontology at the Canadian Museum of Nature, said in a news release from the study authors.
Co-author Grant Zazula said the discovery would not have been possible without Harington’s “life-long dedication” to studying the stilt-legged horse in Canada’s North.
“There is no other scientist who has had greater impact in the field of ice age paleontology in Canada than Dick,” Zazula, a paleontologist with the Yukon government, said in the news release.
A connection that goes way, way back
The discovery is expected to shake up long-held theories that horse evolution was fairly straightforward, by demonstrating that a divergent branch of the family tree emerged some 4-6 million years ago before dying out.
“The horse family, thanks to its rich and deep fossil record, has been a model system for understanding and teaching evolution,” first study author Peter Heintzman, of UC Santa Cruz, said in the news release. “Now, ancient DNA has rewritten the evolutionary history of this iconic group.”
The study authors say the Equus and Haringtonhippus genera thrived alongside one another in North America, although they did not interbreed. They co-existed with such large ice-age mammals as the woolly mammoth and the sabre-toothed cat, which also died out when the glaciers receded. The North American Equus and Haringtonhippus died out around the same time, but Equus survived through lineages that remained in Eurasia.
The stilt-legged horse discovery was made based on DNA taken from fossils in the Yukon’s Klondike gold fields, as well as from Natural Trap Cave in Wyoming and Gypsum Cave in Nevada.
Facebook is using AI to spot users with suicidal thoughts and send them help
The tool was tested earlier this year and is now rolling out to more countries
Facebook is using artificial intelligence to scan users’ posts for signs they’re having suicidal thoughts. When it finds someone who may be in danger, the company flags the post to human moderators, who respond by sending the user resources on mental health or, in more urgent cases, contacting first-responders who can try to find the individual.
The social network has been testing the tool for months in the US, but is now rolling out the program to other countries. The tool won’t be active in any European Union nations, where data protection laws prevent companies from profiling users in this way.
In a Facebook post, company CEO Mark Zuckerberg said he hoped the tool would remind people that AI is “helping save people’s lives today.” He added that in the last month alone, the software had helped Facebook flag cases to first responders more than 100 times. “If we can use AI to help people be there for their family and friends, that’s an important and positive step forward,” wrote Zuckerberg.
Despite this emphasis on the power of AI, Facebook isn’t providing many details on how the tool actually judges who is in danger. The company says the program has been trained on posts and messages flagged by human users in the past, and looks for telltale signs, like comments asking users “are you ok?” or “can I help?” The technology also examines live streams, identifying parts of a video that have more than the usual number of comments, reactions, or user reports. It’s the human moderators that will do the crucial work of assessing each case the AI flags and responding.
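Facebook hasn’t published the model, but the signals it describes (concerning phrases in a post plus worried replies from friends) suggest a scoring approach along these greatly simplified lines:

```python
# Crude illustration only: a real system would use a trained classifier
# over far richer features, not keyword matching.
CONCERNING_PHRASES = ["want to end it", "can't go on", "goodbye everyone"]
WORRIED_REPLIES = ["are you ok", "can i help", "please call me"]

def risk_score(post: str, comments: list) -> int:
    """Count risk signals in the post and in friends' replies."""
    text = post.lower()
    score = sum(phrase in text for phrase in CONCERNING_PHRASES)
    for comment in comments:
        c = comment.lower()
        score += any(reply in c for reply in WORRIED_REPLIES)
    return score

post = "I just can't go on anymore. Goodbye everyone."
comments = ["Are you OK?? Call me", "sending love"]
if risk_score(post, comments) >= 2:
    print("escalate to human review team")
```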
Although this human element should not be overlooked, research suggests AI can be a useful tool in identifying mental health problems. One recent study used machine learning to predict whether or not individuals would attempt suicide within the next two years with 80 to 90 percent accuracy. However, the research only examined data from people who had been admitted to a hospital after self-harming, and wide-scale studies on individuals more representative of the general population are yet to be published.
Some may also be worried about the privacy implications of Facebook — a company that has previously worked with surveillance agencies like the NSA — examining user data to make such sensitive judgements. The company’s chief security officer Alex Stamos addressed these concerns on Twitter, saying that the “creepy/scary/malicious use of AI will be a risk forever,” which was why it was important to weigh “data use versus utility.”
However, TechCrunch writer Josh Constine noted that he’d asked Facebook how the company would prevent the misuse of this AI system and was given no response. We’ve reached out to the company to find out more information.
eBay now lets you start shopping with a Google Assistant smart speaker and finish on your phone
Today, eBay released an updated version of its Google Assistant app that lets users start a conversation about an item they want to buy on a smart speaker and then continue the shopping experience on an Android or iOS smartphone.
Say: “Hey Google, ask eBay to find me a — ,” followed by the name of virtually any item, and the eBay voice app will ask questions to narrow your search before pulling up results. Once you find an item that’s to your liking, the app will ask if it can send the results to your phone.
The eBay app is one of the first Google Assistant voice apps to take advantage of the new feature Google calls multi-surface switching. Earlier this month, Google rolled out a series of changes — including multi-surface switching — to better equip developers with tools to make voice apps for the millions of devices that can speak with Google Assistant.
Today’s news was announced in a blog post by eBay product manager Jay Vasudevan.
Like every other experience driven by conversational AI, the eBay voice app still makes mistakes, but it also has the power to drill down well beyond your initial query. For example, before delivering an answer to a search for wireless headphones, the eBay app may ask if you want headphones from Apple, Sony, or another brand, whether you want in-ear or over-the-ear headphones, and which color you prefer.
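eBay hasn’t published its implementation, but Assistant voice apps are typically backed by a fulfillment webhook that receives the parsed intent and decides whether to answer or ask a narrowing question. A hypothetical Flask sketch of that drill-down flow, using Dialogflow-style field names and made-up parameters:

```python
# Hypothetical fulfillment webhook sketch for a voice shopping app.
# Request/response field names follow the Dialogflow-style format;
# the "item" and "brand" parameters are illustrative.
from flask import Flask, request, jsonify

app = Flask(__name__)

@app.route("/fulfillment", methods=["POST"])
def fulfillment():
    params = request.get_json()["queryResult"]["parameters"]
    item = params.get("item")      # e.g. "wireless headphones"
    brand = params.get("brand")    # filled in on a later turn

    if item and not brand:
        reply = f"Which brand of {item}: Apple, Sony, or another?"
    elif item and brand:
        reply = f"Here are {brand} {item} listings. Send them to your phone?"
    else:
        reply = "What are you looking for on eBay?"
    return jsonify({"fulfillmentText": reply})

if __name__ == "__main__":
    app.run(port=8080)
```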
Results sent to your phone aren’t sent to the eBay app or a web browser but directly to Google Assistant, meaning they will appear anytime you scroll up to see previous Google Assistant interactions or visit the My Activity portion of the Google Assistant app.
The multi-surface shopping experience with Google Assistant appears to be limited to Google’s voice apps today, but such a feature could one day be used for shopping through other visual interfaces, like Chromecast or tablets using the Android operating system. Multi-surface shopping could also change Google Express, which is Google’s answer to Amazon’s massive marketplace.
Google Assistant can make Google Express shopping recommendations today but only with a Home smart speaker.
Say “OK Google, order towels” to a Google Home smart speaker today and you may receive recommendations from Walmart and Bed Bath & Beyond. Tell Pixel Buds or an Android smartphone with Google Assistant to “order paper towels” and, ironically, you’ll get a web search with Amazon.com as the top result.
Since Google Express shopping with a Google Home smart speaker became available earlier this year, Google Express has come to include some of the largest retailers in the United States, including Home Depot, Target, and Costco. Powering discovery via natural language for retail giants sounds like a daunting, but potentially lucrative, challenge.
This is by no means eBay’s first venture into conversational commerce. An eBay Facebook Messenger bot named ShopBot made its debut in October 2016, and the eBay voice app for Google Assistant made its debut in July 2017.
Updated 12:51 p.m. to include an example of a Google Express search result and the screenshot of an “order paper towels” query with Google Assistant from a Pixel 2 smartphone.
In today’s guest post, Bruce Tulloch, CEO and Managing Director of BitScope Designs, discusses the uses of cluster computing with the Raspberry Pi, and the recent pilot of the Los Alamos National Laboratory 3000-Pi cluster built with the BitScope Blade.
High-performance computing and Raspberry Pi are not normally uttered in the same breath, but Los Alamos National Laboratory is building a Raspberry Pi cluster with 3000 cores as a pilot before scaling up to 40,000 cores or more next year.
Why build a supercomputer model out of Raspberry Pis? The short answer: the Raspberry Pi cluster enables Los Alamos National Laboratory (LANL) to conduct exascale computing R&D.
The Pi cluster breadboard
Exascale refers to computing systems at least 50 times faster than the most powerful supercomputers in use today. The problem faced by LANL and similar labs building these things is one of scale. To get the required performance, you need a lot of nodes, and to make it work, you need a lot of R&D.
However, there’s a catch-22: how do you write the operating systems, network stacks, and launch and boot systems for such large computers without having one on which to test it all? Use an existing supercomputer? No — the existing large clusters are fully booked 24/7 doing science, they cost millions of dollars per year to run, and they may not have the architecture you need for your next-generation machine anyway. Older machines retired from science may be available, but at this scale they cost far too much to use and are usually very hard to maintain.
The Los Alamos solution? Build a “model supercomputer” with Raspberry Pi!
Think of it as a “cluster development breadboard”.
The idea is to design, develop, debug, and test new network architectures and systems software on the “breadboard”, but at a scale equivalent to the production machines you’re currently building. Raspberry Pi may be a small computer, but it can run most of the system software stacks that production machines use, and the ratios of its CPU speed, local memory, and network bandwidth scale proportionately to the big machines, much like an architect’s model does when building a new house. To learn more about the project, see the news conference and this interview with insideHPC at SC17.
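Once boot and network bring-up succeed, a systems team might smoke-test the whole “breadboard” with something as simple as an MPI job in which every node reports in. A minimal sketch using mpi4py (the launch command and node count below are illustrative):

```python
# Minimal cluster smoke test: each Pi reports its rank and hostname,
# and rank 0 gathers and prints the responses.
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank = comm.Get_rank()       # this node's ID within the job
size = comm.Get_size()       # total nodes launched

name = MPI.Get_processor_name()
reports = comm.gather(f"rank {rank} on {name}", root=0)

if rank == 0:
    print(f"{size} nodes reported in:")
    for line in reports:
        print(" ", line)

# launch across the cluster, e.g.: mpiexec -n 750 python3 hello.py
```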
Traditional Raspberry Pi clusters
Like most people, we love a good cluster! People have been building them with Raspberry Pi since the beginning, because it’s inexpensive, educational, and fun. They’ve been built with the original Pi, Pi 2, Pi 3, and even the Pi Zero, but none of these clusters have proven to be particularly practical.
That’s not stopped them being useful though! I saw quite a few Raspberry Pi clusters at the conference last week.
One tiny one that caught my eye was from the people at openio.io, who used a small Raspberry Pi Zero W cluster to demonstrate their scalable software-defined object storage platform, which on big machines is used to manage petabytes of data, but which is so lightweight that it runs just fine on this tiny setup.
There was another appealing example at the ARM booth, where Berkeley Lab’s Singularity container platform was demonstrated running very effectively on a small cluster built with Raspberry Pi 3s.
My show favourite was from the Edinburgh Parallel Computing Centre (EPCC): Nick Brown used a cluster of Pi 3s to explain supercomputers to kids with an engaging interactive application. The idea was that visitors to the stand design an aircraft wing, simulate it across the cluster, and work out whether an aircraft that uses the new wing could fly from Edinburgh to New York on a full tank of fuel. Mine made it, fortunately!
Next-generation Raspberry Pi clusters
We’ve been building small-scale industrial-strength Raspberry Pi clusters for a while now with BitScope Blade.
When Los Alamos National Laboratory approached us via HPC provider SICORP with a request to build a cluster comprising many thousands of nodes, we considered all the options very carefully. It needed to be dense, reliable, low-power, and easy to configure and to build. It did not need to “do science”, but it did need to work in almost every other way as a full-scale HPC cluster would.
Some people argue Compute Module 3 is the ideal cluster building block. It’s very small and just as powerful as Raspberry Pi 3, so one could, in theory, pack a lot of them into a very small space. However, there are very good reasons no one has ever successfully done this. For a start, you need to build your own network fabric and I/O, and cooling the CM3s, especially when densely packed in a cluster, is tricky given their tiny size. There’s very little room for heatsinks, and the tiny PCBs dissipate very little excess heat.
Instead, we saw the potential for Raspberry Pi 3 itself to be used to build “industrial-strength clusters” with BitScope Blade. It works best when the Pis are properly mounted, powered reliably, and cooled effectively. It’s important to avoid using micro SD cards and to connect the nodes using wired networks. It has the added benefit of coming with lots of “free” USB I/O, and the Pi 3 PCB, when mounted with the correct air-flow, is a remarkably good heatsink.
When Gordon announced netboot support, we became convinced the Raspberry Pi 3 was the ideal candidate when used with standard switches. We’d been making smaller clusters for a while, but netboot made larger ones practical. Assembling them all into compact units that fit into existing racks with multiple 10 Gb uplinks is the solution that meets LANL’s needs. Pictured is a 60-node cluster pack with a pair of managed switches by Ubiquiti in testing in the BitScope Lab.
Two of these packs, built with Blade Quattro, and one smaller one comprising 30 nodes, built with Blade Duo, are the components of the Cluster Module we exhibited at the show. Five of these modules are going into Los Alamos National Laboratory for their pilot as I write this.
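The numbers check out against the 3000-core pilot mentioned at the top of this post:

```python
# Sanity-checking the figures in this post: five Cluster Modules of
# 150 Pi 3 nodes each, four Cortex-A53 cores per node.
nodes_per_module = 60 + 60 + 30      # two Quattro packs + one Duo pack
modules = 5
cores_per_pi3 = 4

total_nodes = nodes_per_module * modules      # 750 boards
total_cores = total_nodes * cores_per_pi3     # 3000 cores, as specified
print(total_nodes, total_cores)               # -> 750 3000
```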
It’s not only research clusters like this for which Raspberry Pi is well suited. You can build very reliable local cloud computing and data centre solutions for research, education, and even some industrial applications. You’re not going to get much heavy-duty science, big data analytics, AI, or serious number crunching done on one of these, but it is quite amazing to see just how useful Raspberry Pi clusters can be for other purposes, whether it’s software-defined networks; lightweight MaaS, SaaS, PaaS, or FaaS solutions; distributed storage; edge computing; industrial IoT; or, of course, education in all things cluster and parallel computing. For one live example, check out Mythic Beasts’ educational compute cloud, built with Raspberry Pi 3.
For more information about Raspberry Pi clusters, drop by BitScope Clusters.
I’ll read and respond to your thoughts in the comments below this post too.