Last year, the Defense Advanced Research Projects Agency (DARPA), which funds a range of blue-sky research efforts relevant to the US military, launched a $1.5 billion, five-year program known as the Electronics Resurgence Initiative (ERI) to support work on advances in chip technology. The agency has just unveiled the first set of research teams selected to explore unproven but potentially powerful approaches that could revolutionize US chip development and manufacturing.
Hardware innovation has taken something of a back seat to software advances in recent years, and that bothers the US military for several reasons.
End of an era
At the top of the list is that Moore’s Law, which holds that the number of transistors fitted on a chip doubles roughly every two years, is reaching its limits (see “Moore’s Law is dead. Now what?”). That could stymie future advances in electronics that the military relies on, unless new architectures and designs can allow progress in chip performance to continue.
There are also worries about the rising cost of designing integrated circuits, and about increased foreign—for which read “Chinese”—investment in semiconductor design and manufacturing (see “China wants to make the chips that will add AI to any gadget”).
The ERI’s budget represents around a fourfold increase in DARPA’s typical annual spending on hardware. Initial projects reflect the initiative’s three broad areas of focus: chip design, architecture, and materials and integration.
One project aims to radically reduce the time it takes to create a new chip design, from years or months to just a day, by automating the process with machine learning and other tools so that even relatively inexperienced users can create high-quality designs.
“No one yet knows how to get a new chip design completed in 24 hours safely without human intervention,” says Andrew Kahng of the University of California, San Diego, who’s leading one of the teams involved. “This is a fundamentally new approach we’re developing.”
“We’re trying to engineer the craft brewing revolution in electronics,” says William Chappell, the head of the DARPA office that manages the ERI program. The agency hopes that the automated design tools will inspire smaller companies without the resources of giant chip makers, just as specialized brewers in the US have innovated alongside the beer industry’s giants.
New chip materials and clever designs
If we’re going to move beyond Moore’s Law, though, the chances are that radically new materials, and new ways of integrating computing power and memory, will be needed. Shifting data between memory components that store it and processors that act on it sucks up energy and creates one of the biggest hurdles to boosting processing power.
Another ERI project will explore ways in which novel circuit integration schemes can eliminate, or at least greatly reduce, the need to shift data around. The ultimate goal is to effectively embed computing power in memory, which could lead to dramatic increases in performance.
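To make the idea concrete, here is a minimal, hypothetical sketch (not DARPA's design, and the numbers are made up) of the kind of compute-in-memory trick researchers often point to: a resistive crossbar stores a weight matrix as conductances, and applying input voltages yields the matrix-vector product directly where the data sits, with no round trip to a separate processor.

```python
import numpy as np

# Hypothetical illustration of compute-in-memory: a crossbar stores a weight
# matrix as conductances G. Applying input voltages V to the rows produces
# column currents I = G^T V (Kirchhoff's current law), i.e. the matrix-vector
# product is computed inside the memory array itself.

G = np.array([[0.2, 0.8, 0.1],   # conductances (weights) stored in memory cells
              [0.5, 0.3, 0.9]])
V = np.array([1.0, 0.4])         # input voltages applied to the rows

I = G.T @ V                      # the "in-memory" result: no data shuttled to a CPU
print(I)                         # analogous to one neural-network layer's output
```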
On the chip architecture front, DARPA wants to create hardware and software that can be reconfigured in real time to handle more general tasks or specialized ones such as specific artificial-intelligence applications. Today, multiple chips are needed, driving up complexity and cost.
Some of DARPA’s efforts overlap with areas already being worked on extensively in industry. An example is a project to develop 3-D system-on-chip technology, which aims to extend Moore’s Law by using new materials such as carbon nanotubes, and smarter ways of stacking and partitioning electronic circuits. Chappell acknowledges the overlap, but he says the agency’s own work is “probably the biggest effort to make [the approach] real.”
Not nearly enough
Some think DARPA and other arms of the US government that support electronics research, such as the Department of Energy, should be spending even more to spur innovation.
Erica Fuchs, a professor at Carnegie Mellon University who’s an expert in public policy related to emerging technologies, says that as chip development has focused on more specific applications, big companies have lost their appetite for spending money on collaborative research efforts just as Moore’s Law is faltering.
Fuchs praises the ERI but thinks the US government’s overall approach to supporting electronics innovation is “easily an order of magnitude below” what’s needed to address the challenges we’re facing. Let’s hope the grassroots chip design movement that DARPA is trying to foment will go some way toward closing the gap.
Holographic duality between a graphene flake and a black hole
Much research on black holes is theoretical, since it is difficult to make actual measurements on real black holes, and such experiments would need to run for decades or longer. Physicists are therefore keen to create laboratory systems that are analogous to these cosmic entities. New theoretical calculations by a team in Canada, the US, the UK and Israel have now revealed that a material as simple as a graphene flake with an irregular boundary, subjected to an intense external magnetic field, can be used to create a quantum hologram that faithfully reproduces some of the signature characteristics of a black hole. This is because the electrons in the carbon material behave according to the Sachdev-Ye-Kitaev (SYK) model.
Some of the most important unresolved mysteries in modern physics come from the “incompatibility” between Einstein’s theory of general relativity and the theory of quantum mechanics. General relativity describes the physics of the very big (the force of gravity and all that it affects: spacetime, planets, galaxies and the expansion of the Universe). The theory of quantum mechanics is the physics of the very small – and the other three forces, electromagnetism and the two nuclear forces.
“In recent years, physicists have gleaned important new insights into these questions through the study of the SYK model,” explains Marcel Franz of the University of British Columbia in Canada, who led this research effort. “This model is an illustration of a type of ‘holographic duality’ in which a lower-dimensional system can be represented by a higher-dimensional one. In our calculations, the former is N graphene electrons in (0+1) dimensions and the latter the dilaton gravity of a black hole in (1+1)-dimensional anti-de Sitter (AdS2) space.”
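For context (normalization conventions vary, and this form is not taken from the paper itself), the complex-fermion SYK Hamiltonian is commonly written as

$$ H = \frac{1}{(2N)^{3/2}} \sum_{i,j,k,l=1}^{N} J_{ij;kl}\, c_i^\dagger c_j^\dagger c_k c_l, \qquad \overline{|J_{ij;kl}|^2} = J^2, $$

where the $c_i$ are the $N$ electron modes and the all-to-all couplings $J_{ij;kl}$ are random and Gaussian distributed. Supplying that randomness is precisely the job of the flake's irregular boundary.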
Remarkably, this model accurately describes the physical characteristics of black holes for large values of N (ideally larger than 100). These characteristics include a non-zero residual entropy and fast scrambling of quantum information at the black hole’s event horizon (the boundary beyond which not even light can escape the tug of its gravity).
Irregular boundary
Franz and colleagues’ simple experimental realization of the SYK model involves electrons in a graphene flake (a sheet of carbon just one atom thick) that has an irregular boundary. It must be irregular in order to imprint randomness onto the electrons, Franz tells Physics World. “We need this because random structure of electron-electron interactions is the essential requirement of the SYK model.
“Unlike earlier solid-state system proposals to demonstrate this model, our device does not require advanced fabrication techniques and should therefore be relatively straightforward to assemble using existing technologies.”
When a magnetic field B is applied to graphene, a number of interesting quantum phases are produced. At the simple (“non-interacting”) level, the field reorganizes the single-particle electron states in the material into Dirac Landau levels with discrete energies. When the graphene flake is sufficiently small (ideally around 100-200 nm across), the electrons in the n=0 Landau level are described by the SYK model, thanks to the so-called Aharonov-Casher construction.
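As a reminder of the standard graphene result being invoked here (textbook physics, not specific to this paper), the Dirac Landau levels in a perpendicular field $B$ sit at

$$ E_n = \operatorname{sgn}(n)\, v_F \sqrt{2 e \hbar B\, |n|}, \qquad n = 0, \pm 1, \pm 2, \ldots $$

so the $n=0$ level lies exactly at zero energy, with a number of degenerate modes set roughly by the magnetic flux threading the flake. It is these degenerate zero modes that the random interactions reorganize into SYK physics.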
The team, which includes researchers from Tel-Aviv University in Israel, the Rudolf Peierls Centre for Theoretical Physics in Oxford in the UK, and Microsoft Research in Santa Barbara in the US, says that it is now busy trying to better understand the behaviour of graphene electrons in the SYK regime. “We hope that our theory results will motivate experimentalists to study graphene flakes of the type required to produce SYK physics,” says Franz.
If the name Harley Davidson conjures up images in your head of leather-clad biker gangs flying down a desert highway while spewing exhaust from their loud chrome pipes, then perhaps you’d better think again. As we reported earlier this summer, Harley Davidson is well on their way to debuting their first electric motorcycle, which they hope will target a younger, more urban audience – a market in which the brand desperately needs to succeed.
And now the company appears to be doubling down on that commitment by announcing a new lineup of electric motorcycles while simultaneously fleshing out their EV team with a string of new positions.
Harley Davidson’s electric motorcycle development has been known by the name Project LiveWire, though the company trademarked the name Revelation in connection with the project earlier this year, making it a good bet that the new electric Harley will be known as the Revelation upon its final debut.
The project has been in progress for at least 4 years, and has a projected release sometime next year.
Recently, the Milwaukee-based motorcycle company quietly added multiple open positions related to electric vehicle operations to the hiring page of their website, fueling speculation that they are preparing to expand the scope of their current EV project.
Now that speculation appears to have been confirmed, as Harley Davidson today announced multiple new electric motorcycle platforms to be developed over the next few years.
According to Harley Davidson’s Chief Operating Officer Michelle Kumbier:
“We’re going big in EV with a family of products that will range in size, power, as well as price. When you look at EV you know this is a whole new customer base that we are bringing in.”
In addition to Harley Davidson’s 2019 halo model, the LiveWire, two smaller and more affordable electric motorcycles are slated for 2021 and 2022.
Kumbier described these models as “more middleweight, if you will” and designed for “accessible power”.
Harley Davidson also plans to cover the lightweight electric motorcycle market, offering three different EVs designed to cater to the segment currently dominated by electric scooters, mopeds and higher-end electric bicycles.
These three vehicles include a utility scooter, something that looks more or less like a dirt bike, and perhaps most surprisingly for the bar-and-shield motorcycle company, an electric-assist bicycle.
The development of these five new EV platforms won’t be cheap for Harley Davidson, as the company’s investment projections indicate. They plan to spend around $150-$180M on electric vehicle development through 2022, which will amount to around one-third of their total operating investments.
The other two-thirds are made up of investments in their middle- and small-displacement gas motorcycles and in shifting their retail footprint towards smaller retail locations in urban settings, as opposed to their traditionally larger suburban dealerships.
Exact specifications for Harley Davidson’s new electric motorcycles, scooters and bikes have not yet been released. However, the company has indicated that they are shifting focus towards a new market of younger and more urban riders, which could imply a somewhat lower range.
The original LiveWire concept bike had an electric range of just 60 miles. That would put the new Harley electric motorcycle well below some other offerings from Zero Motorcycles, which could be Harley Davidson’s largest competitor in the electric vehicle market.
Electrek’s Take
The fact that Harley Davidson is going all-in on electric vehicles is quite encouraging, if perhaps somewhat predictable. The company has seen the writing on the wall for years now regarding its aging gas-bike ridership, and CEO Matt Levatich has not been shy about admitting that the company plans to aggressively shift its mindset towards a new generation of younger, urban riders.
Without adapting, the company simply can’t succeed in the long-term.
But I must say I’m pleasantly surprised at the wide scope of Harley Davidson’s plans over the next 4 years. While it was great to see the company making progress on just one e-moto, the LiveWire, I’m definitely excited to see that they are sufficiently committed to EV development to already be in the planning stages for a wide range of two-wheeled electric vehicles.
By adding scooters and e-bikes to their lineup, the company is definitely shifting away from their rough, hardcore image of the past. If they can reinvent themselves as a powerhouse of American-made, two-wheeled EVs that are both sensible and at least moderately affordable, they just may stand a chance at succeeding in what is quickly becoming a crowded EV market.
Google Pixel 3 XL Clearly White surfaces, shows off notch
Confirming rumours from the past few weeks, Google is clearly preparing to release its next lineup of Pixel smartphones. A new round of images has surfaced showcasing the Pixel 3 XL, the follow-up to last year’s Pixel 2 XL. According to a thread from XDA Developers, Google will bring back the Clearly White panda-coloured Pixel this year, and, as expected, the phone will come with a notch housing the front-facing camera.
This appears to be an engineering unit used by Google — you can tell because it carries the same internal logo and barcodes the company uses. If accurate, this leak lines up with others and confirms the hardware design. In addition, this unit shows the Pixel 3 XL sporting 4GB of RAM and 64GB of storage. The Pixel 3 will feature a 5.3-inch display with no screen cut-out, while the Pixel 3 XL will feature a 6.2-inch display.
The new Pixel devices are expected to be announced in October. Source: XDA Developers Via: AndroidAuthority
Google Pixel 3 and Pixel 3 XL renders and leaked images REVEALED
There have already been a number of concept images showing how this new design might look, but now an accessory manufacturer has given a more official look at what could be in store.
Gizmochina has managed to see images of a set of CAD drawings detailing how the phone will look inside a case.
Although most of the rear is obscured, there is a glimpse of the front where you can make out the larger screen with its edge-to-edge appearance and notch.
The image also confirms other rumours that the Pixel 3 and 3 XL will stick with a single rear camera rather than opting for a dual-lens snapper.
Although Google seems to be going for a single camera on the back, there could be the inclusion of a double front-facing system which may add some DSLR-style depth of field to selfies.
Two final features revealed in the image are space for a stereo speaker at the base of the phone and a rear-mounted fingerprint scanner in the middle of the device.
There’s no word if these images show a final design but they seem to tally with many other leaks and rumours which have appeared online over the past few months.
Luckily, it doesn’t seem like there is too long to wait as Google usually unveils its Pixel phones at the beginning of October and the 3 and 3 XL are expected to continue this tradition.
VentureBeat reporter and notable leaker Evan Blass says a “second-generation Pixelbook” will be announced at a hardware event hosted by Google later this year.
Citing a “reliable source”, he stated the laptop hybrid will appear with severely reduced bezels and will ship before the end of the year.
Blass is also confident that a smartwatch is on the way, with a recent tweet saying: “Besides the Pixel 3, Pixel 3 XL, and second-gen Pixel Buds, a reliable source tells me — with high confidence — that Google’s fall hardware event will also introduce a Pixel-branded watch. Have a great summer!”
Hands-off healing: urologist Greg Shaw with the £1.5m machine which helps the UCH team do 600 prostate operations a year. Photograph: Jude Edginton for the Observer
Like all everyday miracles of technology, the longer you watch a robot perform surgery on a human being, the more it begins to look like an inevitable natural wonder.
Earlier this month I was in an operating theatre at University College Hospital in central London watching a 59-year-old man from Potters Bar having his cancerous prostate gland removed by the four dexterous metal arms of an American-made machine, in what is likely a glimpse of the future of most surgical procedures.
The robot was being controlled by Greg Shaw, a consultant urologist and surgeon sitting in the far corner of the room with his head under the black hood of a 3D monitor, like a Victorian wedding photographer. Shaw was directing the arms of the remote surgical tool with a fluid mixture of joystick control and foot-pedal pressure and amplified instruction to his theatre team standing at the patient’s side. The surgeon, 43, has performed 500 such procedures, which are particularly useful for pelvic operations; those, he says, in which you are otherwise “looking down a deep, dark hole with a flashlight”.
The first part of the process has been to “dock the cart on to the human”. After that, three surgical tools and a video camera, each on the end of a 30cm probe, have been inserted through small incisions in the patient’s abdomen. Over the course of an hour or more Shaw then talks me through his actions.
“I’m just going to clip his vas deferens now,” he says, and I involuntarily wince a little as a tiny robot pincer hand, magnified 10 times on screens around the operating theatre, comes into view to permanently cut off sperm supply. “Now I’m trying to find that sweet spot where the bladder joins the prostate,” Shaw says, as a blunt probe gently strokes aside blood vessels and finds its way across the surface of the plump organ on the screen, with very human delicacy.
After that, a mesmerising rhythm develops of clip and cauterise and cut as the velociraptor pairing of “monopolar curved scissors” and “fenestrated bipolar forceps” is worked in tandem – the surprisingly exaggerated movements of Shaw’s hands and arms separating and sealing tiny blood vessels and crimson connective tissue deep within the patient’s pelvis 10ft away. In this fashion, slowly, the opaque walnut of the prostate emerges on screen through tiny plumes of smoke from the cauterising process.
This operation is part of a clinical trial of a procedure pioneered in German hospitals that aims to preserve the fine architecture of microscopic nerves around the prostate – and with them the patient’s sexual function. With the patient still under anaesthetic, the prostate, bagged up internally and removed, will be frozen and couriered to a lab at the main hospital site a mile away to determine if cancer exists at its edges. If it does, it may be necessary for Shaw to cut away some of these critical nerves to make sure all trace of malignancy is removed. If no cancer is found at the prostate’s margins the nerves can be saved. While the prostate is dispatched across town, Shaw uses a minuscule fish hook on a robot arm to deftly sew bladder to urethra.
‘The technique itself feels like driving and the 3D vision is very immersive’: Greg Shaw controls the robot as it operates on a patient. Photograph: Jude Edginton for the Observer
The Da Vinci robot that Shaw is using for this operation, made by the American firm Intuitive Surgical, is about as “cutting edge” as robotic health currently gets. The £1.5m machine enables the UCH team to do 600 prostate operations a year, a four-fold increase on previous, less precise, manual laparoscopic techniques.
Mostly, Shaw does three operations one or two days a week, but there have been times, with colleagues absent, when he has done five or six days straight. “If you tried to do that with old-fashioned pelvic surgery, craning over the patient, you would be really hurting, your shoulders and your back would seize up,” he says.
There are other collateral advantages of the technology. It lends itself to accelerated and effective training, both because it retains a 3D film of all the operations conducted and because it enables a virtual-reality suite to be plugged in – like learning to fly a plane using a simulator. The most important benefit, however, is the greater safety and fewer complications the robot delivers.
I wonder if it changes the psychological relationship between surgeon and patient, that palpable intimacy.
Shaw does not believe so. “The technique itself feels like driving,” he says. “But that 3D vision is very immersive. You are getting lots of information and very little distraction and you are seeing inside the patient from 2cm away.”
There are, he says, still diehards doing prostatectomies as open surgery, but he finds it hard to believe that their patients are fully informed about the alternatives. “Most people come in these days asking for the robot.”
If a report published this month on the future of the NHS is anything to go by, it is likely that “asking for the robot” could increasingly be the norm in hospitals. The interim findings of the Institute for Public Policy Research’s long-term inquiry into the future of health – led by Lord Darzi, the distinguished surgeon and former minister in Gordon Brown’s government – projected that many functions traditionally performed by doctors and nurses could be supplanted by technology.
“Bedside robots,” the report suggested, may soon be employed to help feed patients and move them between wards, while “rehabilitation robots” would assist with physiotherapy after surgery. The centuries-old hands-on relationship between doctor and patient would inevitably change. “Telemedicine” would monitor vital signs and chronic conditions remotely; online consultations would be routine, and someone arriving at A&E “may begin by undergoing digital triage in an automated assessment suite”.
Even the consultant’s accumulated wisdom will be superseded. Machine-learning algorithms fed with “big data” would soon be employed to “make more accurate diagnoses of diseases such as pneumonia, breast and skin cancers, eye diseases and heart conditions”. By embracing a process to achieve “full automation” Lord Darzi’s report projects that £12.5bn a year worth of NHS staff time (£250m a week) would be saved “for them to spend interacting with patients” – a belief that sounds like it would be best written on the side of a bus.
While some of these projections may sound far more than the imagined decade away, others are already a reality. Increasingly, the data from sensors and implants measuring blood sugars and heart rhythms is collected and fed directly to remote monitors; in London, the controversial pilot scheme GP@Hand has seen more than 40,000 people take the first steps toward a “digital health interface” by signing up for online consultations accessed through an app – and in the process, de-registering from their bricks-and-mortar GP surgery. Meanwhile, at the sharpest end of healthcare – in the operating theatre – robotic systems like the one used by Greg Shaw are already proving the report’s prediction that machines will carry out surgeries with greater dexterity than humans. As a pioneer of robotic surgical techniques, Lord Darzi knows this better than most.
In a way, it is surprising that it has taken so long to reach this point. Hands-off surgery was first developed by the US military at the end of the last century. In the 1990s the Pentagon wanted to explore ways in which operations at M*A*S*H-style field hospitals might be performed by robots controlled by surgeons at a safe distance from the battlefield. Their investment in Intuitive Surgical and its Da Vinci prototype has given the Californian company – valued at $62bn – a virtual monopoly, fiercely guarded, with 4,000 robots now operating around the world.
Jaime Wong, MD, is the consultant lead on the R&D programme at Intuitive Surgical. He is also a urologist who has been using a Da Vinci robot for more than a decade and has watched it evolve from the original 2D displays, which involved more spatial guesswork, to the current far more manoeuvrable and all-seeing version.
Wong still enjoys seeing traditional open surgeons witnessing a robotic operation for the first time and “watching the amazement on their faces at all the things they did not quite realise are located in that area”.
In the next stage of development, he sees artificial intelligence (AI) and machine learning playing a significant role in the techniques. “Surgery is becoming digitised, from imaging to movement to sensors,” he says, “and everything is translating into data. The systems have a tremendous amount of computational power and we have been looking at segmenting procedures. We believe, for example, we can use these processes to reduce or eliminate inadvertent injuries.”
Up until recently, Da Vinci, having stolen a march on any competition, has had this field virtually to itself. In the coming year, that is about to change. Google has, inevitably, developed a competitor (with Johnson & Johnson) called Verb. The digital surgery platform – which promises to “combine the power of robotics, advanced instrumentation, enhanced visualisation, connectivity and data analytics” – aims to “democratise surgery” by bringing the proportion of robot-assisted surgeries from the current 5% up to 75%. In Britain, meanwhile, a 200-strong company called Cambridge Medical Robotics is close to approval for its pioneering system, Versius, which it hopes to launch this year.
Wong says he welcomes the competition: “I tend to think it validates what we have been doing for two decades.”
The latest creators of robot surgeons see ways to move the technology into new areas. Martin Frost, CEO of the Cambridge company, tells me how the development of Versius has involved the input of hundreds of surgeons with different soft-tissue specialities, to create a portable and modular system that could operate not just in pelvic areas but in more inaccessible parts of the head, neck and chest.
“Every operating room in the world currently possesses one essential component, which is the surgeons’ arm and hand,” Frost says. “We have taken all of the advantages of that form to make something that is not only bio-mimicking but bio-enhancing.” The argument for the superiority of minimally invasive surgery is pretty much won, Frost suggests: “The robotic genie is out of the bottle.”
And what about that next stage – does Frost see a future in which AI-driven techniques are involved in the operation itself?
“We see it in small steps,” he says. “We think that it is possible, within a few years, that a robot may do part of certain procedures ‘itself’, but we are obviously a very long way from a machine doing diagnosis and cure, and there being no human involved.”
The other holy grail of telesurgery – the possibility of remote “battlefield” operations – is closer to being a reality. In a celebrated instance, Dr Jacques Marescaux, a surgeon in Manhattan, used a protected high-speed connection and remote controls to successfully remove the gallbladder of a patient 3,800 miles away in Strasbourg in 2001. Since then there have been isolated instances of other remote operations but no regular programme.
In 2011, the US military funded a five-year research project to determine how feasible such a programme might be with existing technology. It was led by Dr Roger Smith at the Nicholson Center for advanced surgery in Florida.
Smith explained to me how his study was primarily to determine two things: first, latency – the tiny time lag of high-speed connections over large distances – and second, how that lag interfered with a surgeon’s movements. His studies found that if the lag rose above 250 milliseconds “the surgeon begins to see or sense that something is not quite right”. But also that using existing data connections, between major cities, or at least between major hospital systems, “the latency was always well below what a human surgeon could perceive”.
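As a rough, illustrative calculation with round numbers (mine, not Smith's): light in optical fibre travels at roughly two-thirds of $c$, about $2\times10^{5}$ km/s, so a transatlantic link of about 6,000 km (the 3,800 miles quoted above) gives

$$ t_{\text{one-way}} \approx \frac{6{,}000\ \text{km}}{2\times10^{5}\ \text{km/s}} \approx 30\ \text{ms}, \qquad t_{\text{round trip}} \approx 60\ \text{ms}, $$

comfortably under the roughly 250 ms threshold at which Smith says a surgeon begins to sense the lag, leaving budget for switching, video encoding and the robot's own control loop.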
The problem lay in the risk of unreliability of the connection. “We all live on the internet,” Smith says. “Most of the time your internet connection is fantastic. Just occasionally your data slows to a crawl. The issue is you don’t know when that will happen. If it occurs during a surgery you are in trouble.” No surgeon – or patient – would like to see a buffering symbol on their screen.
The ways around that would involve dedicated networks – five lines of connectivity with a performance level at least two times what you would ever need, Smith says, “so that the chances of having an issue were like one in a million”.
Those kinds of connections are available, but the barrier to investment is more one of regulation and liability than of cost. Who would bear the risk of connection failure? The state in which the surgeon was located, or that in which the patient was anaesthetised – or the countries through which the cable passed? As a result, Smith says: “In the civilian world, there are few situations where you would say this is a must-have thing.”
He envisages three possible champions of telesurgery: the military, “If you could, say, create a connection where the surgeon could be in Italy and the patient in Iraq”; medical missionaries, “Where surgeons in the developed world worked through robots in places without advanced surgeons”; and Nasa, “At a point where you have enough people in space that you need to set up a way to do surgery.” For the time being the technology is not robust enough for any of these three.
For Jaime Wong the risks are likely to remain too great. Intuitive Surgical is pursuing the concepts of “telementoring” or “teleproctoring” rather than telesurgery. “The local surgeon would be performing the surgery, while our monitor would be remote,” he suggests, “and a specialist mentor could be looking at different camera views, providing second opinions. It will be like ‘phone a friend’.”
True telesurgery, Roger Smith suggests, also raises a further question, one which we may yet hear in the coming decade or so. “Would you have an operation without a surgeon in the room?” For the time being, the answer remains a no-brainer.
Geckos Can Produce New Brain Cells And Regenerate Their Brains After Injuries
Scientists from the University of Guelph, in Canada, have discovered a new type of stem cell in geckos that helps the lizards produce new brain cells and regenerate their brains after injuries. This discovery could help researchers develop new ways to regenerate human brain cells damaged or lost due to accidents or illnesses.
“The brain is a complex organ, and there are so few good treatments for brain injury, so this is a very exciting area of research. The findings indicate that gecko brains are constantly renewing brain cells, something that humans are notoriously bad at doing,” explained Professor Matthew Vickaryous from the Department of Biomedical Sciences within the Ontario Veterinary College (OVC).
This recent research, published in the journal Scientific Reports, is the first in the world to reveal the formation of new neurons, and the involvement of stem cells in brain regeneration, in gecko lizards.
New brain cell production and brain regeneration after injury observed in gecko lizards
“Most regeneration research has looked at zebrafish or salamanders. Our work uses lizards, which are more closely related to mammals than either fish or amphibians,” asserted Rebecca McDonald, a master’s student at the University of Guelph and the study’s leading author.
The scientists injected geckos with a chemical label that binds to the DNA of newly generated cells. The researchers then observed stem cells that regularly make new brain cells in the medial cortex, an area of the brain responsible for behavior and social cognition.
“The next step in this area of research is to determine why some species, like geckos, can replace brain cells while other species, like humans, cannot,” explained Rebecca McDonald.
According to the researchers, the next step would be to see whether the brain cell generation and brain regeneration observed in gecko lizards could be harnessed in new therapies for brain injuries in humans.
Google is laying the groundwork for life beyond advertising
Google is an advertising company.
In spite of its dominance in mobile operating systems, productivity tools like Gmail, and forays into media with subscription services such as Google Play Music and YouTube TV, the company still makes nearly 90% of its money from advertisements and its advertising platform. Advertising is also the primary driver of revenue for its holding company, Alphabet, which has a large portfolio of other businesses including AI research lab DeepMind, autonomous driving company Waymo, life sciences company Verily, and investment groups GV and CapitalG.
Many of those other companies are long-term bets that aren’t expected to pay off in the near term—or maybe ever. There are grandiose objectives like DeepMind’s quest to “solve intelligence,” or Calico’s mission to cure death. But there are slightly more grounded ideas that are finally poised to be real contributors to Alphabet’s bottom line: Waymo and Google Cloud, its server space and AI services that can be rented based on usage.
Cloudy skies
Google doesn’t make it easy for investors to see exactly how much its cloud business actually makes, but earlier this year Cloud CEO Diane Greene mentioned in an interview that the service had crossed $1 billion in revenue per quarter.
Another barometer of the cloud’s success is Google’s “other revenue” line item, which the company uses to report revenue from everything that’s not advertising, such as services and hardware. Revenue from that segment is the company’s fastest growing, bringing in $4.4 billion in the latest quarter, up 37% year over year, the company reported last week. Google Cloud also announced it had added large new customers including Target, Carrefour, and PwC.
This revenue is dangerously close to surpassing Google’s ad network, its second-largest revenue stream, which brought in $4.8 billion last quarter on 14% year over year growth.
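A naive straight-line extrapolation (not a forecast, just an illustration of how close the race is) shows why: if both segments simply kept growing at last quarter's rates, a year from now

$$ \$4.4\text{B} \times 1.37 \approx \$6.0\text{B} \quad \text{vs.} \quad \$4.8\text{B} \times 1.14 \approx \$5.5\text{B}, $$

meaning “other revenue” would overtake the ad network within roughly a year on current trajectories.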
But Google also faces huge competition in the cloud space. Microsoft and Amazon have both reported substantially bigger revenue from the market, which is estimated to be more than $180 billion this year, according to Gartner. When asked about this during a conference call with investors earlier in the week, Google CEO Sundar Pichai shrugged off the implication that there’s not enough business to go around.
“It feels far from a zero-sum game,” Pichai said. “Businesses are going to embrace multiple clouds over time too. So, I think not only is this early, but I think it is going to transform. And there is a lot of opportunity here.”
One advantage Google holds over its competition is its deep bench of AI researchers, whose work is constantly being translated into cloud products. At Google Cloud Next, its annual developer conference held last week, the company announced that it was readying a product for call centers that would allow human-sounding “virtual agents” to handle simple calls, handing off more complex questions to humans. It also announced a suite of self-improving AI algorithms called AutoML; Quartz reported on the company’s research into this field before it was turned into a product.
Buried in the conference’s announcements was a new hardware play by Google—a smaller, mobile version of its Tensor Processing Unit used in Google datacenters. Google hasn’t released specifications on the mobile chip yet, but the company claims that the datacenter version offers up to 30x speedups for machine learning while reducing power consumption.
Getting a chip like this into Google’s hardware products, such as its Pixel phones or Google Home devices, could mean the devices would be able to use AI to understand and answer questions without contacting Google servers. That capability would be a boon for privacy—every query wouldn’t be stored on Google servers—but also improve the speed of the devices. Elsewhere, the chip could sustain Google’s dominance as the go-to framework for building artificial intelligence algorithms, extending AI into smart sensors for enterprise use.
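As a hedged illustration of what “answering questions without contacting Google servers” looks like in practice: a model converted for on-device use is loaded and run locally through an interpreter. The sketch below uses the TensorFlow Lite Python API (the workflow announced alongside the edge chip); the model file name, input, and intent-classification framing are all hypothetical.

```python
import numpy as np
import tensorflow as tf

# Hypothetical example: run a pre-converted .tflite model entirely on-device,
# so the query never has to leave the phone or smart speaker.
interpreter = tf.lite.Interpreter(model_path="assistant_intent_model.tflite")  # hypothetical file
interpreter.allocate_tensors()

input_details = interpreter.get_input_details()
output_details = interpreter.get_output_details()

# Fake input with whatever shape the model expects (e.g. an audio/text embedding).
x = np.random.rand(*input_details[0]["shape"]).astype(np.float32)

interpreter.set_tensor(input_details[0]["index"], x)
interpreter.invoke()                                   # inference runs locally
scores = interpreter.get_tensor(output_details[0]["index"])
print(scores)                                          # predictions computed on-device
```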
Waymo money
Last week also gave us a clearer look at how Alphabet’s autonomous driving company, Waymo, could start generating revenue. The company announced an expansion of its pilot program in Arizona, and told Quartz that five partners will pay Waymo to integrate autonomous vehicles into their businesses. These partners aren’t local businesses, either: Walmart, Marriott-owned Element Hotels, Avis, AutoNation, and DDR Corp. It’s easy to see the value of Waymo’s cars to these businesses, as they’d provide a cheap ride to a store, hotel, or rental-car office. On the other side, Waymo gets to lock up some long-term, potentially big-dollar partnerships to support its business. Neither Waymo nor the partners disclosed the amount of money changing hands in the deal, though the partnerships are expected to scale as Waymo enters more cities.
Alphabet is also making sizable investments in Waymo, including adding up to 62,500 autonomous minivans to its fleet from Fiat Chrysler. Back-of-the-napkin math suggests that the order for the minivans, which retail for about $40,000 each, could easily be a billion-dollar deal as they’re delivered over the next few years.
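Spelling out that back-of-the-napkin math:

$$ 62{,}500 \ \text{minivans} \times \$40{,}000 \approx \$2.5\ \text{billion}, $$

before any bulk discount, and before the cost of the self-driving hardware Waymo adds to each vehicle.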
As part of its “early rider” program, Waymo isn’t currently charging customers for the rides—but once the ride-hailing service launches in earnest, each of those 62,500 vehicles would be making money for the company alongside the corporate partnerships. Waymo’s ride-hailing ambitions can be seen in its permit application; Quartz previously reported that the company had filed the paperwork to start a commercial venture in Arizona this year.
But besides “later this year” we still don’t have a concrete timeline for Waymo’s commercial operation at scale. And even though Waymo and Google Cloud are going to add to Google’s bottom line, it’s a long road to the more than $20 billion the company generated from ads last quarter.
First, the good news: You probably aren’t any worse off, financially, than previous generations who managed to retire with dignity. But even if you have decent savings, odds are you still aren’t properly prepared to support yourself in retirement. That’s the bad news.
In the past 40 years, retirement systems in developed countries have gone through a major transition, from defined benefit pensions, where employers saved and invested on behalf of employees (and bore all the risks involved), to defined contribution plans, where employees get money in their own accounts and have to manage investments (and risk) on their own. Employers started ditching traditional pensions in the late 1970s after legislation passed that required them to fund pensions and fully account for the risks they bore in managing them. It proved easier to pass this risk on to employees.
This doesn’t mean that the current system is bad for workers—it’s just different. Some people are better off than they would be in a defined benefit world, while others are worse off. Rather than pine for the past and assume the sky is falling, many people could use a better sense of the retirement landscape.
The old system wasn’t so great
It’s tempting to idealize the past, whether it’s lost manufacturing jobs or supposed norms of civility, but the past is rarely as good as you remember. Defined benefit plans only covered, at their peak, 45% of workers in the US. Today, more than 60% of workers have some form of workplace retirement plan.
Defined benefit plans assume market and longevity risk, so beneficiaries are paid some fraction of their salary as long as they live, no matter how long that is or what happens to markets in the meantime. But employees had to deal with other risks. If they changed jobs before their pension vested, it was worthless. In a more dynamic economy, being tied to an employer poses risks for a worker’s salary and skills progression. Furthermore, if an employer goes bankrupt, pensions are taken over by regulators and are often slashed.
You have more than you realize
We often see scare stories from media outlets warning about how few people have saved enough for retirement. Indeed, many people do go into retirement without savings: some because they did not make enough money to save, others who could have saved but didn’t, and others who were unlucky with their investments. According to the Federal Reserve, in 2016 about 28% of 60-to-65-year-olds in the US had less than $25,000 in financial assets; in 1989, 36% had that little (adjusted for inflation).
Many people have decent savings when they account for all their sources of income. The American Enterprise Institute’s Andrew Biggs argues that low-income Americans don’t have much retirement savings, but they get a large benefit relative to their working incomes from Social Security. And Americans who work for the government still get a defined benefit pension.
When it comes to middle-class savers, even without a defined benefit plan, the picture isn’t so bad. Estimates from the Fed suggest that average financial savings among middle earners between ages 60 and 65 are about $240,000. That’s not enough for a plush retirement, but it translates to about $13,500 a year in income, which, along with about $30,000 in Social Security, is well above the poverty line. And it is more than people used to have. The average middle-earning household of the same age had saved only around $180,000 (in 2016 dollars) back in 1989.
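The implied arithmetic, for readers who want to check it (the exact annuity or withdrawal assumption isn't stated, so treat the rate as illustrative):

$$ \frac{\$13{,}500}{\$240{,}000} \approx 5.6\%\ \text{per year}, \qquad \$13{,}500 + \$30{,}000 \approx \$43{,}500\ \text{in total annual income}, $$

roughly in line with what a lifetime annuity bought in one's mid-60s might pay out, and well above the US poverty line for a one- or two-person household.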
The average earner might be in decent shape, or at least a little better off than previous generations, when it comes to saving. An important caveat, though, is that people are retiring with more debt than before.
Retirement isn’t risk free
Although things may not be as bad as they seem, there is scope for improvement. A problem bigger than under-saving is that retirement policy has never addressed the consequences of forcing households to bear so much risk.
Saving enough, managing investment risk, and knowing how much you can spend in retirement is hard. The retirement industry has cracked the saving problem, making it more common to automatically sign people up for plans, which boosts savings, and to put money in low-cost, well-diversified index funds, which reduces some risk and eliminates excessive fees.
But there’s still the risk of outliving your money, or of under-spending it and living out a needlessly thrifty retirement. Many retirees don’t have a good sense of their expenses, how to invest their savings after retirement, or how much they can afford to spend. Evidence suggests that even when people go into retirement with savings, they don’t spend much of it. It’s no wonder, since you don’t know how long you will live, what your health expenses will be, or what will happen to markets.
There are some potential solutions in the works for those problems. States are offering retirement accounts, and proposed legislation in Congress would make it cheaper for small plans to offer 401(k)s, which will increase saving. The legislation would also encourage 401(k)s to offer an annuity option, which transfers longevity and market risk to an insurance company. Americans need more education and advice on whether an annuity is right for them, what they can expect from Social Security, what sort of lifestyle their pensions can realistically support, and how to manage the risk of over- or under-spending.
If there is a widespread retirement crisis, it is a crisis of information, not under-saving. And that can be just as bad.
Neural network implemented with light instead of electrons
Can do basic character and object recognition.
Neural networks have a reputation for being computationally expensive. But only the training portion of things really stresses most computer hardware, since it involves regular evaluations of performance and constant trips back and forth to memory to tweak the connections among its artificial neurons. Using a trained neural network, in contrast, is a much simpler process, one that isn’t nearly as computationally complex. In fact, the training and execution stages can be performed on completely different hardware.
And there seems to be a fair bit of flexibility in the hardware that can be used for either of these two processes. For example, it’s possible to train neural networks using a specialized form of memory called a memristor or execute trained neural networks using custom silicon chips. Now, researchers at UCLA have done something a bit more radical. After training a neural network using traditional computing hardware, they 3D printed a set of panels that manipulated light in a way that was equivalent to processing information using the neural network. In the end, they got performance at the speed of light—though with somewhat reduced accuracy compared to more traditional hardware.
Lighten up
So how do you implement a neural network using light? To understand that, you have to understand the structure of a deep-learning neural network. In each layer, signals from an earlier one (or the input from a source) are processed by “neurons,” which then take the results and forward signals on to neurons in the next layer. Which neurons they send it to and how strong a signal they pass on are determined by the training they’ve undergone.
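A minimal sketch of that structure in conventional software terms (illustrative only, not the UCLA implementation): each layer applies learned weights to the previous layer's signals, passes the result through a nonlinearity, and forwards it on.

```python
import numpy as np

def relu(x):
    return np.maximum(0.0, x)

# Illustrative toy network. Each weight matrix W plays the role the printed
# panels play optically: it decides how strongly each "neuron" forwards its
# signal to each neuron in the next layer.
rng = np.random.default_rng(0)
layers = [rng.normal(size=(784, 128)),   # input -> hidden layer 1
          rng.normal(size=(128, 64)),    # hidden 1 -> hidden 2
          rng.normal(size=(64, 10))]     # hidden 2 -> 10 output classes

def forward(x, layers):
    for W in layers[:-1]:
        x = relu(x @ W)                  # weighted sum, then nonlinearity
    return x @ layers[-1]                # final layer: 10 class scores

scores = forward(rng.normal(size=784), layers)
print(scores.argmax())                   # the network's "decision"
```

In a trained network these weights would come from training rather than a random generator; the point here is only the layer-by-layer flow of signals that the stacked optical panels mimic.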
To do this with light, the UCLA team created a translucent, refracting surface. As light hits it, the precise structure of the surface determines how much light passes through and where it’s directed to. If you place another, similar layer behind the first, it’ll continue to redirect the light to specific locations. This is similar in principle to how the deep-learning network works, with each layer of the network redirecting signals to specific locations in the layer beyond.
In practical terms, the researchers trained a neural network, identified the connections it made to the layer beneath it, and then translated those into surface features that would direct light in a similar manner. By printing a series of these layers, the light would be gradually concentrated in a specific area. By placing detectors in specific locations behind the final layer, they’re able to tell where the light ends up. And, if everything’s done properly, where the light ends up should tell us the neural network’s decision.
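The optical read-out step can be sketched the same way: whatever intensity pattern lands on the output plane, the "decision" is simply whichever detector region collected the most light. A toy version, assuming a 2D intensity array and ten equal-width detector strips (the paper's actual detector layout isn't specified here):

```python
import numpy as np

def optical_decision(intensity, n_detectors=10):
    """Toy read-out: split the output plane into vertical strips, one per class,
    sum the light falling on each strip, and return the brightest one."""
    strips = np.array_split(intensity, n_detectors, axis=1)  # one strip per detector
    energy = [strip.sum() for strip in strips]               # total light per detector
    return int(np.argmax(energy))                            # index of the winning class

# Fake output plane with most of the light concentrated over detector 7.
plane = np.zeros((100, 100))
plane[:, 70:80] = 1.0
print(optical_decision(plane))   # -> 7
```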
The authors tried this with two different types of image-recognition tasks. In the first, they trained the neural network to recognize hand-written numbers, and then they translated and printed the appropriate screens for a grid of 10 photodetectors to record the output. This was done with a five-layer neural network, and the researchers duly printed out five layers of light-manipulating material. To provide the neural network with input, they also printed a sheet that allowed them to project the objects being recognized onto the first layer of the neural network.
Layering
When the UCLA researchers did this with hand-written digits, they ran into a problem: many digits (like 0 and 9) have open areas surrounded by the written portion of the digit. To 3D print a mask for projecting light in the shape of the digit, this has to be converted to a negative, with a filled-in area surrounded by open space. And that’s pretty difficult to 3D print, since at least some material has to be used to keep the filled-in area attached to the rest of the screen. This, they suspect, lowered the accuracy of the identification task. Still, they managed over 90-percent accuracy.
They did even better when performing a similar test with items of clothing. While the total accuracy was only 86 percent, the difference between running the neural network in software and running it using light was smaller. The researchers suspect the differences in performance mostly come down to the fact that full performance requires an extremely accurate alignment among all the layers of the neural network, and that’s hard to arrange when the layers are small physical sheets.
This may also explain why adding more layers to the light-based neural network had a very modest impact on accuracy.
Overall, it’s extremely impressive that this works at all. Matching the wavelengths and materials to get the light to bend in the right ways, 3D printing within sufficiently high tolerances to recapitulate the trained neural network—a lot had to come together to get this to work. And while performance is down compared to computer-based implementations, the researchers suspect that at least some of the problems are things that can be fixed by developing a better system to align the sheets that make up the different layers of the network, although that’s a challenge that should increase with the number of layers in the neural network.
And the authors seem to think it might be practically useful. They highlight the fact that calculations using light are extremely fast and most light sources are very low power.
But there are definitely some practical hurdles. The material is specific to a single wavelength of light, meaning that we can’t just stick anything in front of the system and expect it to work. Right now, that’s ensured by the projection system, but that relies on 3D printing a sheet to project a specific shape, which isn’t exactly a time-efficient process. Replacing these with a sort of monochrome projector system should be possible, but it’s not clear how much resolution matters to the system’s accuracy. So there’s some work to do before we’ll know if this sort of system could have practical applications.
Tesla Solar Will Be “Aggressively Ramping” — Stay Tuned
With so much emphasis on the Model 3, the question on many people’s minds is, “Will Tesla’s solar efforts finally begin to gain traction?” According to USA Today, “The company is going to double down on selling solar through its 110 US Tesla stores, which it has reconfigured in past months to ensure that Powerwall is on prominent display and in-house solar experts, some of whom worked Home Depot aisles for Tesla, are at the ready.”
Solar panel sales are being actively promoted within Tesla’s own stores now (Image: Tesla)
In an interview with USA Today, JB Straubel, Tesla’s co-founder and CTO, says, “No one should see us as stepping back from solar. In fact, it’s the opposite. It’s like with Model 3 — people have come flooding in and are waiting on the product. So now we’re aggressively ramping our capacity.”
Tesla has eliminated door-to-door solar sales and is currently phasing out its partnership with Home Depot, through which it sold its solar products. Instead, Tesla is consolidating its external solar sales efforts and bringing them all in-house. Straubel explains, “We’re focused intently on the customer experience, not on having a higher market share. … We’re looking at the bigger picture.”
To accompany Tesla’s solar sales, the company is also pushing its Powerwall home energy storage product. Although Tesla wouldn’t disclose the level of demand for Powerwall, Straubel admits “it’s a big success,” particularly in Australia and Europe, and Powerwall deliveries currently prioritize customers who have purchased both solar and Powerwall.
Solar roof tiles cover the garage of a Model S owner parked next to two Tesla Powerwalls (Image: Tesla)
A production push for Tesla’s solar panels is underway right now. However, Straubel notes that the production ramp for Tesla’s solar roof tiles is further out — expected to accelerate next year. USA Today notes, “All of that means if you’re interested in going solar with Tesla right now, you’ll have to be patient.”
Further delaying matters, “some of [Tesla’s] current solar capacity also has been siphoned off by high-profile efforts such as helping Puerto Rico switch to solar power in the wake of last year’s hurricane.”
Nevertheless, Tesla’s fans are anxiously awaiting the company’s solar offerings. USA Today spoke with solar energy blogger Jon Weaver, who says, “the feeling is if you want the hottest and latest stuff, you have to go with Elon. The hope is still that he’ll deliver all of this at a price that works for the masses.”
A look at Tesla’s solar panels and Powerwall (Image:Tesla)