http://architosh.com/2017/02/will-apple-forego-intel-why-amds-new-ryzen-chips-look-smart-for-future-macs/

Will Apple Forego Intel?—Why AMD’s New RYZEN Chips Look Smart for Future Macs

Apple’s Mac design philosophy change hasn’t been lost on the company’s pro users. AMD’s new RYZEN 7 CPUs promise more computing muscle per watt than Intel’s and offer a better philosophical fit for Apple. The 1700, in particular, is ideal for a future Mac Pro.

Intel may have just lost the performance crown for microprocessors for PCs. Against a backdrop of positivity for AMD these days, the new AMD RYZEN 7 1800X just set a new world record score for Cinebench, a respected CPU performance benchmark.

Yet, the story isn’t really about sheer performance but that AMD’s new chip line delivers stunning performance per watt compared to Intel’s i7. And that’s where this gets interesting for Apple. For more than half a decade now, Apple’s design philosophy for devices has favored economy of energy consumption over maximum performance.

This is where there seems to be better alignment. AMD’s new chip is extremely Apple-esque.

The Apple-Esqueness of RYZEN 7

The new RYZEN 7 chip lineup includes three new chips: the 1800X, 1700X, and 1700. They roughly compete with Intel’s i7 line including the i7-6900K, i7-6800K, and i7-7700K, respectively. While all three deliver similar performance, according to AMD, they do so at much smaller wattages.

Key Takeaway

Apple’s designs are all about thinness, lightness, and long battery life, and thus the company selects microprocessors based on energy requirements rather than sheer performance. AMD’s new RYZEN 7 chips excel in exactly this area, outclassing Intel’s i7s in performance per watt and beating them soundly on price too. This could be a game-changer.

The 1800X and 1700X both have TDPs (thermal design power ratings) of 95W (watts), compared to 140W for their i7 competition. That’s more than a 30% savings in power draw. But the big kicker is the cost at the high end: the RYZEN 7 1800X is just $499 per chip compared to $1,089 for Intel. The mid-range and lower-end models save less money, but the 1700 in particular delivers twice the cores (8) and twice the threads (16) of the competing i7-7700K while consuming roughly 30% less energy.
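
A quick sanity check on those savings claims — a trivial sketch, using only the figures quoted in this article:

```python
# Launch figures as cited in this article (not independent measurements).
amd = {"tdp_w": 95, "price_usd": 499}      # RYZEN 7 1800X
intel = {"tdp_w": 140, "price_usd": 1089}  # i7-6900K

tdp_savings = 1 - amd["tdp_w"] / intel["tdp_w"]
price_savings = 1 - amd["price_usd"] / intel["price_usd"]
print(f"TDP savings:   {tdp_savings:.0%}")    # 32%
print(f"price savings: {price_savings:.0%}")  # 54%
```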

01 – AMD RYZEN 7 CPUs perform as well as or better than Intel’s finest, the company says, but deliver that performance at far smaller wattages. On average they draw 30% less energy, making them ideal for Apple’s minimal-TDP hardware designs.

These are big advantages even if independent third-party tests show the performance is slightly less than what AMD is touting. What matters these days for Apple is designing around minimal TDPs.

Preserving the “Conserving Energy” Apple Ethos

A Forbes article about the new MacBook Pro noted that Apple is designing for the needs of conserving energy rather than for performance. “Here’s how Apple deftly puts it in their ad copy,” writes Brooke Crothers: “‘Intel processors deliver pro-level processing performance while conserving energy.’”

But where is this “pro-level” performance? Apple’s new MacBook Pro really isn’t particularly faster than the model it replaced. As the Forbes article points out, Apple seriously traded down wattage in the latest update, going from a 28W 5th-gen “Broadwell” part to a 15W 6th-gen “Skylake” part in the entry model.

Some have suggested Apple would at some point use its own A-series (ARM-based) processors in future Macs, since they are the ultimate in performance per watt. But such a direction would have serious consequences for macOS and its developers, forcing them through another microprocessor architecture change. If Apple’s power envelope constraints are truly that important, then why not consider AMD’s new RYZEN 7 chips?

02 – AMD’s new RYZEN promises a return to true competition in the CPU arena, say multiple leading analysts and experts. (image: AMD)

Take a look at today’s current iMac 5K model. It uses an Intel 14nm “Skylake” 4-core i7-6700K processor with a TDP of 91W. Based on the iMac’s actual operating frequency (Apple regularly dials chips down), the effective TDP is possibly in the low-80W range. But AMD’s RYZEN 7 1700 is already at 65W, and it has 8 cores.

An Apple and AMD Get-Together?

For Apple to go with AMD after years of embracing Intel seems questionable. The prestige of “Intel inside” and the benefits of working with the industry leader seem locked in. But on the flip side, if rivals adopt RYZEN first, Apple’s PC competitors could gain notable advantages and ruin the Cupertino company’s ability to drop those famous marketing bombs about thinnest, lightest, and longest battery life.

What would Apple do without those marketing bullet points?

The PC market is also rebounding and growing in areas that are typically Apple’s turf, like high-end personal computers and more expensive laptops. It should be said, too, that Apple chose AMD’s new Polaris graphics over Nvidia’s in the latest MacBook Pro because Polaris was simply a good fit for Apple’s thin design. And Canalys analyst Daniel Matte stated for Forbes that “Apple’s desires probably informed AMD’s [Polaris] roadmap.”

Apple has been about thin designs and tight TDPs for a long time now. Is it possible that when AMD decided to work from a clean slate four years ago, it began the RYZEN line with the possibility of wooing Apple at some point in the future? Wouldn’t getting Apple be a huge win for them?

It certainly would. Whether Apple is tempted to explore a new dancing partner, however, is anyone’s best guess.

 

http://www.smh.com.au/technology/technology-news/razer-blade-stealth-review-beautiful-ultrabook-gets-cheaper-more-powerful-20170223-gujjkh.html

Razer Blade Stealth review: beautiful ultrabook gets cheaper, more powerful

The original Razer Blade Stealth made an impact as a 12.5-inch Windows laptop that seemed determined to match or beat Apple’s 12-inch MacBook at every turn — from specs to design to portability to price — and in my opinion it succeeded.

The recently released refresh of the machine pushes the boundaries of the small form factor even further, and manages to do so at a lower price point.

Starting at $1499.95, the new Stealth looks identical to the old one, combining a sleek, super-sturdy black metal frame with individually lit RGB keys. It’s up to you if you want to keep things stealthy with plain white, green or any other colour keys, or go nuts with bright neons, rainbow waves or reactive animations as you type.

It’s a gimmick you usually only see on keyboards and laptops meant specifically for gaming, but it does wonders for productivity too (not to mention aesthetics). For example you can easily set the keys to change their lighting pattern so that every time you open Photoshop your favourite shortcut keys are colour-coded.

Once again the keys are spaced nicely and are comfortable to type on, but only as far as you can expect on such a light and thin ultrabook. The Stealth weighs in at 1.29kg and is a mere 13mm thick, and consequently key travel is shallow enough that long sessions of typing had me feeling some fatigue. The touchpad is a similar story: it feels great and works perfectly, but bigger would be nicer. Thankfully the screen is multi-touch, so even if you’re not packing a mouse you can get around OK.

Elsewhere you still have the two standard USB 3.0 ports plus a full sized HDMI, and a Thunderbolt 3 port for fast charging, DisplayPort or USB-C devices.

Inside, the machine follows the exact philosophy of its 2016 forebear but with a few slight upgrades in just the right places.

It still runs an Intel i7 processor, but this time it’s a 7th Gen Kaby Lake chip, clocked slightly higher than last year’s CPU at 2.7GHz (turbo to 3.5GHz). It also packs integrated Intel graphics (like pretty much all ultrabooks), but they’ve had a bit of a bump from the HD 520 to the HD 620. All models of the Stealth now come with a healthy 16GB of RAM, making even the entry-level $1500 unit a force to be reckoned with.

In fact the only difference between the $1500, $1850 and $2100 versions of this machine is the amount of in-built storage you get (128GB, 256GB or 512GB respectively).

There is also an option to get the Stealth with a spectacular 4K screen, which achieves 100 per cent of Adobe RGB, in a 512GB model ($2400) or a 1TB model ($3000). But given the battery life on my QHD review unit struggles to make it to seven hours of continuous use, I’d only suggest you go 4K if you absolutely need the extra resolution or colour accuracy it brings, because it will drain your power.

So what does the machine’s extra grunt mean in real world terms? Well for starters the Kaby Lake makes this thing a multi-tasking beast, with the ability to handle a number of browser tabs that would have buckled last year’s model. All your standard productivity and entertainment tasks — like office software and streaming video — are buttery smooth.

Without a dedicated GPU, an ultrabook is probably not for you if you need to do graphics-intensive work or are looking for a portable gaming station, but within its category the Stealth does very well. If all you want to do is play a few rounds of Overwatch, and you’re happy to crank the settings down, there’s more than enough grunt here.

In other territories Razer offers a product known as the Core, which fits a standard desktop graphics card and can plug seamlessly into the Stealth for serious graphics power when you’re at home. Unfortunately there’s still no word on when the Core might make it down under.

Overall, the reduction in price and increase in specs has pushed the Stealth to the head of the pack when it comes to small laptops, with the 256GB Stealth (at $1850) weighing up favourably with comparable Dell XPS 13 and MacBook models.

The cheapest XPS 13 model that comes with a 7th Gen i7 costs $2299, and it packs half the RAM of the Stealth (8GB) and no touchscreen. Meanwhile the $1999 MacBook, also with 8GB of RAM, comes with a much slower Core M3 mobile chipset. Even the 256GB 13-inch MacBook Pro packs an i5 chipset and 8GB of RAM, and costs $2699.

As with last time, the only real Achilles heel of the Stealth is its battery life, which Razer has clearly sacrificed at the altar of more powerful internals. You’re looking at eight hours maximum, which, to put it bluntly, is below average for this product category. You can kill it in two if you try.

The company has recently announced a huge battery pack that can charge your laptop and phone, which might boost your Stealth’s life to 15 hours, but that kind of defeats the purpose of a super-light laptop.

Being cheaper and more powerful than the competition, the beautiful Stealth represents great value. But if getting a whole working day out of your lightweight travel computer without plugging in is important to you, the Stealth might fall short.

http://www.wired.co.uk/article/deepmind-nhs-ai-kidney-royal-free

DeepMind’s Streams app is reportedly ‘saving NHS nurses two hours a day’

The data-processing deal has proved controversial with privacy advocates

Normal ultrasound scan of the left kidney. (Getty Images / Media for Medical)

DeepMind, Google’s London-based AI arm, has proved controversial over its data-sharing deals with the NHS. But the Royal Free London hospital, which has been using the firm’s app to detect early signs of kidney failure, has come to its defence, saying the technology is saving staff time.

In November 2016, a revised deal between the NHS and DeepMind was published that outlined how the artificial intelligence company would use patient data to deliver early warning signals as part of a five-year contract.

The system in use is called Streams and sees patient data being scanned by the app to predict when an acute kidney injury (AKI) is likely to occur. “Within a few weeks of being introduced, nurses who have been using Streams report it has been saving them up to two hours every day, which means they can spend more time face-to-face with patients,” the Royal Free said in a statement.
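
Streams is reported to be built on the NHS’s national AKI detection algorithm, which flags patients whose serum creatinine rises sharply against their baseline. A minimal sketch of that style of staging rule — the thresholds follow the standard KDIGO ratio criteria, while the function name and units are illustrative, not DeepMind’s code:

```python
def aki_stage(current_umol_l: float, baseline_umol_l: float) -> int:
    """Return a KDIGO-style AKI stage (0 = no alert) from the ratio of a
    patient's current serum creatinine to their baseline value."""
    ratio = current_umol_l / baseline_umol_l
    if ratio >= 3.0:
        return 3  # stage 3: creatinine at least tripled
    if ratio >= 2.0:
        return 2  # stage 2: at least doubled
    if ratio >= 1.5:
        return 1  # stage 1: risen by half or more
    return 0

# A jump from a baseline of 80 to 130 umol/L would trip a stage-1 alert.
assert aki_stage(130, 80) == 1
```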

It is claimed that more than 26 doctors and nurses at the hospital are using the app and that it is alerting them up to 11 times per day to patients at risk. In December, a second NHS hospital signed up to work with DeepMind’s Streams app.

In one example highlighted by the Royal Free, the app flagged a problem with the kidney function of Afia Ahmed, a 38-year-old patient from Hampstead, and doctors were reportedly able to treat the condition before it developed any further.

NHS figures say acute kidney disease costs the NHS more than £1 billion per year. The Royal Free added that more clinicians will be using the technology in the coming months and alerts for sepsis and organ failure will also be introduced. The NHS is paying DeepMind for its technology, but has refused to say how much.

“On one day this week, the app alerted us to 11 patients, ranging from a young cancer patient to an elderly patient suffering life-threatening dehydration, who were at risk of developing AKI,” Sarah Stanley, a consultant nurse at the Royal Free said in a statement.

The contract between the Royal Free and the Google firm was initially revealed by New Scientist in April 2016 and saw heavy criticism over how much patient data was being shared.

In November, it was revealed that the DeepMind contract would see data from 1.6 million NHS patients shared each year, with five years of historical data also passed to the Google arm. Mustafa Suleyman, a DeepMind co-founder, told WIRED it was unfair to single his company out and that the type of data-sharing it was doing was standard across other types of NHS contracts.

“They have listened reasonably to what objections have been made and they do try to address some of them, but entirely on [DeepMind’s] terms,” Eerke Boiten, director of the Centre for Cyber Security Research at the University of Kent, told WIRED.

At the time, the UK’s information commissioner, which is in charge of data protection regulation, confirmed an investigation into the data-sharing agreement was ongoing. Since then, it has not announced any findings.

http://www.kurzweilai.net/why-you-should-eat-10-portions-of-fruit-or-vegetables-a-day

Why you should eat 10 portions of fruit or vegetables a day

February 24, 2017

Eating 800 grams a day (about ten portions*) of fruit or vegetables could reduce your chance of heart attack, stroke, cancer, and early death, scientists from Imperial College London conclude from a meta-analysis of 95 studies on fruit and vegetable intake.

The study, published in an open-access paper in the International Journal of Epidemiology, included 2 million people worldwide and assessed up to 43,000 cases of heart disease, 47,000 cases of stroke, 81,000 cases of cardiovascular disease, 112,000 cancer cases and 94,000 deaths.

About 7.8 million premature deaths worldwide could potentially be prevented each year if people followed this protocol, the researchers say.

Compared to not eating any fruits and vegetables, a daily intake of 200 grams (two and a half portions) was associated with a 16% reduced risk of heart disease, an 18% reduced risk of stroke, a 13% reduced risk of cardiovascular disease, a 4% reduction in cancer risk, and a 15% reduction in the risk of premature death.

However, a higher intake of fruits and vegetables of 800 grams a day was associated with a 24% reduced risk of heart disease, a 33% reduced risk of stroke, a 28% reduced risk of cardiovascular disease, a 13% reduced risk of total cancer,** and a 31% reduction in the risk of dying prematurely.***
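
Note that the 800-gram figures are not the 200-gram figures compounded four times over. A naive log-linear extrapolation — a toy model the paper itself does not use — predicts much larger reductions than the study reports, which shows how strongly the fitted dose-response curve flattens at higher intakes:

```python
# Naive log-linear model: RR(dose) = RR_200 ** (dose / 200).
# Figures are the relative risks implied by the percentages quoted above.
rr_200 = {"heart disease": 0.84, "stroke": 0.82, "cancer": 0.96}
reported_800 = {"heart disease": 0.76, "stroke": 0.67, "cancer": 0.87}

for outcome, rr in rr_200.items():
    naive = rr ** 4  # 800 g/day = 4 x 200 g/day
    print(f"{outcome}: naive RR {naive:.2f} vs reported {reported_800[outcome]:.2f}")
# heart disease: naive RR 0.50 vs reported 0.76
# stroke: naive RR 0.45 vs reported 0.67
# cancer: naive RR 0.85 vs reported 0.87
```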

The current UK guidelines suggest you eat at least five portions, or 400 grams, per day, but fewer than one in three UK adults are thought to even meet this target. The U.S. Health and Human Services/USDA guidelines use a different metric: “The recommended amount of vegetables in the Healthy U.S.-Style Eating Pattern at the 2,000-calorie level is 2½ cup-equivalents of vegetables per day and 2 cup-equivalents of fruit per day.”


Foods that are best at disease prevention, according to the study

To prevent heart disease, stroke, cardiovascular disease, and early death: apples, pears, citrus fruits, salads, and green leafy vegetables such as spinach, lettuce and chicory, and cruciferous vegetables such as broccoli, cabbage and cauliflower.

To reduce cancer risk: green vegetables, such as spinach or green beans, yellow vegetables, such as peppers and carrots, and cruciferous vegetables.


Reasons for health benefits

So why do fruit and vegetables have such profound health benefits? According to Dagfinn Aune, PhD, lead author of the research, from the School of Public Health at Imperial: “Fruit and vegetables have been shown to reduce cholesterol levels, blood pressure, and to boost the health of our blood vessels and immune system. This may be due to the complex network of nutrients they hold. For instance they contain many antioxidants, which may reduce DNA damage, and lead to a reduction in cancer risk.”

He also noted that compounds called glucosinolates in cruciferous vegetables, such as broccoli, activate enzymes that may help prevent cancer. And fruit and vegetables may also have a beneficial effect on the naturally-occurring bacteria in our gut.

Most beneficial compounds can’t be easily replicated in a pill, he said: “Most likely it is the whole package of beneficial nutrients you obtain by eating fruits and vegetables that is crucial to health.

“This is why it is important to eat whole plant foods to get the benefit, instead of taking antioxidant or vitamin supplements, which have not been shown to reduce disease risk.”

In the paper, the researchers qualify these statements, noting that they assume the observed associations are causal (there could be other causes of improved health). The team, however, took into account some other factors, such as a person’s weight, smoking, physical activity levels, and overall diet.

“We need further research into the effects of specific types of fruits and vegetables and preparation methods of fruit and vegetables,” Aune suggested. “We also need more research on the relationship between fruit and vegetable intake with causes of death other than cancer and cardiovascular disease. However, it is clear from this work that a high intake of fruit and vegetables hold tremendous health benefits, and we should try to increase their intake in our diet.”

This project was funded by Olav og Gerd Meidel Raagholt’s Stiftelse for Medisinsk Forskning, the Liaison Committee between the Central Norway Regional Health Authority (RHA) and the Norwegian University of Science and Technology (NTNU), and the Imperial College National Institute of Health Research (NIHR) Biomedical Research Centre (BRC).

* A portion (80 grams) of fruit equals approximately one small banana, apple, pear or large mandarin; three heaped tablespoons of cooked vegetables such as spinach, peas, broccoli or cauliflower count as one portion.

** For cancer, no further reductions in risk were observed above 600 grams per day.

*** The team was not able to investigate intakes greater than 800 g a day. The team also did not find significant differences between raw and cooked vegetables in relation to early death, and noted that other specific fruits and vegetables, as well as preparation methods, may also play a role.



Abstract of Fruit and vegetable intake and the risk of cardiovascular disease, total cancer and all-cause mortality–a systematic review and dose-response meta-analysis of prospective studies

Background: Questions remain about the strength and shape of the dose-response relationship between fruit and vegetable intake and risk of cardiovascular disease, cancer and mortality, and the effects of specific types of fruit and vegetables. We conducted a systematic review and meta-analysis to clarify these associations.

Methods: PubMed and Embase were searched up to 29 September 2016. Prospective studies of fruit and vegetable intake and cardiovascular disease, total cancer and all-cause mortality were included. Summary relative risks (RRs) were calculated using a random effects model, and the mortality burden globally was estimated; 95 studies (142 publications) were included.

Results: For fruits and vegetables combined, the summary RR per 200 g/day was 0.92 [95% confidence interval (CI): 0.90–0.94, I² = 0%, n = 15] for coronary heart disease, 0.84 (95% CI: 0.76–0.92, I² = 73%, n = 10) for stroke, 0.92 (95% CI: 0.90–0.95, I² = 31%, n = 13) for cardiovascular disease, 0.97 (95% CI: 0.95–0.99, I² = 49%, n = 12) for total cancer and 0.90 (95% CI: 0.87–0.93, I² = 83%, n = 15) for all-cause mortality. Similar associations were observed for fruits and vegetables separately. Reductions in risk were observed up to 800 g/day for all outcomes except cancer (600 g/day). Inverse associations were observed between the intake of apples and pears, citrus fruits, green leafy vegetables, cruciferous vegetables, and salads and cardiovascular disease and all-cause mortality, and between the intake of green-yellow vegetables and cruciferous vegetables and total cancer risk. An estimated 5.6 and 7.8 million premature deaths worldwide in 2013 may be attributable to a fruit and vegetable intake below 500 and 800 g/day, respectively, if the observed associations are causal.

Conclusions: Fruit and vegetable intakes were associated with reduced risk of cardiovascular disease, cancer and all-cause mortality. These results support public health recommendations to increase fruit and vegetable intake for the prevention of cardiovascular disease, cancer, and premature mortality.

http://www.kurzweilai.net/brain-computer-interface-advance-allows-paralyzed-people-to-type-almost-as-fast-as-some-smartphone-users

Brain-computer interface advance allows paralyzed people to type almost as fast as some smartphone users

Coming next: controlling personal computers, phones, and tablets — and reaching out via the internet
February 24, 2017

Typing with your mind. You are paralyzed. But now, tiny electrodes have been surgically implanted in your brain to record signals from your motor cortex, the brain region controlling muscle movement. As you think of mousing over to a letter (or clicking to choose it), those electrical brain signals are transmitted via a cable to a computer (replacing your spinal cord and muscles). There, advanced algorithms decode the complex electrical brain signals, converting them instantly into screen actions. (credit: Chethan Pandarinath et al./eLife)

Stanford University researchers have developed a brain-computer interface (BCI) system that enables people with paralysis* to type (using an on-screen cursor) about three times faster, and at least as accurately, as any BCI reported to date.

Simply by imagining their own hand movements, one participant was able to type 39 correct characters per minute (about eight words per minute); the other two participants averaged 6.3 and 2.7 words per minute, respectively — all without auto-complete assistance (with it, typing could be much faster).

Those are communication rates that people with arm and hand paralysis would also find useful, the researchers suggest. “We’re approaching the speed at which you can type text on your cellphone,” said Krishna Shenoy, PhD, professor of electrical engineering, a co-senior author of the study, which was published in an open-access paper online Feb. 21 in eLife.

BrainGate and beyond

The three study participants used a brain-computer interface called the BrainGate Neural Interface System. On KurzweilAI, we first discussed BrainGate in 2011, followed by a 2012 clinical trial that allowed a paralyzed patient to control a robot.

BrainGate in 2012 (credit: Brown University)

The new research, led by Stanford, takes the BrainGate technology way further.** Participants can now move a cursor (by just thinking about a hand movement) on a computer screen that displays the letters of the alphabet, and they can “point and click” on letters, computer-mouse-style, to type letters and sentences.

The new BCI uses a tiny silicon chip, just over one-sixth of an inch square, with 100 electrodes that penetrate the brain to about the thickness of a quarter and tap into the electrical activity of individual nerve cells in the motor cortex.

As the participant thinks of a specific hand-to-mouse movement (pointing at or clicking on a letter), neural electrical activity is recorded using 96-channel silicon microelectrode arrays implanted in the hand area of the motor cortex. These signals are then filtered to extract multiunit spiking activity and high-frequency field potentials, then decoded (using two algorithms) to provide “point-and-click” control of a computer cursor.
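
The paper’s two decoders are purpose-built and more sophisticated than anything that fits in a few lines, but the core idea — learn a map from binned spike counts to cursor velocity, then apply it to each new bin — can be sketched with a least-squares linear decoder and simulated data standing in for real recordings. Every name and dimension here is illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)
n_bins, n_channels = 5000, 96  # e.g. short time bins from a 96-channel array

# Calibration data: known cursor velocities, plus simulated neural features
# that are a noisy linear function of those velocities.
true_velocity = rng.standard_normal((n_bins, 2))  # (vx, vy) per bin
tuning = rng.standard_normal((2, n_channels))     # per-channel tuning
spikes = true_velocity @ tuning + rng.standard_normal((n_bins, n_channels))

# Fit the decoder W by least squares: velocity ~ spikes @ W.
W, *_ = np.linalg.lstsq(spikes, true_velocity, rcond=None)

# At run time, each new bin of spike counts becomes a cursor velocity command.
vx, vy = spikes[0] @ W
print(f"decoded velocity: ({vx:+.2f}, {vy:+.2f})")
```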

What’s next

The team next plans to adapt the system so that brain-computer interfaces can control commercial computers, phones, and tablets — perhaps extending out to the internet.

Beyond that, Shenoy predicted that a self-calibrating, fully implanted wireless BCI system with no required caregiver assistance and no “cosmetic impact” would be available in five to 10 years (“closer to five”).

Perhaps a future wireless, noninvasive version could let anyone simply think to select letters, words, ideas, and images — replacing the mouse and finger touch — along the lines of Elon Musk’s neural lace concept?

* Millions of people with paralysis reside in the U.S.

** The study’s results are the culmination of the long-running multi-institutional BrainGate consortium, which includes scientists at Massachusetts General Hospital, Brown University, Case Western Reserve University, and the VA Rehabilitation Research and Development Center for Neurorestoration and Neurotechnology in Providence, Rhode Island. The study was funded by the National Institutes of Health, the Stanford Office of Postdoctoral Affairs, the Craig H. Neilsen Foundation, the Stanford Medical Scientist Training Program, Stanford BioX-NeuroVentures, the Stanford Institute for Neuro-Innovation and Translational Neuroscience, the Stanford Neuroscience Institute, Larry and Pamela Garlick, Samuel and Betsy Reeves, the Howard Hughes Medical Institute, the U.S. Department of Veterans Affairs, the MGH-Dean Institute for Integrated Research on Atrial Fibrillation and Stroke and Massachusetts General Hospital.


Stanford | Stanford researchers develop brain-controlled typing for people with paralysis


Abstract of High performance communication by people with paralysis using an intracortical brain-computer interface

Brain-computer interfaces (BCIs) have the potential to restore communication for people with tetraplegia and anarthria by translating neural activity into control signals for assistive communication devices. While previous pre-clinical and clinical studies have demonstrated promising proofs-of-concept (Serruya et al., 2002; Simeral et al., 2011; Bacher et al., 2015; Nuyujukian et al., 2015; Aflalo et al., 2015; Gilja et al., 2015; Jarosiewicz et al., 2015; Wolpaw et al., 1998; Hwang et al., 2012; Spüler et al., 2012; Leuthardt et al., 2004; Taylor et al., 2002; Schalk et al., 2008; Moran, 2010; Brunner et al., 2011; Wang et al., 2013; Townsend and Platsko, 2016; Vansteensel et al., 2016; Nuyujukian et al., 2016; Carmena et al., 2003; Musallam et al., 2004; Santhanam et al., 2006; Hochberg et al., 2006; Ganguly et al., 2011; O’Doherty et al., 2011; Gilja et al., 2012), the performance of human clinical BCI systems is not yet high enough to support widespread adoption by people with physical limitations of speech. Here we report a high-performance intracortical BCI (iBCI) for communication, which was tested by three clinical trial participants with paralysis. The system leveraged advances in decoder design developed in prior pre-clinical and clinical studies (Gilja et al., 2015; Kao et al., 2016; Gilja et al., 2012). For all three participants, performance exceeded previous iBCIs (Bacher et al., 2015; Jarosiewicz et al., 2015) as measured by typing rate (by a factor of 1.4–4.2) and information throughput (by a factor of 2.2–4.0). This high level of performance demonstrates the potential utility of iBCIs as powerful assistive communication devices for people with limited motor function.

http://www.kurzweilai.net/a-breakthrough-low-power-artificial-synapse-for-neural-network-computing

An ultra-low-power artificial synapse for neural-network computing

Brain-like device with 500 states instead of binary could one day communicate with live neurons, merging computers with the brain
February 24, 2017

(Left) Illustration of a synapse in the brain connecting two neurons. (Right) Schematic of artificial synapse (ENODe), which functions as a transistor. It consists of two thin, flexible polymer films (black) with source, drain, and gate terminals, connected by an electrolyte of salty water that permits ions to cross. A voltage pulse applied to the “presynaptic” layer (top) alters the level of oxidation in the “postsynaptic layer” (bottom), triggering current flow between source and drain. (credit: Thomas Splettstoesser/CC and Yoeri van de Burgt et al./Nature Materials)

Stanford University and Sandia National Laboratories researchers have developed an organic artificial synapse based on a new memristor (resistive memory device) design that mimics the way synapses in the brain learn. The new artificial synapse could lead to computers that better recreate the way the human brain processes information. It could also one day directly interface with the human brain.

The new artificial synapse is an electrochemical neuromorphic organic device (dubbed “ENODe”) — a mixed ionic/electronic design that is fundamentally different from existing and other proposed resistive memory devices, which are limited by noise, high required write voltages, and other factors*, the researchers note in a paper published online Feb. 20 in Nature Materials.

Like a neural path in a brain being reinforced through learning, the artificial synapse is programmed by discharging and recharging it repeatedly. Through this training, the researchers have been able to predict, to within 1 percent uncertainty, what voltage will be required to get the synapse to a specific electrical state and, once there, have it remain at that state.

“The working mechanism of ENODEs is reminiscent of that of natural synapses, where neurotransmitters diffuse through the cleft, inducing depolarization due to ion penetration in the postsynaptic neuron,” the researchers explain in the paper. “In contrast, other memristive devices switch by melting materials at relatively high temperatures (PCMs) or by voltage-induced breakdown/filament formation and ion diffusion in dense oxide layers (FFMOs).”

The ENODe achieves significant energy savings** in two ways:

  • Unlike a conventional computer, where you save your work to the hard drive before you turn it off, the artificial synapse can recall its programming without any additional actions or parts. Traditional computing requires separately processing information and then storing it into memory. Here, the processing creates the memory.
  • When we learn, electrical signals are sent between neurons in our brain. The most energy is needed the first time a synapse is traversed. Every time afterward, the connection requires less energy. This is how synapses efficiently facilitate both learning something new and remembering what we’ve learned. The artificial synapse, unlike most other versions of brain-like computing, also fulfills these two tasks simultaneously, and does so with substantial energy savings.

“More and more, the kinds of tasks that we expect our computing devices to do require computing that mimics the brain because using traditional computing to perform these tasks is becoming really power hungry,” said A. Alec Talin, distinguished member of technical staff at Sandia National Laboratories in Livermore, California, and co-senior author of the paper. “We’ve demonstrated a device that’s ideal for running these type of algorithms and that consumes a lot less power.”

A future brain-like computer with 500 states

Only one artificial synapse has been produced so far, but researchers at Sandia used 15,000 measurements of it to simulate how an array of them would work in a neural network. They tested the simulated network’s ability to recognize handwritten digits 0 through 9. Tested on three datasets, the simulated array was able to identify the digits with an accuracy between 93 and 97 percent.

This artificial synapse may one day be part of a brain-like computer, which could be especially useful for processing visual and auditory signals, as in voice-controlled interfaces and driverless cars, but without energy-consuming computer hardware.

This device is also well suited for the kind of signal identification and classification that traditional computers struggle to perform. Whereas digital transistors can be in only two states, such as 0 and 1, the researchers successfully programmed 500 states in the artificial synapse, which is useful for neuron-type computation models. In switching from one state to another they used about one-tenth as much energy as a state-of-the-art computing system needs to move data from the processing unit to the memory.
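
To get a feel for why hundreds of distinct states matter, consider what happens when a trained network’s weights must be snapped to a device’s fixed set of conductance levels. A toy sketch — illustrative only, not the Sandia simulation:

```python
import numpy as np

def quantize(weights: np.ndarray, n_states: int) -> np.ndarray:
    """Snap each weight to the nearest of n_states evenly spaced levels,
    mimicking a device with a fixed number of conductance states."""
    levels = np.linspace(weights.min(), weights.max(), n_states)
    idx = np.abs(weights[..., None] - levels).argmin(axis=-1)
    return levels[idx]

w = np.random.default_rng(1).standard_normal(10_000)
for n in (2, 16, 500):
    err = np.abs(quantize(w, n) - w).mean()
    print(f"{n:>3} states -> mean quantization error {err:.4f}")
# With 500 states the weights are essentially unchanged; with 2 (binary),
# most of the information in each weight is destroyed.
```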

However, this is still about 10,000 times as much energy as the minimum a biological synapse needs in order to fire**. The researchers hope to attain neuron-level energy efficiency once they test the artificial synapse in smaller devices.

Linking to live organic neurons

As noted above, this artificial synapse may one day be part of a brain-like computer for visual and auditory processing. Past efforts in this field have produced high-performance neural networks supported by artificially intelligent algorithms, but these depend on energy-consuming traditional computer hardware.

Every part of the device is made of inexpensive organic materials. These aren’t found in nature but they are largely composed of hydrogen and carbon and are compatible with the brain’s chemistry. Cells have been grown on these materials and they have even been used to make artificial pumps for neural transmitters. The switching voltages applied to train the artificial synapse (about 0.5 mV) are also the same as those that move through human neurons — about 1,000 times lower than the “write” voltage for a typical memristor.

That means it’s possible that the artificial synapse could communicate with live neurons, leading to improved brain-machine interfaces. The softness and flexibility of the device also lends itself to being used in biological environments.

This research was funded by the National Science Foundation, the Keck Faculty Scholar Funds, the Neurofab at Stanford, the Stanford Graduate Fellowship, Sandia’s Laboratory-Directed Research and Development Program, the U.S. Department of Energy, the Holland Scholarship, the University of Groningen Scholarship for Excellent Students, the Hendrik Muller National Fund, the Schuurman Schimmel-van Outeren Foundation, the Foundation of Renswoude (The Hague and Delft), the Marco Polo Fund, the Instituto Nacional de Ciência e Tecnologia/Instituto Nacional de Eletrônica Orgânica in Brazil, the Fundação de Amparo à Pesquisa do Estado de São Paulo and the Brazilian National Council.

* “A resistive memory device has not yet been demonstrated with adequate electrical characteristics to fully realize the efficiency and performance gains of a neural architecture. State-of-the-art memristors suffer from excessive write noise, write non-linearities, and high write voltages and currents.  Reducing the noise and lowering the switching voltage significantly below 0.3 V (~10 kT) in a two-terminal device without compromising long-term data retention has proven difficult.” … Organic memristive devices have been recently proposed, but are limited by “the slow kinetics of ion diffusion through a polymer to retain their states or on charge storage in metal nanoparticles, which inherently limits performance and stability.” — Yoeri van de Burgt et al., Nature Materials

** ENODe switches at low voltage and energy (< 10 pJ for 1000-square-micrometer devices), compared to an estimated ∼ 1–100 fJ per synaptic event for the human brain.
 

Abstract of A non-volatile organic electrochemical device as a low-voltage artificial synapse for neuromorphic computing

The brain is capable of massively parallel information processing while consuming only ~1–100 fJ per synaptic event. Inspired by the efficiency of the brain, CMOS-based neural architectures and memristors are being developed for pattern recognition and machine learning. However, the volatility, design complexity and high supply voltages for CMOS architectures, and the stochastic and energy-costly switching of memristors complicate the path to achieve the interconnectivity, information density, and energy efficiency of the brain using either approach. Here we describe an electrochemical neuromorphic organic device (ENODe) operating with a fundamentally different mechanism from existing memristors. ENODe switches at low voltage and energy (<10 pJ for 10³ μm² devices), displays >500 distinct, non-volatile conductance states within a ~1 V range, and achieves high classification accuracy when implemented in neural network simulations. Plastic ENODes are also fabricated on flexible substrates enabling the integration of neuromorphic functionality in stretchable electronic systems. Mechanical flexibility makes ENODes compatible with three-dimensional architectures, opening a path towards extreme interconnectivity comparable to the human brain.

http://www.680news.com/2017/02/26/sharp-vision-new-glasses-help-the-legally-blind-see/

Sharp vision: New glasses help the legally blind see

SAN FRANCISCO – Jeff Regan was born with underdeveloped optic nerves and had spent most of his life in a blur. Then four years ago, he donned an unwieldy headset made by a Toronto company called eSight.

Suddenly, Regan could read a newspaper while eating breakfast and make out the faces of his co-workers from across the room. He’s been able to attend plays and watch what’s happening on stage, without having to guess why people around him were laughing.

“These glasses have made my life so much better,” said Regan, 48, a Canadian engineer who lives in London, Ontario.

The headsets from eSight transmit images from a forward-facing camera to small internal screens — one for each eye — in a way that beams the video into the wearer’s peripheral vision. That turns out to be all that some people with limited vision, even legal blindness, need to see things they never could before. That’s because many visual impairments degrade central vision while leaving peripheral vision largely intact.

Although eSight’s glasses won’t help people with total blindness, they could still be a huge deal for the millions of people whose vision is so impaired that it can’t be corrected with ordinary lenses.

EYE TEST

But eSight still needs to clear a few minor hurdles.

Among them: proving the glasses are safe and effective for the legally blind. While eSight’s headsets don’t require the approval of health regulators — they fall into the same low-risk category as dental floss — there’s not yet firm evidence of their benefits. The company is funding clinical trials to provide that proof.

The headsets also carry an eye-popping price tag. The latest version of the glasses, released in mid-February, sells for about $10,000. While that’s $5,000 less than its predecessor, it’s still a lot for people who often have trouble getting high-paying jobs because they can’t see.

Insurers won’t cover the cost; they consider the glasses an “assistive” technology similar to hearing aids.

ESight CEO Brian Mech said the latest improvements might help insurers overcome their short-sighted view of his product. Mech argues that it would be more cost-effective for insurers to pay for the headsets, even in part, than to cover more expensive surgical procedures that may restore some sight to the visually impaired.

NEW GLASSES

The latest version of ESight’s technology, built with investments of $32 million over the past decade, is a gadget that vaguely resembles the visor worn by the blind “Star Trek” character Geordi La Forge, played by LeVar Burton.

The third-generation model lets wearers magnify the video feed up to 24 times, compared to just 14 times in earlier models. There’s a hand control for adjusting brightness and contrast. The new glasses also come with a more powerful high-definition camera.
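
ESight hasn’t published its video pipeline, but the core magnification step is ordinary digital zoom: crop the centre of each camera frame and scale the crop back up to display size. A minimal sketch, using nearest-neighbour scaling for brevity (a real product would use better interpolation and hardware acceleration):

```python
import numpy as np

def digital_zoom(frame: np.ndarray, factor: int) -> np.ndarray:
    """Crop the central 1/factor of the frame and blow it back up
    (nearest-neighbour), the basic operation behind a video magnifier."""
    h, w = frame.shape[:2]
    ch, cw = h // factor, w // factor
    top, left = (h - ch) // 2, (w - cw) // 2
    crop = frame[top:top + ch, left:left + cw]
    return crop.repeat(factor, axis=0).repeat(factor, axis=1)

frame = np.zeros((480, 640, 3), dtype=np.uint8)  # stand-in camera frame
print(digital_zoom(frame, 24).shape)  # (480, 624, 3): 640 // 24 == 26 columns
```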

ESight believes that about 200 million people worldwide with visual acuity of 20/70 to 20/1200 could be potential candidates for its glasses. That number includes people with a variety of disabling eye conditions such as macular degeneration, diabetic retinopathy, ocular albinism, Stargardt’s disease, or, like Regan, optic nerve hypoplasia.

So far, though, the company has sold only about 1,000 headsets, despite the testimonials of wearers who’ve become true believers.

Take, for instance, Yvonne Felix, an artist who now works as an advocate for eSight after seeing the previously indistinguishable faces of her husband and two sons for the first time via its glasses. Others, ranging from kids to senior citizens, have worn the gadgets to golf, watch football or just perform daily tasks such as reading nutrition labels.

EYING THE COMPETITION

ESight isn’t the only company focused on helping the legally blind. Other companies working on high-tech glasses and related tools include Aira, Orcam, ThirdEye, NuEyes and Microsoft.

But most of them are doing something very different. While their approaches also involve cameras attached to glasses, they don’t magnify live video. Instead, they take still images, analyze them with image recognition software and then generate an automated voice that describes what the wearer is looking at — anything from a child to words written on a page.

Samuel Markowitz, a University of Toronto professor of ophthalmology, says that eSight’s glasses are the most versatile option for the legally blind currently available, as they can improve vision at near and far distances, plus everything in between.

Markowitz is one of the researchers from five universities and the Center for Retina and Macular Disease that recently completed a clinical trial of eSight’s second-generation glasses. Although the results won’t be released until later this year, Markowitz said the trials found little risk to the glasses. The biggest hazard, he said, is the possibility of tripping and falling while walking with the glasses covering the eyes.

The device “is meant to be used while in a stationary situation, either sitting or standing, for looking around at the environment,” Markowitz said.

http://www.zdnet.com/article/zte-unveils-gigabit-phone-partners-with-intel-for-5g-it-baseband-unit/

ZTE unveils ‘Gigabit’ phone, partners with Intel for 5G IT baseband unit

ZTE has already made a slew of announcements at MWC, including its new ‘5uper Generation’ smartphone boasting download speeds of up to 1Gbps.

ZTE Gigabit Phone (image: supplied)

Chinese smartphone manufacturer ZTE unveiled its new Gigabit Phone at Mobile World Congress (MWC) in Barcelona on Sunday, boasting download speeds of up to 1Gbps.

The ZTE Gigabit Phone is powered by the Qualcomm Snapdragon 835 mobile platform with an integrated Snapdragon X16 LTE modem, and utilises a combination of carrier aggregation, 4×4 MIMO antenna technology, and 256-QAM modulation to achieve LTE download speeds that ZTE said are up to 10 times faster than first-generation LTE devices.
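
That 1Gbps headline is consistent with back-of-envelope LTE arithmetic. The configuration assumed below — three aggregated 20MHz carriers carrying ten spatial layers in total — is the one commonly cited for the Snapdragon X16, not a figure from ZTE, and the overhead factor is a rough assumption:

```python
subcarriers = 100 * 12     # per 20 MHz carrier: 100 resource blocks x 12
symbols_per_s = 14 * 1000  # 14 OFDM symbols per 1 ms subframe
bits_per_symbol = 8        # 256-QAM carries 8 bits per symbol
layers = 4 + 4 + 2         # 4x4 MIMO on two carriers, 2x2 on the third
overhead = 0.73            # rough allowance for coding and control channels

raw_bps = subcarriers * symbols_per_s * bits_per_symbol * layers
print(f"~{raw_bps * overhead / 1e9:.2f} Gbps usable")  # ~0.98 Gbps
```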

The smartphone also uses ZTE’s independently developed Pre5G Giga+ MBB solution and smart devices, which ZTE said triples data-processing capability compared with the current network.

ZTE touted the new device as improving users’ lifestyles by allowing for 360-degree panoramic virtual reality video, instant cloud storage, entertainment upgrades, and fast caching of ultra hi-fi music and movies, as well as “instant apps”, which the company said removes the need to download or install applications.

The new smartphone announcement comes after ZTE terminated its Kickstarter campaign for the Hawkeye phone — originally called Project CSX — which was expected to comprise features crowdsourced from the community.

The Hawkeye phone promised a self-adhering case and eye-tracking technology to enable hands-free experiences. While ZTE previously said it has not ruled out the Hawkeye completely, the crowdfunding campaign ended after it raised only $36,245 of its $500,000 funding goal.

Also at MWC, ZTE announced the launch of a 5G IT baseband unit (BBU) based on Intel architecture.

By utilising advanced software defined networking/network function virtualisation (SDN/NFV) technology, the modular IT BBU is compatible with 2G/3G/4G/Pre5G and supports cloud-radio access networks (C-RAN), distributed-RAN (D-RAN), and 5G central and distributed units (CU/DU), ZTE said.

“Intel technologies power the cloud and billions of smart, connected computing devices, so we are very pleased to be partners in the field of 5G and our deep cooperation will support our two companies’ long-term development,” said Jianguo Zhang, senior vice president of ZTE.

“As a leading communication equipment and solutions provider, 5G is part of ZTE’s core strategy and it is dedicated to the R&D of virtualisation technologies. The cooperation between ZTE and Intel will deliver a simpler, more flexible, and open network to telecom operators, and bring bigger value to users.”

Also at MWC, ZTE revealed plans to release a full range of 5G mmWave and Sub6GHz pre-commercial base stations, which the company said provide support for 3rd Generation Partnership Project (3GPP) standards, 5G new radio (NR), new air interfaces, and mainstream 5G frequency bands in the industry.

The data throughput on the 5G Sub6GHz base stations can reach 10 Gbps, ZTE said, with China Mobile successfully completing a demonstration of the pre-commercial base station.

ZTE said it also completed 5G single-point technology and prototype verification and has now entered the 5G solution verification and product R&D phase.

“ZTE will continue its innovation in the 5G field to meet the product and service needs of customers. ZTE will occupy a place in the world’s first 5G commercial market and lay a solid foundation for the future Internet of Things (IoT),” said Zhang.

The International Telecommunication Union (ITU) and 3GPP updated their 5G characteristics and requirements standards at the weekend, with speeds, spectrum, and latency all due to be decided later this year.

Under the current International Mobile Telecommunications (IMT) 2020 technical performance requirements, 5G networks must be capable of downlink peak data rates of 20Gbps; uplink peak data rates of 10Gbps; downlink user-experienced data rates of 100Mbps; uplink user-experienced data rates of 50Mbps; 4ms latency for enhanced mobile broadband; and 1ms latency for ultra-reliable low-latency communications devices.

5G networks must also enable mobility maximum speeds of between 0km/h and 10km/h for pedestrians, 10km/h to 20km/h for vehicles, and 120km/h to 500km/h for high-speed vehicles; a connection density of 1 million devices per square kilometre; downlink peak spectrum efficiency of 30 bits/Hz; uplink peak spectrum efficiency of 15 bits/Hz; and area traffic capacity of 10Mbps per square metre.

Other parameters being decided include energy efficiency, reliability, control plane latency, and mobility interruption time.

https://www.theregister.co.uk/2017/02/27/google_project_zero_reports_flaw_in_ie_edge/

Google’s Project Zero reveals another Microsoft flaw

Edge and IE can find themselves running unexpected code if fed a cooked page by a malicious site

Google’s Project Zero has revealed a bug in Microsoft’s Internet Explorer and Edge browsers.

First turned up on November 25, the bug offers evildoers a technique that would let a malicious web site crash a visitor’s browser as the main course, with code execution as the dessert.

Detailed here, the bug works by attacking a type confusion in HandleColumnBreakOnColumnSpanningElement.

A 17-line proof-of-concept crashes that process, with a focus on two CPU registers, rcx and rax.

“An attacker can affect rax by modifying table properties such as border-spacing and the width of the first th element,” Project Zero’s post states — so a crafted web page just needs to point rax at memory the attacker controls.

The issue was published at the end of Project Zero’s 90-day disclosure deadline, and it remains unpatched.

Earlier this month, Redmond delayed February’s Patch Tuesday, but last week it managed to emit a bunch of fixes for Adobe Flash. ®

https://www.washingtonpost.com/local/was-i-wrong-to-think-an-apple-watch-might-improve-my-life/2017/02/26/aecaa914-fbd0-11e6-9845-576c69081518_story.html?utm_term=.11cd1c878c36

Was I wrong to think an Apple Watch might improve my life?

February 26, 2017
It took me a while to persuade my new Apple Watch to stop calling me Ruth.

Whenever the watch thought I was in need of positive reinforcement, the words “Well done, Ruth!” would light up on its shiny, touch-sensitive face.

“Well done, Ruth!” it would announce when I’d walked 10,000 steps or stood up at my desk or breathed.

Ruth happens to be the name of My Lovely Wife and, in fact, the smartwatch was a Christmas gift from her.

I’m quite fond of analog things — cameras, drum sets, books — and a mechanical wristwatch is a great marvel of human engineering, all those gears and springs. But I’d been missing the fitness tracker that I wore, if ever so briefly, and I mentioned to Ruth that maybe an Apple Watch could serve that purpose, and more. With it, I would step my way to health, while also knowing the time, temperature and location of the nearest Starbucks.

And then the watch kept calling me Ruth.

Maybe, I thought, the watch preferred Ruth, wished it was on her wrist instead of mine. Maybe when I wasn’t wearing it — when I took it off at night to recharge (for battery life is the great Achilles’ heel of Apple’s anemic products) — Ruth was slipping it on, allowing the watch’s built-in accelerometer and gyroscope to become accustomed to her gait.

Was the watch like a cheating spouse, accidentally blurting the name of its lover? “Well done, Ruth! Er, John!”

The watch had other quirks, too. Every now and then the image on the display would get huge. The cuff of my sleeve would brush the face of the watch and a portion of the display would inflate to gargantuan size. Rather than show an entire analog clock face, with a delicate red second hand sweeping around majestically, it would show just one corner of a huge and queasily pixelated 4.

I would jab at the screen, then pinch my thumb and forefinger on it as if I was trying to pick up a dropped needle. Sometimes that would work and the old display would snap back. Other times I would have to push the digital crown of the watch repeatedly, cycling through various functions to return to the familiar display.

It turned out this sudden, unintended magnification was happening because the zoom function was enabled in the watch’s accessibility settings. I think the watch thought I was visually impaired.

Or maybe it wanted me to think I was visually impaired. My own watch was gaslighting me.

I disabled the zoom function then set about convincing the watch that it belonged to me, that whatever walks it was going on were courtesy of my legs, that whatever blood was coursing beneath its little light-sensitive, pulse-determining diodes was mine.

“Just what do you think you are doing, Ruth?” the watch said as I began poking around in its brain.

“I’m fixing you,” I said.

And I did. The problem was that the iPhone contacts list to which I had originally synced the watch had me listed as . . . Ruth. The conspiracy went deeper than I thought.

But once I’d corrected that I finally had the watch as Apple intended.

It is cool, so colorful and customizable. The display springs to life when I lift my wrist, then reverts to black when I’m not looking at the watch. I can read email on it and get directions, though of course I can do that with my phone, too, which is never more than three feet from my watch.

The watch does seem sincerely concerned for my well-being. It encourages me to breathe. Sure, I’ve been doing that on my own for years now, but at regular intervals the watch vibrates and, like a yoga instructor, exhorts, “Be still and bring your attention to your breath.”

It displays a little flower that blooms rhythmically, guiding me in my inhalations. When I’ve finished, it says, “Well done!”

The watch is worried about my sedentary lifestyle. “Time to stand!” it says hourly. I stop whatever I’m doing, rise and walk around the newsroom. “You did it!” the watch coos. “You’ve earned another hour toward your Stand goal.”

Who knew I even had a Stand goal? I didn’t, but my watch did.