https://www.washingtonpost.com/news/wonk/wp/2016/05/18/googles-new-artificial-intelligence-cant-understand-these-sentences-can-you/

‘No one else in sight’: Google’s AI pens poetry after reading romance novels

In a bid to make its artificial intelligence engine speak more naturally, Google trained its model on thousands of novels, including romance books. Now, the AI speaks in strange, “rather dramatic” sentences, reminiscent of the best (or worst) modern poetry.

In an unpublished paper, presented at an international conference on learning in May, Google Brain researchers and contributing academics detailed how they’re training their AI to speak like a human.

Scientists poured thousands of novels, including romance and fantasy books, into the AI’s neural network model, which is meant to mimic the human brain.

Then, the researchers gave the AI two different sentences from the books, and asked it to generate intermediate sentences that could create a logical progression from the first sentence to the last.

Here are some examples of what the AI created. The first and last sentences in bold were provided to the engine, and the sentences in between were generated by the AI.


The researchers note that the intermediate sentences are “almost always grammatical, and often contain consistent topic, vocabulary and syntactic information in local neighbourhoods as they interpolate between the endpoint sentences.”

“Because the model is trained on fiction, including romance novels, the topics are often rather dramatic,” the researchers add.
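The mechanism behind these examples can be sketched in a few lines. The code below is a toy illustration, not Google's model: it assumes a hypothetical encoder has already mapped each endpoint sentence to a latent vector, and shows only the linear interpolation that produces the intermediate points a decoder would turn back into the in-between sentences.

```python
# Toy sketch of the interpolation behind these examples (not Google's
# model). Assume a hypothetical encoder has mapped each endpoint
# sentence to a latent vector; the intermediate vectors below are the
# points a decoder would turn back into the in-between sentences.

def interpolate(z_start, z_end, steps):
    """Evenly spaced latent vectors from z_start to z_end, inclusive."""
    path = []
    for i in range(steps):
        t = i / (steps - 1)
        path.append([(1 - t) * a + t * b for a, b in zip(z_start, z_end)])
    return path

z_a = [0.0, 1.0]   # made-up latent code for the first bolded sentence
z_b = [1.0, 0.0]   # made-up latent code for the last bolded sentence

for z in interpolate(z_a, z_b, 5):
    print(z)       # each vector would be decoded into one sentence
```

Because nearby latent vectors decode to similar sentences, walking this line yields the "consistent topic and vocabulary in local neighbourhoods" the researchers describe.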

http://www.valuewalk.com/2016/05/top-5-free-apps-for-the-iphone-that-where-once-paid/

Top 5 Free Apps For The iPhone That Were Once Paid

If you’re an iPhone user and are tired of having to pay for the apps you want, there is no better news than a free app. Well, today I’ve searched for as many iOS apps as I could find for the iPhone and discovered around ten different free apps. Below I’m going to tell you all about each one and give you a link so that you can go and download it.

Free apps for the iPhone

Top 5 Free Apps For The iPhone

Once you download the free app or even apps of your choice from the list below, you will be able to use them forever! Now, if you’re a developer and you want to bring some attention to your work, why not offer the readers here at ValueWalk a free app? You never know, I may eventually review it in full!

So here we go, take your pick from as many of the free apps below as you like.

Makanim – Multi-Touch


First on my list of free apps is Makanim, an art and graphics generation app which, according to its developer, will provide you with pure visual stimulation! Featuring a high level of customization, it is certainly a color and image manipulation app worth trying.

You can take a more detailed look and download it for your iPhone here.

Probably not the most glamorous of the free apps, but it does serve a purpose. If you download and install this app you will be able to simply tap and call anyone in your contacts list. And in fact this One Touch Dial app offers much more than a faster way to call, as it will also allow you to send text messages, make Skype calls, post to Instagram and connect to your social media accounts, all without ever having to leave the app.

If this app sounds interesting you can take a look at it some more here.

Next on this list of free apps is Place finder. It is a navigation and weather app that offers something different in the form of searches. So if you want to know where your nearest hotels, schools, free parking spots, doctors or restaurants are nearby, the app will find them for you as well as direct you to whichever one you choose to go to.

You can download the app from here.

LensLight was added to the list of free apps available for your iPhone today; exactly how long it will stay that way I don’t know, so you should probably take a look soon! This app is considered by many to be one of the ultimate apps on iOS for adding gorgeous lighting effects to pictures. Its list of effects is quite extensive, so here are but a few of the interactive lights available: bokeh, lens flares, light leaks, spotlights, rainbow effects and more.

This app is available from the App Store here.

Now this game is only free for a limited time; exactly how much time I don’t know, so why not take advantage of a free game while you can.

This is an immersive version of the ancient game of Mahjong, in which you pair up tiles to quickly dismantle hundreds of different layouts and gather pearls so that you can buy special abilities. You can also make use of a range of special power-ups, earn rewards and generally enjoy what is an addictive but beautiful game.

If playing this engrossing game interests you, download it from the App Store here.

Final Thoughts

So now that you’ve seen what’s available for today, go ahead and make your choice. Choose one of these free apps and give it a try; after all, what have you got to lose? They’re all free. Just uninstall one if you don’t like it and move on to the next.

http://www.afr.com/technology/technology-companies/google/googles-artificial-intelligence-in-new-allo-app-will-answer-all-your-questions-20160518-goyh9s

Google’s artificial intelligence in new Allo app will answer all your questions

Google is turning to artificial intelligence to make sure people keep using its search engine, even if they’re not spending as much time on the Web and personal computers.

The Alphabet division unveiled a new mobile messaging application on Wednesday called Allo, containing a digital personal assistant based on the AI technology that powers other Google services like Inbox.

At its I/O developer conference near its Silicon Valley headquarters, the company also showed off a voice-based search device called Google Home that uses the same assistant technology to answer questions when people are in their houses, a potentially potent rival to Amazon.com’s popular Echo gadget.

Chief executive Sundar Pichai said the goal was to develop an “ongoing two-way dialogue with Google” and build, for billions of people, their own “individual Google.” The CEO sees the Google digital assistant as an “ambient experience that extends across devices.”

Google became one of the world’s most valuable companies by making a search engine that sucks in billions of queries people type into web browsers on PCs and phones. Google sells ads based on those indications of intent and desire. But that search advertising money machine is at risk as computing evolves and gives people new ways to find what they want — and new avenues for competing companies to satisfy those wants.

The new products unveiled on Wednesday — and future ones using the same Google AI technology — give the company a chance to keep its search engine relevant in an era of new, connected devices.

CONVERSATIONAL INTERFACE

With the future of search — and intent-based advertising — up for grabs again, AI has become a big strategic area of investment for many technology companies. The bet is that whoever makes the most engaging and useful digital personal assistant, also known as the conversational interface, will control the layer between a person and their digital life, and collect the most revenue and profit from being that privileged broker.

Google has been working on artificial intelligence — technology that lets computers teach themselves about the world — for more than 15 years. The company hopes this expertise can help it build conversational computing products that beat the competition.

“It’s absolutely strategic,” said Scott Huffman, a vice president of engineering for search. “If you think about this simple idea of having a conversation, that is the interface that all the people around you have.”

Google hopes that if it makes it easier for people to access its services, they’ll use them more. That’s a pattern it has seen with other technology initiatives. “Every time we’ve had an improvement in voice recognition we see a corresponding jump in usage,” Huffman said.

Conversational computing is a crowded field. Amazon’s Echo, which has been a bestseller for the company, lets people use their voice rather than type to search for things and order them from Amazon’s online store and play music through Amazon-owned services, cutting out Google. Microsoft’s Cortana AI assistant embedded in Windows 10 lets people ask questions that the company answers via its Bing search engine. Facebook’s digital assistant, M, uses some AI to let people get answers to questions and perform actions such as ordering flowers, and its recently unveiled chatbot platform gives companies a way to chat directly with consumers, no Googling required.

ALLO GOOGLE

Over time, Google said it will develop other products and services using the same digital assistant, which will stay with people across devices, and remember their habits, Huffman said. That will let Google’s search and other services follow people from smartphones and smart watches into their car and homes. Google plans to add more AI capabilities to its assistant, some of which won’t necessarily appear in its main search engine, Huffman said.

The company’s Allo messaging app will come out this summer. It will suggest responses to messages by reading and understanding people’s text conversations. A contact named “@google” can be summoned by users to provide AI-powered services, like finding restaurants and booking tables, or searching for movies.

The AI in Allo is based on technology already deployed in Google’s Inbox program, which reads through e-mails and suggests appropriate replies. It also understands images sent in text messages, using its AI to look at what is in the picture and suggest its own comments.

The more people use Allo the better the AI will get. The system works by converting words and images into sequences of numbers explicable to Google’s machine intelligence, letting it develop intuition so it can guess the word “dog” is semantically similar to puppy, or that the appropriate response to a picture of someone skydiving is “brave.”
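That idea of "converting words into sequences of numbers" can be made concrete with a toy example. The three-dimensional vectors below are invented for illustration (real systems learn hundreds of dimensions from large text corpora), but the similarity measure, cosine similarity, is the standard one:

```python
# Toy illustration of "converting words into sequences of numbers":
# each word becomes a vector, and semantic similarity falls out of the
# geometry. These 3-d vectors are invented for the example; real models
# learn hundreds of dimensions from large text corpora.
import math

embeddings = {
    "dog":   [0.90, 0.80, 0.10],
    "puppy": [0.85, 0.90, 0.15],
    "car":   [0.10, 0.20, 0.95],
}

def cosine(u, v):
    """Cosine similarity: 1.0 for parallel vectors, near 0 for unrelated."""
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(b * b for b in v))
    return dot / (norm_u * norm_v)

print(cosine(embeddings["dog"], embeddings["puppy"]))  # high: near-synonyms
print(cosine(embeddings["dog"], embeddings["car"]))    # much lower
```

The same geometry underlies the skydiving example: an image embedding near the learned representation of "brave" makes that word a plausible suggested reply.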

In a recent demo, Erik Kay, a Google director of engineering for communications products, took a photo of colleague Amit Fulay, who posed with an exaggerated grin. Kay texted the photo to Amit using Allo, and on Amit’s phone the AI studied the photo, noticed the grin and came up with the response “sunny smile :-)”. These kinds of automated responses are meant to give people a more satisfying conversation, but have the additional benefit of generating more data for Google to use to further develop its AI.

GOOGLE HOME

After Allo, the company plans to release the Google Home device that people can speak to. The gadget will play music, communicate with other Google devices, and answer questions using Google’s AI assistant and its search engine, along with managing other Google products like Calendar and Gmail. People will summon it by using the same call out — “OK, Google” — that is used in other existing Google mobile apps. “Hey, Google” will also work.

“We don’t necessarily have to be first every time, that’s not actually our goal, but we want to be the most scalable solution,” said Rishi Chandra, a vice president of product management for Google, noting the product is “not an assistant that does three things but it can really do anything.”

Over time, Google said its AI will gain new capabilities. The company plans to give it a more flexible memory, so when people have a conversation where a friend references their home address, it will learn to associate that location with them. A technology called “Expander” will help the technology work in multiple languages, with insights gleaned from AI deployed in one language feeding the intelligence of the software in another tongue.

But it will take time for the AI to be able to anticipate and respond to every request, said Huffman. “It’s nice job security for us because it’s going to take it a while to let it do anything people want to do.”

Bloomberg

 

http://unews.utah.edu/language-myth-buster/

Credit: Cambridge University Press

In her book, published by Cambridge University Press, Kaplan uses research, case studies, examples and facts to explain why popular and widely believed myths about language just aren’t true.

Myth: Women talk more than men.

“This is often framed as a bad thing — that women talk too much,” said Kaplan. “Globally and historically, the common view has been that women’s ways of speaking — whatever we think they are — are inferior to men’s.”

Kaplan explains that researchers have studied men and women in many settings and the most common finding is that men talk more. But it also depends on the situation: how well people know each other, what the task is, what kind of power people have, and so on. The best study of overall talkativeness, according to Kaplan, had 400 college students wear recording devices for a few days, and it found no difference between the average words per day for men and women.

Myth: Texting makes you illiterate because of all the abbreviations used.

In her book, Kaplan discusses three reasons why this myth isn’t true. First, the types of abbreviations used in texting aren’t unique. Acronyms like “LOL” are informal, but they aren’t inherently bad – people use abbreviations like MIT, NCAA and ASAP all the time. Second, most text messages aren’t abbreviated very much at all. Many people take pride in not using abbreviations. Third, researchers have compared people who text vs. people who don’t, and though the results are somewhat mixed, there’s just no good evidence that texting lowers your IQ or harms literacy.

Myth: Sign language is pantomime.

Kaplan clarifies that ASL is a distinct language with its own grammar and vocabulary. Signers process sign language with the same regions of the brain that hearing people use for spoken language.

“Children learning to sign go through the same stages as those learning to speak, and deaf children babble with their hands instead of their mouths. Some very young hearing children confuse ‘you’ and ‘me’ for a while, and some very young deaf children do the same — even though ‘you’ and ‘me’ in ASL involve pointing to yourself or the other person. This is evidence that the pointing sign is a pronoun, not ordinary pointing.”

In her book, Kaplan debunks these and many more common misconceptions about language and challenges readers to rethink what they currently know about communication.

http://www.kurzweilai.net/machine-learning-outperforms-physicists-in-experiment

Machine learning outperforms physicists in experiment

May 16, 2016

The experiment, featuring the small red glow of a BEC trapped in infrared laser beams (credit: Stuart Hay, ANU)

Australian physicists have used an online optimization process based on machine learning to produce effective Bose-Einstein condensates (BECs) in a fraction of the time it would normally take the researchers.

A BEC is a state of matter of a dilute gas of atoms trapped in a laser beam and cooled to temperatures just above absolute zero. BECs are extremely sensitive to external disturbances, which makes them ideal for research into quantum phenomena or for making very precise measurements such as tiny changes in the Earth’s magnetic field or gravity.

The experiment, developed by physicists from ANU, University of Adelaide and UNSW ADFA, demonstrated that “machine-learning online optimization” can discover optimized condensation methods “with less experiments than a competing optimization method and provide insight into which parameters are important in achieving condensation,” the physicists explain in an open-access paper in the Nature group journal Scientific Reports.
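The optimization loop itself can be sketched. The code below is a minimal illustration of the general technique, not the team's published implementation: a tiny Gaussian-process regressor is fit to (parameter, quality) observations, and the next "experiment" is run where the model's upper confidence bound is highest. The one-parameter quality curve is a made-up stand-in for a real BEC measurement.

```python
# Minimal sketch of Gaussian-process online optimization (an illustration
# of the general technique, not the ANU team's code). A tiny GP with an
# RBF kernel models measured "BEC quality" as a function of one ramp
# parameter; each round, the next experiment is run where the model's
# upper confidence bound (mean + 2 * std) is highest.
import math

def experiment(x):
    """Hypothetical quality curve standing in for a real BEC measurement."""
    return math.exp(-((x - 0.7) ** 2) / 0.02)

def rbf(a, b, length=0.15):
    """Squared-exponential kernel."""
    return math.exp(-((a - b) ** 2) / (2 * length ** 2))

def solve(A, b):
    """Solve A x = b by Gaussian elimination with partial pivoting."""
    n = len(b)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for c in range(n):
        p = max(range(c, n), key=lambda r: abs(M[r][c]))
        M[c], M[p] = M[p], M[c]
        for r in range(c + 1, n):
            f = M[r][c] / M[c][c]
            for k in range(c, n + 1):
                M[r][k] -= f * M[c][k]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][k] * x[k] for k in range(r + 1, n))) / M[r][r]
    return x

def gp_predict(xs, ys, x):
    """Posterior mean and variance at x for a zero-mean GP.

    Recomputed from scratch on every call; fine for a sketch."""
    n = len(xs)
    K = [[rbf(xs[i], xs[j]) + (1e-6 if i == j else 0.0) for j in range(n)]
         for i in range(n)]
    k_star = [rbf(xi, x) for xi in xs]
    alpha = solve(K, ys)
    v = solve(K, k_star)
    mean = sum(k_star[i] * alpha[i] for i in range(n))
    var = max(1e-12, 1.0 - sum(k_star[i] * v[i] for i in range(n)))
    return mean, var

xs = [0.1, 0.9]                           # two seed measurements
ys = [experiment(x) for x in xs]

def ucb(x):
    mean, var = gp_predict(xs, ys, x)
    return mean + 2.0 * math.sqrt(var)

grid = [i / 100 for i in range(101)]
for _ in range(8):                        # eight machine-chosen experiments
    nxt = max(grid, key=ucb)
    xs.append(nxt)
    ys.append(experiment(nxt))

best = xs[ys.index(max(ys))]
print(best)  # the learner homes in near the true optimum at 0.7
```

Because the GP models the whole parameter range from a handful of measurements, the learner needs far fewer trials than blind search, which is the effect behind the paper's reported 10-fold reduction in iterations.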

Faster, cheaper than a physicist

Optical dipole trap used in the experiment, showing the three laser beams and the condensate (red-yellow oval in blue square) (credit: P. B. Wigley et al./Scientific Reports)

The team cooled the gas to around 5 microkelvin. To further cool down the trapped gas (containing about 40 million rubidium atoms) to on the order of nanokelvin*, they then handed control of the three laser beams** over to the machine-learning program.

The physicists were surprised by the clever methods the system came up with to create a BEC, like changing one laser’s power up and down, and compensating with another laser.

“I didn’t expect the machine could learn to do the experiment itself, from scratch, in under an hour,” said co-lead researcher Paul Wigley from ANU Research School of Physics and Engineering. “A simple computer program would have taken longer than the age of the universe to run through all the combinations and work this out.”

Wigley suggested that one could make a working device to measure gravity that you could take in the back of a car, and the AI would automatically recalibrate and fix itself.

“It’s cheaper than taking a physicist everywhere with you,” he said.

* Billionth of a degree above absolute zero — where a phase transition occurs, and a macroscopic number of atoms start to occupy the same quantum state, called Bose-Einstein condensation.

** The 1064 nm beam is controlled by varying the current to the laser, while the 1090 nm beam is controlled using the current and a waveplate rotation stage combined with a polarizing beamsplitter to provide additional power attenuation while maintaining mode stability.


Abstract of Fast machine-learning online optimization of ultra-cold-atom experiments

We apply an online optimization process based on machine learning to the production of Bose-Einstein condensates (BEC). BEC is typically created with an exponential evaporation ramp that is optimal for ergodic dynamics with two-body s-wave interactions and no other loss rates, but likely sub-optimal for real experiments. Through repeated machine-controlled scientific experimentation and observations our ‘learner’ discovers an optimal evaporation ramp for BEC production. In contrast to previous work, our learner uses a Gaussian process to develop a statistical model of the relationship between the parameters it controls and the quality of the BEC produced. We demonstrate that the Gaussian process machine learner is able to discover a ramp that produces high quality BECs in 10 times fewer iterations than a previously used online optimization technique. Furthermore, we show the internal model developed can be used to determine which parameters are essential in BEC creation and which are unimportant, providing insight into the optimization process of the system.

http://www.kurzweilai.net/rapid-eye-movement-sleep-keystone-of-memory-formation

Rapid eye movement sleep (dreaming) shown necessary for memory formation

May 16, 2016

Inhibition of medial septum GABA neurons during rapid eye movement (REM) sleep reduces theta rhythm (a characteristic of REM sleep). Schematic of the in vivo recording configuration: an optic fiber delivered orange laser light to the medial septum part of the brain, allowing for optogenetic inhibition of medial septum GABA neurons while recording the local field potential signal from electrodes implanted in hippocampus area CA1. (credit: Richard Boyce et al./Science)

A study published in the journal Science by researchers at the Douglas Mental Health University Institute at McGill University and the University of Bern provides the first evidence that rapid eye movement (REM) sleep — the phase where dreams appear — is directly involved in memory formation (at least in mice).

“We already knew that newly acquired information is stored into different types of memories, spatial or emotional, before being consolidated or integrated,” says Sylvain Williams, a researcher and professor of psychiatry at McGill*. “How the brain performs this process has remained unclear until now. We were able to prove for the first time that REM sleep (dreaming) is indeed critical for normal spatial memory formation in mice,” said Williams.

Dream quest

Hundreds of previous studies have tried unsuccessfully to isolate neural activity during REM sleep using traditional experimental methods. In this new study, the researchers instead used optogenetics, which enables scientists to precisely target a population of neurons and control its activity by light.

“We chose to target [GABA neurons in the medial septum] that regulate the activity of the hippocampus, a structure that is critical for memory formation during wakefulness and is known as the ‘GPS system’ of the brain,” Williams says.

To test the long-term spatial memory of mice, the scientists trained the rodents to spot a new object placed in a controlled environment alongside two objects of similar shape and volume. Mice spontaneously spend more time exploring a novel object than a familiar one, demonstrating learning and recall.

Shining orange laser light on medial septum (MS) GABA neurons during REM sleep reduces the frequency and power (purple section) of neuron signals in the dorsal CA1 area of the hippocampus (credit: Richard Boyce et al./Science)

When these mice were in REM sleep, however, the researchers used light pulses to turn off their memory-associated neurons to determine whether this affected memory consolidation. The next day, the same rodents did not succeed at the spatial memory task learned on the previous day. Compared to the control group, their memory seemed erased, or at least impaired.

“Silencing the same neurons for similar durations outside of REM episodes had no effect on memory. This indicates that neuronal activity specifically during REM sleep is required for normal memory consolidation,” says the study’s lead author, Richard Boyce, a PhD student.

Implications for brain disease

REM sleep is understood to be a critical component of sleep in all mammals, including humans. Poor sleep quality is increasingly associated with the onset of various brain disorders such as Alzheimer’s and Parkinson’s disease.

In particular, REM sleep is often significantly perturbed in Alzheimer’s diseases (AD), and results from this study suggest that disruption of REM sleep may contribute directly to memory impairments observed in AD, the researchers say.

This work was partly funded by the Canadian Institutes of Health Research (CIHR), the Natural Science and Engineering Research Council of Canada (NSERC), a postdoctoral fellowship from Fonds de la recherche en Santé du Québec (FRSQ) and an Alexander Graham Bell Canada Graduate scholarship (NSERC).

* Williams’ team is also part of the CIUSSS de l’Ouest-de-l’Île-de-Montréal research network. Williams co-authored the study with Antoine Adamantidis, a researcher at the University of Bern’s Department of Clinical Research and at the Sleep Wake Epilepsy Center of the Bern University Hospital.


Abstract of Causal evidence for the role of REM sleep theta rhythm in contextual memory consolidation

Rapid eye movement sleep (REMS) has been linked with spatial and emotional memory consolidation. However, establishing direct causality between neural activity during REMS and memory consolidation has proven difficult because of the transient nature of REMS and significant caveats associated with REMS deprivation techniques. In mice, we optogenetically silenced medial septum γ-aminobutyric acid–releasing (MSGABA) neurons, allowing for temporally precise attenuation of the memory-associated theta rhythm during REMS without disturbing sleeping behavior. REMS-specific optogenetic silencing of MSGABA neurons selectively during a REMS critical window after learning erased subsequent novel object place recognition and impaired fear-conditioned contextual memory. Silencing MSGABA neurons for similar durations outside REMS episodes had no effect on memory. These results demonstrate that MSGABA neuronal activity specifically during REMS is required for normal memory consolidation.

http://www.kurzweilai.net/ibm-scientists-achieve-storage-memory-breakthrough-with-pcm

IBM scientists achieve storage-memory breakthrough with PCM

PCM combines speed of DRAM and non-volatility of flash, providing fast, easy storage for the exponential growth of data from mobile devices, the Internet of Things, and cloud computing
May 16, 2016

For the first time, scientists at IBM Research have demonstrated reliably storing 3 bits of data per cell using a relatively new memory technology known as phase-change memory (PCM). In this photo, the experimental multi-bit PCM chip used by IBM scientists is connected to a standard integrated circuit board. The chip consists of a 2 × 2 Mcell array with a 4-bank interleaved architecture. The memory array size is 2 × 1000 μm × 800 μm. The PCM cells are based on doped-chalcogenide alloy and were integrated into the prototype chip, serving as a characterization vehicle in 90 nm CMOS baseline technology. (credit: IBM Research)

Scientists at IBM Research have demonstrated — for the first time (today, May 17), at the IEEE International Memory Workshop in Paris — reliably storing 3 bits of data per cell in a 64k-cell array in a memory chip*, using a relatively new memory technology known as phase-change memory (PCM). Previously, scientists at IBM and elsewhere successfully demonstrated the ability to store only 1 bit per cell in PCM.

The current memory landscape includes DRAM, hard disk drives, and flash. But in the last several years, PCM has attracted the industry’s attention as a potential universal memory technology based on its combination of read/write speed, endurance, non-volatility and density. For example, PCM doesn’t lose data when powered off, unlike DRAM, and the technology can endure at least 10 million write cycles, compared to an average flash USB stick, which tops out at 3,000 write cycles.

IBM suggests this research breakthrough provides fast and easy storage to capture the exponential growth of data from mobile devices and the Internet of Things.

Scientists have long been searching for a universal, non-volatile memory technology with performance far superior to flash — today’s most ubiquitous non-volatile memory technology. The benefits of such a memory technology would allow computers and servers to boot instantaneously and would significantly enhance the overall performance of IT systems. A promising contender is PCM, which can write and retrieve data 100 times faster than Flash and enable high storage capacities, and like flash, not lose data when the power is turned off. Unlike flash, PCM is also very durable and can endure at least 10 million write cycles, compared to current enterprise-class flash at 30,000 cycles or consumer-class flash at 3,000 cycles. While 3,000 cycles will outlive many consumer devices, 30,000 cycles are orders of magnitude too low to be suitable for enterprise applications. (credit: IBM Research)

IBM scientists envision standalone PCM as well as hybrid applications that combine PCM and flash storage, with PCM as an extremely fast cache. For example, a mobile phone’s operating system could be stored in PCM, enabling the phone to launch in a few seconds. In the enterprise space, entire databases could be stored in PCM for blazing fast query processing in time-critical online applications, such as financial transactions.

Machine learning algorithms using large datasets will also see a speed boost by reducing the latency overhead when reading the data between iterations.

How PCM Works: answering the grand challenge of combining properties of DRAM and flash

PCM materials exhibit two stable states: the amorphous (without a clearly defined structure) and crystalline (with structure) phases (low and high electrical conductivity, respectively).

To store a “0” or a “1” bit on a PCM cell, a high or medium electrical current is applied to the material. A “0” can be programmed to be written in the amorphous phase and a “1” in the crystalline phase, or vice versa. Then, to read the bit back, a low voltage is applied. This is how re-writable (but slower) Blu-ray discs** store videos.

Phase-change memory (PCM) is one of the most promising candidates for next-generation non-volatile memory technology. The cross-sectional tunneling electron microscopy (TEM) image of a mushroom-type PCM cell is shown in this photo. The cell consists of a layer of phase-change material, such as germanium antimony telluride (GST), sandwiched between a bottom and a top electrode. In this architecture, the bottom electrode has a radius (denoted as rE ) of approx. 15 nm and is fabricated by sub-lithographic means. The top electrode has a radius in excess of 100 nm and the thickness of the phase change layer is approx. 100 nm. A transistor or a diode is typically employed as the access device. (credit: IBM Research — Zurich)

“Phase change memory is the first instantiation of a universal memory with properties of both DRAM and flash, thus answering one of the grand challenges of our industry,” said Haris Pozidis, PhD., an author of the workshop paper and the manager of non-volatile memory research at IBM Research–Zurich. “Reaching 3 bits per cell is a significant milestone because at this density, the cost of PCM will be significantly less than DRAM and closer to flash.”

To achieve multi-bit storage, IBM scientists have developed two innovative enabling technologies: a set of drift-immune cell-state metrics and drift-tolerant coding and detection schemes***. “Combined, these advancements address the key challenges of multi-bit PCM, including drift, variability, temperature sensitivity, and endurance cycling,” said IBM Fellow Evangelos Eleftheriou, PhD.
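The multi-bit idea can be illustrated with a toy sketch (this is not IBM's actual cell-state metrics or coding scheme): three bits per cell means eight distinguishable analog levels, detection means picking the nearest reference level, and drift tolerance means letting those references follow a common shift in the read-back values.

```python
# Toy sketch of 3-bits-per-cell storage (not IBM's actual cell-state
# metrics or coding scheme). Eight reference levels encode the eight
# 3-bit symbols; a read detects the nearest level; and, echoing the
# drift-tolerant detection idea, the references are shifted by an
# offset estimated from the read-back values themselves.

LEVELS = [i / 7 for i in range(8)]  # 8 nominal analog cell states

def write(bits):
    """Map a 3-bit tuple to its nominal analog level."""
    return LEVELS[bits[0] * 4 + bits[1] * 2 + bits[2]]

def read(value, refs):
    """Detect the stored symbol as the nearest reference level."""
    i = min(range(8), key=lambda k: abs(value - refs[k]))
    return ((i >> 2) & 1, (i >> 1) & 1, i & 1)

data = [(1, 0, 1), (0, 1, 1), (1, 1, 1), (0, 0, 0)]
cells = [write(b) for b in data]

# Simulate a uniform drift affecting every cell's read-back value.
drifted = [v + 0.04 for v in cells]

# Adaptive detection: re-anchor the references to the observed values.
# (Assumes at least one cell holds the lowest symbol, which is true here.)
offset = min(drifted)
refs = [level + offset for level in LEVELS]
decoded = [read(v, refs) for v in drifted]
print(decoded == data)  # prints True: drifted reads still decode correctly
```

With eight levels the gap between adjacent states is only 1/7 of the full range, which is why drift and noise that are tolerable at 1 bit per cell become the central engineering challenge at 3 bits per cell.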

IBM scientists have also demonstrated, for the first time, phase-change memory attached to POWER8-based servers.

* At elevated temperatures and after 1 million endurance cycles.

** “Blu-ray Disc” is owned by the Blu-ray Disc Association (BDA)

*** More specifically, the new cell-state metrics measure a physical property of the PCM cell that remains stable over time, and are thus insensitive to drift, which affects the stability of the cell’s electrical conductivity with time. To provide additional robustness of the stored data in a cell over ambient temperature fluctuations, a novel coding and detection scheme is employed. This scheme adaptively modifies the level thresholds that are used to detect the cell’s stored data so that they follow variations due to temperature change. As a result, the cell state can be read reliably over long time periods after the memory is programmed, thus offering non-volatility.





Abstract of Multilevel-Cell Phase Change Memory: A Viable Technology

In order for any non-volatile memory (NVM) to be considered a viable technology, its reliability should be verified at the array level. In particular, properties such as high endurance and at least moderate data retention are considered essential. Phase-change memory (PCM) is one such NVM technology that possesses highly desirable features and has reached an advanced level of maturity through intensive research and development in the past decade. Multilevel-cell (MLC) capability, i.e., storage of two bits per cell or more, is not only desirable as it reduces the effective cost per storage capacity, but a necessary feature for the competitiveness of PCM against the incumbent technologies, namely DRAM and Flash memory. MLC storage in PCM, however, is seriously challenged by phenomena such as cell variability, intrinsic noise, and resistance drift. We present a collection of advanced circuit-level solutions to the above challenges, and demonstrate the viability of MLC PCM at the array level. Notably, we demonstrate reliable storage and moderate data retention of 2 bits/cell PCM, on a 64 k cell array, at elevated temperatures and after 1 million SET/RESET endurance cycles. Under similar operating conditions, we also show feasibility of 3 bits/cell PCM, for the first time ever.

references:

http://www.nytimes.com/2016/05/05/business/tesla-says-it-will-sharply-ramp-up-production-of-model-3.html?em_pos=small&emc=edit_ws_20160517&nl=wheels&nl_art=11&nlid=71705833&ref=headline&te=1&_r=1

Tesla Says It Will Sharply Ramp Up Production of Model 3

DETROIT — Tesla Motors said on Wednesday that it was confident it could accelerate production to meet high demand for its forthcoming Model 3 electric vehicle, despite the departure of two top manufacturing executives.

The carmaker’s chief executive, Elon Musk, said that Tesla expected to produce a total of 500,000 vehicles by 2018, two years sooner than previously announced.

The goal represents a huge leap from Tesla’s production of about 15,000 vehicles in the first three months of this year. But Mr. Musk expressed confidence that the company could meet the ambitious target, and begin filling more than 325,000 orders for the Model 3 by late next year.

“I think we have done a good job on the design and technology of our products,” Mr. Musk said in a conference call with analysts. “The big thing we need to achieve in the future is also to be the leader in manufacturing.”

The expansion will take place without the help of Greg Reichow, Tesla’s vice president for production, who is taking a leave of absence after five years with the company. Also leaving the company is John Ensign, who had been vice president for manufacturing since 2014.

Mr. Musk, who said he was recruiting new manufacturing executives, gave no reason for the unexpected departures.

“We’re confident that with the strength of the team, high-quality manufacturing at Tesla will continue,” he said.

The company said Mr. Reichow would assist in the transition.

“My belief in Tesla’s ability to successfully deliver great cars and inspire the world to drive electric remains as strong as ever,” Mr. Reichow said in a statement released by the company.

Tesla has had to confront quality concerns about its Model X sport utility vehicle, including a recall last month of all 2,700 vehicles to repair a latch that prevents rear seats from folding forward in a collision.

Mr. Musk did not share details of how Tesla would increase vehicle production, but he said in a letter to shareholders that the move “will likely require some additional capital” to finance the expansion.

The company is also building a major battery plant for its vehicles in Nevada, and expects to produce its first battery cells there by the end of this year.

An expansion is necessary to meet the extraordinary demand for the Model 3, which will be the company’s first electric vehicle aimed at mass-market consumers.

With a price of about $35,000 and a battery range of more than 200 miles, the Model 3 has generated enormous interest from potential customers. Tesla has received $1,000 deposits for more than 325,000 vehicles.

One industry analyst said converting all the orders into sales could be a difficult task, particularly if new electric cars from other automakers hit the market before the Model 3.

“While pre-orders suggest demand could be there, with recent executive departures and more competitive offerings, this may be tough to achieve,” Efraim Levy at S&P Global Market Intelligence said in a research note.

Mr. Musk reiterated earlier forecasts that Tesla would sell up to 90,000 Model X S.U.V.s and Model S luxury sedans this year.

For the first quarter, Tesla reported wider losses, but larger revenues, than in the same period in 2015. The company said that it lost about $283 million, on revenues of $1.15 billion, in the first three months of this year. During the same period last year, it reported revenues of about $940 million and a loss of $154 million.

http://www.strokesmart.org/Healthy-Diet

5 Ways Nutrition Can Help Prevent Stroke

Posted by Christa Knox May 12 2016

Most strokes occur when a blockage in a blood vessel cuts off the supply of oxygen to part of the brain. To lower your risk of blocked arteries and stroke, here are five ways nutrition can support your body:

1. Omega-3 Fatty Acids. Sources: Fish oil, fish, grass-fed meats, flaxseeds, and walnuts.

Omega-3s are essential fatty acids, meaning they are essential for life, but our bodies cannot make them. Look for fish oil that contains a high amount of DHA and EPA, and aim to get between 200 and 1,000 mg/day through supplementation or by eating two to four servings of fish a week.

2. Healthy Fats. Sources: Coconut, avocado, cold water fish (i.e., salmon, sardines, halibut), nuts, seeds, and olive oil.

Healthy fats can actually lower your LDL “bad” cholesterol levels and heart disease risk. In fact, a meta-analysis of prospective studies that followed participants for five to 23 years found no significant evidence that dietary saturated fat is associated with an increased risk of coronary heart disease, stroke, or cardiovascular disease.

3. Cut out, or cut back on, sugar and wheat. Harmful sources: soda, baked goods, candy, high fructose corn syrup, packaged foods, snacks, juice, and salad dressings.

Swap refined sugar for whole food sweeteners: coconut sugar, honey, maple syrup, fruit. Read labels and steer clear of anything with a sweetener (“healthy” or not) listed in the first five ingredients. Eliminate anything with artificial sweeteners. Refined grains convert to sugar, and studies have shown that glucose can contribute to inflammation and hardened arteries. Stick to whole grains, such as oatmeal, as high-fiber carbohydrates are suggested to produce less inflammation.

4. Get plenty of antioxidants. Sources: Leafy greens, green tea, spices, brightly-colored produce, dark chocolate, coffee, and nuts.

Not only do foods high in antioxidants contain a plethora of vitamins, minerals, and compounds that support your whole system, but they can also prevent the oxidation of your LDL “bad” cholesterol, which reduces your risk of atherosclerosis and stroke.

5. Adopt the Mediterranean Diet, which encapsulates all four recommendations above and is associated with a lower risk of incident coronary heart disease and stroke. The traditional Mediterranean diet is rich in fruits, spices, vegetables, fish, and healthy fats, and is low in refined grains and sweets.

And remember, don’t become overwhelmed about your daily intake or by foods that have been vilified such as potatoes, healthy fats, rice, coffee, fish, tropical fruits, and eggs. At the end of the day, just eat real food.

Christa Knox, MA, MScN, is a stroke survivor and functional nutritionist in Portland, Ore.