https://futurism.com/intel-is-releasing-a-processor-thats-built-for-neural-networks/

Intel is Releasing a Processor That’s Built for Neural Networks

IN BRIEF

Intel is releasing a new microprocessor before the year ends, designed to run artificial neural networks more efficiently than today’s general-purpose chips. Intel says the Nervana, as it calls the chip, will revolutionize AI computing, especially for businesses.

There’s no denying it: artificial intelligence (AI) is changing how companies do business, with machine learning powering more and more systems and products, from mobile phones to online services. American tech giant Intel wants to stay ahead of this shift, so it has developed a number of technologies designed to advance AI capabilities, including a microprocessor specially designed to run artificial neural networks.

Formerly codenamed Lake Crest, Intel’s first-generation silicon processor for neural networks is called the Nervana Neural Network Processor (NNP). It is built to handle the intensive computational requirements of running deep neural nets.

Cleverly named, the Nervana NNP is powerful. Image Credit: Intel

“We have multiple generations of Intel Nervana NNP products in the pipeline that will deliver higher performance and enable new levels of scalability for AI models,” CEO Brian Krzanich wrote in a press announcement. Krzanich first announced the Nervana during The Wall Street Journal’s D.Live event on Tuesday. He continued in the announcement that “This puts us on track to exceed the goal we set last year of achieving 100 times greater AI performance by 2020.”

Intel plans to ship this hardware to a small number of its partners before the year ends, but it’s also easily accessible through the company’s Nervana Cloud Service. With this processor, Intel “promises to revolutionize AI computing across myriad industries,” Krzanich added. The company plans to apply the technology to health care, the automotive sector, weather services, and social media. Alongside the Nervana, Intel has also made advances in neuromorphic and quantum computing.

https://www.axios.com/the-pitch-for-a-health-darpa-2498314226.html

The pitch for a health DARPA

A lung on a chip developed by Harvard University Wyss Institute on a DARPA grant. Photo: DARPA

The Defense Advanced Research Projects Agency is known for creative, high-tech research projects that often sound like science fiction. Now, a philanthropic heavyweight and a former DARPA program director together are pushing for the federal government’s health department to have its own version.

The big questions: How would it fit into a health department that also includes the National Institutes of Health? And how would pharmaceutical and other companies be incentivized to take products to market? The answers, and some clever navigating of the potential tensions, could help determine whether an Advanced Research Projects Agency for Health, or HARPA, ever gets off the ground.

The players: Bob Wright, the former CEO of NBC and founder of Autism Speaks, is the main force behind the proposal. He’s tapped Geoffrey Ling, a neurologist at Johns Hopkins and the former director of DARPA’s Biological Technologies Office, to develop the proposed agency and, Wright hopes, lead it.

DARPA gave us the internet, and both say it is worth seeing what the same model could do for much-needed advances in detecting and treating cancers and other diseases.

How it could work: Ling maintains it would complement the discovery work done at NIH: “I’m not saying that HARPA is a panacea and is going to fill all the need areas but it is an approach I can see filling part of these need areas. In my mind, it is just a different way of doing business. It’s an entirely different philosophy.”

The details: They advocate setting up a semi-autonomous body directly under the Department of Health and Human Services — but independent from NIH.

  • HARPA, like DARPA, would be “performance-based, milestone-driven, timeline-driven with the efforts determined by the government,” Ling says.
  • It would center on contracts between the agency and researchers across academia, industry, and government.

Harvard’s David Walt, who is not involved in the proposal and chaired a now-defunct DARPA advisory council, says that if an ARPA-style health program is set up correctly and follows the DARPA model closely, it could benefit the health arena. But he points out that health care is a highly regulated environment. “I’d be excited about the prospect of bringing a DARPA-like approach to critical problems in health care but it isn’t the same as implementing a new device in the military.”

What NIH is doing: Taking science from the “bench to bedside” is within the NIH purview; director Francis Collins set up an institute to do just that. And, right now, the NIH can bypass the grant process and distribute funds through a DARPA-like arm called the Common Fund, whose 2017 budget is about $675 million.

About two-thirds of the work within that fund is toward goals set by the NIH with investigators coming up with ways to attempt to reach them, says Betsy Wilder, who directs the Office of Strategic Coordination at the NIH. DARPA also funds biotechnology projects and collaborates with NIH.

The ask: Wright and Ling want two separate chains of command under HHS and two separate budgets within the department.

  • Ling estimates the agency would require a budget of $2 billion to $3 billion, the equivalent of roughly 10% of the NIH’s $34 billion budget for this year.
  • “We’ll be the little sister to the NIH. No problem. But because of the different philosophy and because of the different approach, it needs to go up a separate chain of command. That is absolutely crucial because otherwise it’s going to be viewed as competitive and that just isn’t right. It’s synergistic,” Ling says.
  • The $6.3 billion 21st Century Cures Act authorized by Congress last year could help to launch HARPA, Wright says. Right now, $4.8 billion is slated to go to the NIH and $1 billion to states for opioid treatment and prevention.
  • Wright says HARPA’s success would depend on its ability to use NCI databases and other government assets.

Where it stands: “We’ve gone to the White House, we’ve gone to Congress, we’ve got bipartisan support,” Wright says.

Ultimately, it would require congressional authorization — but they’re asking the White House to launch it, potentially on a pilot basis. President Trump’s budget proposed cutting money for the NIH, and there is a general trend toward consolidating across the federal government, raising questions about where funding for a new program would come from.

http://www.kurzweilai.net/leading-brain-training-game-improves-memory-and-attention-better-than-competing-method

Leading brain-training game improves memory and attention better than competing method

October 18, 2017

EEGs taken before and after the training showed that the biggest changes occurred in the brains of the group that trained using the “dual n-back” method (right). (credit: Kara J. Blacker/JHU)

A leading brain-training game called “dual n-back” was significantly better in improving memory and attention than a competing “complex span” game, Johns Hopkins University researchers found in a recent experiment.*

These results, published Monday Oct. 16, 2017 in an open-access paper in the Journal of Cognitive Enhancement, suggest it’s possible to train the brain like other body parts — with targeted workouts to improve the cognitive skills needed when tasks are new and you can’t just rely on old knowledge and habits, says co-author Susan Courtney, a Johns Hopkins neuroscientist and professor of psychological and brain sciences.


Johns Hopkins University | The Best Way to Train Your Brain: A Game

The dual n-back game is a memory sequence test in which you must remember a constantly updating sequence of visual and auditory stimuli. As shown in a simplified version in the video above, participants saw squares flashing on a grid while hearing letters. But in the experiment, the subjects also had to remember if the square they just saw and the letter they heard were both the same as one round back.

As the test got harder, they had to recall squares and letters two, three, and four rounds back. The subjects also showed significant changes in brain activity in the prefrontal cortex, the critical region responsible for higher learning.

With the easier complex span game, there’s a distraction between items, but participants don’t need to continually update the previous items in their mind.
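To make the mechanics concrete, here is a minimal sketch of the dual n-back matching rule in Python. The `Stimulus` class and function names are illustrative, not taken from the study’s materials.

```python
from dataclasses import dataclass
from typing import List, Tuple

@dataclass
class Stimulus:
    position: int  # grid cell where the square flashed
    letter: str    # letter heard at the same moment

def n_back_matches(stream: List[Stimulus], n: int) -> List[Tuple[bool, bool]]:
    """For each round after the first n, report whether the square's position
    and/or the spoken letter matches the stimulus from n rounds back."""
    matches = []
    for i in range(n, len(stream)):
        pos_match = stream[i].position == stream[i - n].position
        letter_match = stream[i].letter == stream[i - n].letter
        matches.append((pos_match, letter_match))
    return matches

# At n=2: round 2 repeats round 0's position; round 3 repeats round 1's letter.
stream = [Stimulus(4, "C"), Stimulus(7, "K"), Stimulus(4, "B"), Stimulus(2, "K")]
print(n_back_matches(stream, n=2))  # [(True, False), (False, True)]
```

Raising `n` forces the player to keep updating a longer running buffer of both streams at once, which is exactly the continual-updating demand that the complex span game lacks.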

(You can try an online version of the dual n-back test/game here and of the digit-span test here. The training programs Johns Hopkins compared are tools scientists rely on to test the brain’s working memory, not the commercial products sold to consumers.)

30 percent improvement in working memory

The researchers found that the group that practiced the dual n-back exercise showed a 30 percent improvement in their working memory — nearly double the gains in the group using complex span. “The findings suggest that [the dual n-back] task is changing something about the brain,” Courtney said. “There’s something about sequencing and updating that really taps into the things that only the pre-frontal cortex can do, the real-world problem-solving tasks.”

The next step, the researchers say, is to figure out why dual n-back is so good at improving working memory, then figure out how to make it even more effective so that it can become a marketable or even clinically useful brain-training program.

* Scientists trying to determine if brain exercises make people smarter have had mixed results. Johns Hopkins researchers suspected the problem wasn’t the idea of brain training, but the type of exercise researchers chose to test it. They decided to compare directly the leading types of exercises and measure people’s brain activity before and after training; that had never been attempted before, according to lead author Kara J. Blacker, a former Johns Hopkins post-doctoral fellow in psychological and brain sciences, now a researcher at the Henry M. Jackson Foundation for Advancement of Military Medicine, Inc. For the experiment, the team assembled three groups of participants, all young adults. Everyone took an initial battery of cognitive tests to determine baseline working memory, attention, and intelligence. Everyone also got an electroencephalogram, or EEG, to measure brain activity. Then, everyone was sent home to practice a computer task for a month. One group used one leading brain exercise while the second group used the other. The third group practiced on a control task. Everyone trained five days a week for 30 minutes, then returned to the lab for another round of tests to see if anything about their brain or cognitive abilities had changed.


Abstract of N-back Versus Complex Span Working Memory Training

Working memory (WM) is the ability to maintain and manipulate task-relevant information in the absence of sensory input. While its improvement through training is of great interest, the degree to which WM training transfers to untrained WM tasks (near transfer) and other untrained cognitive skills (far transfer) remains debated and the mechanism(s) underlying transfer are unclear. Here we hypothesized that a critical feature of dual n-back training is its reliance on maintaining relational information in WM. In experiment 1, using an individual differences approach, we found evidence that performance on an n-back task was predicted by performance on a measure of relational WM (i.e., WM for vertical spatial relationships independent of absolute spatial locations), whereas the same was not true for a complex span WM task. In experiment 2, we tested the idea that reliance on relational WM is critical to produce transfer from n-back but not complex span task training. Participants completed adaptive training on either a dual n-back task, a symmetry span task, or on a non-WM active control task. We found evidence of near transfer for the dual n-back group; however, far transfer to a measure of fluid intelligence did not emerge. Recording EEG during a separate WM transfer task, we examined group-specific, training-related changes in alpha power, which are proposed to be sensitive to WM demands and top-down modulation of WM. Results indicated that the dual n-back group showed significantly greater frontal alpha power after training compared to before training, more so than both other groups. However, we found no evidence of improvement on measures of relational WM for the dual n-back group, suggesting that near transfer may not be dependent on relational WM. These results suggest that dual n-back and complex span task training may differ in their effectiveness to elicit near transfer as well as in the underlying neural changes they facilitate.

http://www.kurzweilai.net/alphago-zero-trains-itself-to-be-most-powerful-go-player-in-the-world

AlphaGo Zero trains itself to be most powerful Go player in the world

Self-taught “superhuman” AI already smarter than its makers
October 18, 2017

(credit: DeepMind)

DeepMind has just announced AlphaGo Zero, an evolution of AlphaGo, the first computer program to defeat a world champion at the ancient Chinese game of Go. Zero is even more powerful and is now arguably the strongest Go player in history, according to the company.

While previous versions of AlphaGo initially trained on thousands of human amateur and professional games to learn how to play Go, AlphaGo Zero skips this step. It learns to play from scratch, simply by playing games against itself, starting from completely random play.
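The principle of learning purely from self-play, with no human examples at all, can be illustrated at toy scale. The sketch below is emphatically not AlphaGo Zero’s algorithm (there is no neural network and no Monte Carlo tree search); it is a tabular self-play learner for the much simpler game of Nim, which nonetheless starts from random play and discovers good strategy on its own.

```python
import random

# Toy self-play learning on Nim: players alternately take 1-3 stones, and
# whoever takes the last stone wins. The agent plays against itself,
# starting from (mostly) random moves, and updates a value table from the
# outcomes of its own games -- self-generated training data, no human games.

def train(pile_size=10, episodes=30000, eps=0.2, seed=0):
    rng = random.Random(seed)
    q = {}  # q[(stones_left, move)] = estimated win chance for the mover
    for _ in range(episodes):
        stones = pile_size
        history = []  # (state, move) in the order played, players alternating
        while stones > 0:
            moves = [m for m in (1, 2, 3) if m <= stones]
            if rng.random() < eps:
                move = rng.choice(moves)  # explore
            else:
                move = max(moves, key=lambda m: q.get((stones, m), 0.5))
            history.append((stones, move))
            stones -= move
        # Whoever made the last move took the last stone and won; walking
        # backwards through the game, the outcome alternates between players.
        reward = 1.0
        for state, move in reversed(history):
            old = q.get((state, move), 0.5)
            q[(state, move)] = old + 0.1 * (reward - old)
            reward = 1.0 - reward
    return q

def best_move(q, stones):
    moves = [m for m in (1, 2, 3) if m <= stones]
    return max(moves, key=lambda m: q.get((stones, m), 0.5))

q = train()
# Perfect play leaves the opponent a multiple of 4 stones:
# from 5 take 1, from 6 take 2, from 7 take 3.
print(best_move(q, 5), best_move(q, 6), best_move(q, 7))
```

AlphaGo Zero replaces the lookup table with a deep network and the random exploration with tree search, but the feedback loop is the same shape: play yourself, score the result, update, repeat.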

(credit: DeepMind)

After just 3 days of training it surpassed AlphaGo Lee, defeating that previously published, champion-defeating version of AlphaGo by 100 games to 0, and it went on to surpass all previous versions within 40 days.

The achievement is described in the journal Nature today (Oct. 18, 2017).


DeepMind | AlphaGo Zero: Starting from scratch


Abstract of Mastering the game of Go without human knowledge

A long-standing goal of artificial intelligence is an algorithm that learns, tabula rasa, superhuman proficiency in challenging domains. Recently, AlphaGo became the first program to defeat a world champion in the game of Go. The tree search in AlphaGo evaluated positions and selected moves using deep neural networks. These neural networks were trained by supervised learning from human expert moves, and by reinforcement learning from self-play. Here we introduce an algorithm based solely on reinforcement learning, without human data, guidance or domain knowledge beyond game rules. AlphaGo becomes its own teacher: a neural network is trained to predict AlphaGo’s own move selections and also the winner of AlphaGo’s games. This neural network improves the strength of the tree search, resulting in higher quality move selection and stronger self-play in the next iteration. Starting tabula rasa, our new program AlphaGo Zero achieved superhuman performance, winning 100–0 against the previously published, champion-defeating AlphaGo.

https://www.raspberrypi.org/blog/n-o-d-es-handheld-linux-terminal/

N O D E’S HANDHELD LINUX TERMINAL

Fit an entire Raspberry Pi-based laptop into your pocket with N O D E’s latest Handheld Linux Terminal build.

N O D E

With interests in modding tech, exploring the boundaries of the digital world, and open source, YouTuber N O D E has become one to watch within the digital maker world. He maintains a channel focused on “the transformative power of technology.”

“Understanding that electronics isn’t voodoo is really powerful”, he explains in his Patreon video. “And learning how to build your own stuff opens up so many possibilities.”


The topics of his videos range from stripped-down devices, upgraded tech, and security upgrades, to the philosophy behind technology. He also provides weekly roundups of, and discussions about, new releases.

Essentially, if you like technology, you’ll like N O D E.

Handheld Linux Terminal v3

Subscribers to N O D E’s YouTube channel, of whom there are currently over 44,000, will have seen him documenting variations of this handheld build over the past year. By stripping down a Raspberry Pi 3, and incorporating a Zero W, he’s been able to create interesting projects while always putting functionality first.


With the third version of his terminal, N O D E has taken experiences gained from previous builds to create something of which he’s obviously extremely proud. And so he should be. The v3 handheld is impressively small considering he managed to incorporate a fully functional keyboard with mouse, a 3.5″ screen, and a fan within the 3D-printed body.


“The software side of things is where it really shines though, and the Pi 3 is more than capable of performing most non-intensive tasks,” N O D E goes on to explain. He demonstrates various applications running on Raspbian, plus other operating systems he has pre-loaded onto additional SD cards:

“I have also installed Exagear Desktop, which allows it to run x86 apps too, and this works great. I have x86 apps such as Sublime Text and Spotify running without any problems, and it’s technically possible to use Wine to also run Windows apps on the device.”

We think this is an incredibly neat build, and we can’t wait to see where N O D E takes it next!

http://markets.businessinsider.com/news/stocks/Anticipation-Builds-For-Synergy-Global-Forum-With-The-Addition-Of-Wolf-Of-Wall-Street-To-Its-Line-up-1004941160

Anticipation Builds For Synergy Global Forum With The Addition Of Wolf Of Wall Street To Its Line-up

NEW YORK, Oct. 19, 2017 /PRNewswire/ — Synergy Global Forum (Official Website) has once again added an iconic keynote speaker to its unprecedented line-up: author, motivational speaker, and former stockbroker Jordan Belfort. Belfort will speak on his personal journey, share strategies for building and motivating a winning sales force, and discuss his ground-breaking Straight Line Persuasion System.

This follows the conference’s announcement of an incredible line-up of speakers, including Naveen Jain, Sir Richard Branson, Gary Vaynerchuk, Jack Welch, Robin Wright, Steve Forbes, Nassim Nicholas Taleb, Simon Sinek, Malcolm Gladwell, Ray Kurzweil, Jimmy Wales, Guy Kawasaki, and Kimberly Guilfoyle.

SPECIAL JORDAN BELFORT TICKET PRICE
Thanks to cooperation with Jordan Belfort, Synergy Global Forum is offering a limited number of tickets for only $400. Click here to learn more.

Belfort has acted as a consultant to more than fifty public companies, and has been written about in virtually every major newspaper and magazine in the world, including The New York Times, The Wall Street Journal, The Los Angeles Times, The London Times, The Herald Tribune, Le Monde, Corriere della Sera, Forbes, Business Week, Paris Match and Rolling Stone.

He just released his newest book “Way of the Wolf” Straight Line Selling: Master the art of persuasion, influence, and success. His proprietary Straight Line System allows him to take virtually any company or individual, regardless of age, race, sex, educational background or social status, and empower them to create massive wealth, abundance, and entrepreneurial success, without sacrificing integrity or ethics.

Belfort’s two international bestselling memoirs, “The Wolf of Wall Street” and “Catching the Wolf of Wall Street”, have been published in over forty countries and translated into eighteen languages. His life story has even been turned into a major motion picture, starring Leonardo DiCaprio and directed by Martin Scorsese (2013).

Known as one of the world’s premier business events, Synergy Global Forum’s goal is to offer professionals and entrepreneurs in Europe, Asia, and North America an unforgettable experience where they have a chance to learn, network, share, grow and have fun with the brightest minds alive. During the event, 11 global icons will gather on one stage to discuss today’s most cutting-edge topics related to leadership, innovation, strategy, technology, social networks and “fake news,” entrepreneurship, performance, efficiency, social responsibility, intellect and growth. A program can be downloaded by visiting www.synergyglobalforum.com.

The event is supported by MIT Management Executive Education, NYU Stern, Ivy, and Shapr.

Registration is open for the Forum on October 27-28, which will feature the most recognizable and diverse lineup of speakers. Over the two days, 5,500 participants will be able to network with the best professionals and influencers in numerous industries.

Discounted group rates, student rates and hotel rates are also available.

About Synergy Global Forum
“A Master Class in Disruption” presented by Synergy Global Forum 2017 dedicates itself to offering professionals and entrepreneurs in Europe, Asia and North America an unforgettable experience where they have a chance to learn, network, share, grow and have fun with the brightest minds alive. Synergy Global Forum will take place October 27-28 in New York City’s iconic Madison Square Garden. For more information and to register, please visit www.synergyglobalforum.com.

 

View original content:http://www.prnewswire.com/news-releases/anticipation-builds-for-synergy-global-forum-with-the-addition-of-wolf-of-wall-street-to-its-line-up-300540134.html

SOURCE Synergy Global Forum

https://techcrunch.com/2017/10/19/apple-makes-the-case-that-even-its-most-banal-features-require-a-proficiency-in-machine-learning/

Apple makes the case that even its most banal features require a proficiency in machine learning


Amidst narratives of machine-learning complacency, Apple is coming to terms with the fact that if it doesn’t talk about its innovation, the world assumes the innovation never happened.

A detailed post in the company’s machine learning journal makes public the technical effort that went into its “Hey Siri” feature — a capability so banal that I’d almost believe Apple was trying to make a point with highbrow mockery.

Even so, it’s worth taking the opportunity to explore exactly how much effort goes into the features that do, for one reason or another, go unnoticed. Here are five things that make the “Hey Siri” functionality (and competing offerings from other companies) harder to implement than you’d imagine, along with commentary on how Apple managed to overcome the obstacles.

It couldn’t drain your battery and processor all day

At its core, the “Hey Siri” functionality is really just a detector. The detector listens for the phrase, ideally using far fewer resources than the entirety of server-based Siri. Still, it wouldn’t make sense for even this detector to occupy a device’s main processor all day.

Fortunately, the iPhone has a smaller “Always On Processor” that can be used to run detectors. It wouldn’t currently be feasible to fit an entire deep neural network (DNN) onto such a small processor, so instead Apple runs a tiny version of its “Hey Siri” DNN there.

When that model is confident it has heard something resembling the phrase, it calls in backup and has the captured signal analyzed by a full-size neural network. All of this happens in a split second, so quickly that you wouldn’t even notice it.
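This two-stage arrangement is a classic detector cascade. Here is a generic sketch of the pattern; the scoring functions, frame representation, and thresholds are stand-ins for illustration, not Apple’s actual models or numbers.

```python
from typing import Callable, List, Sequence

def cascade_detector(
    frames: Sequence[Sequence[float]],
    small_score: Callable[[Sequence[float]], float],  # tiny always-on model
    large_score: Callable[[Sequence[float]], float],  # full-size model, expensive
    small_threshold: float = 0.4,  # permissive: a cheap miss loses the phrase forever
    large_threshold: float = 0.9,  # strict: this stage has the final say
) -> List[bool]:
    """Run the cheap detector on every frame; wake the expensive one only
    when the cheap score clears its (deliberately low) bar."""
    decisions = []
    for frame in frames:
        if small_score(frame) < small_threshold:
            decisions.append(False)  # rejected cheaply; the big model never runs
        else:
            decisions.append(large_score(frame) >= large_threshold)
    return decisions

# Toy stand-ins: score a "frame" by its mean (cheap) or its max (expensive).
frames = [[0.1, 0.2], [0.5, 0.9], [0.95, 0.99]]
small = lambda f: sum(f) / len(f)
large = lambda f: max(f)
print(cascade_detector(frames, small, large))  # [False, True, True]
```

The asymmetric thresholds are the point of the design: the first stage is tuned to almost never miss a real trigger, because anything it rejects is gone for good, while its false alarms merely cost one run of the larger network.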

All languages and ways of pronouncing “Hey Siri” had to be accommodated

Deep learning models are data-hungry and suffer from what’s called the cold-start problem: the period when a model hasn’t been trained on enough edge cases to be effective. To overcome this, Apple got crafty and pulled audio of users saying “Hey Siri” naturally and without prompting, before the Siri wake feature even existed. Yes, it’s weird that people would attempt to have real conversations with Siri, but crafty nonetheless.

These utterances were transcribed, spot checked by Apple employees and combined with general speech data. The aim was to create a model robust enough that it could handle the wide range of ways in which people say “Hey Siri” around the world.

Apple had to address the pause people would place in between “Hey” and “Siri” to ensure that the model would still recognize the phrase. At this point, it became necessary to bring other languages into the mix — adding in examples to accommodate everything from French’s “Dis Siri” to Korean’s “Siri 야.”

It couldn’t get triggered by “Hey Seriously” and other similar but irrelevant terms

It’s obnoxious when you are using an Apple device and Siri activates without intentional prompting, pausing everything else — including music, the horror! To fix this, Apple had to get intimate with the voices of individual users.

When users initiate Siri, they say five phrases that each begin with “Hey Siri.” These examples get stored and thrown into a vector space with another specialized neural network. This space allows for the comparison of phrases said by different speakers. All of the phrases said by the same user tend to be clustered and this can be used to minimize the likelihood that one person saying “Hey Siri” in your office will trigger everyone’s iPhone.
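The vector-space comparison can be sketched with plain cosine similarity. The embeddings below are hand-made 3-dimensional vectors standing in for the output of a speaker-embedding network, and the threshold is illustrative; Apple’s actual representation and decision rule are not public in this article.

```python
import math

def cosine(a, b):
    """Cosine similarity: 1.0 for identical directions, lower as vectors diverge."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

def enroll(utterance_vectors):
    """Average the enrollment embeddings into a single speaker profile."""
    n = len(utterance_vectors)
    dim = len(utterance_vectors[0])
    return [sum(v[i] for v in utterance_vectors) / n for i in range(dim)]

def is_owner(profile, utterance, threshold=0.95):
    """Accept the trigger only if the utterance lands near the owner's cluster."""
    return cosine(profile, utterance) >= threshold

# Five enrollment phrases from the owner, as toy embedding vectors.
owner_samples = [[0.90, 0.10, 0.20], [0.85, 0.15, 0.25], [0.92, 0.08, 0.18],
                 [0.88, 0.12, 0.22], [0.90, 0.10, 0.20]]
profile = enroll(owner_samples)

print(is_owner(profile, [0.90, 0.11, 0.21]))  # owner saying "Hey Siri" -> True
print(is_owner(profile, [0.10, 0.90, 0.40]))  # a different voice -> False
```

Because each user’s phrases cluster tightly in this space, a colleague’s “Hey Siri” lands far from your profile and fails the similarity check, which is how one voice can wake one phone in a room full of iPhones.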

And in the worst-case scenario, where a phrase passes muster locally but still really isn’t “Hey Siri,” it gets one last vetting from the main speech model on Apple’s own servers. If the phrase is found not to be “Hey Siri,” everything is immediately canceled.

Activating Siri had to be just as easy on the Apple Watch as the iPhone

The iPhone might seem limited in horsepower when compared to Apple’s internal servers, but the iPhone is a behemoth when compared to the Apple Watch. The watch runs a distinct model for detection that isn’t as large as the full neural network running on the iPhone or as small as the initial detector.

Instead of always running, this mid-sized model only listens for the “Hey Siri” phrase when a user raises their wrist to turn the screen on. Because of this and the ensuing potential delay in getting everything up and running, the model on the Apple Watch is specifically designed to accommodate variations of the target phrase that are missing the initial “H” sound.

It had to work in noisy rooms

When evaluating its detector, Apple uses recordings of people saying “Hey Siri” in a variety of situations — in a kitchen, car, bedroom, noisy restaurant, up close and far away. The data collected is then used for benchmarking accuracy and further tuning the thresholds that activate models.

Unfortunately, my iPhone still doesn’t understand context, and Siri was triggered so many times while I was proofreading this piece aloud that I tossed my phone across the room.

https://www.bloomberg.com/news/articles/2017-10-19/model-3-reliability-likely-to-be-average-consumer-reports-says

Tesla Model 3 Reliability Likely to Be Average, Consumer Reports Says


Tesla Inc. probably will pull off average reliability with the Model 3 sedan Elon Musk is counting on to dramatically expand demand for his electric cars, according to Consumer Reports magazine.

Consumer Reports made the prediction based on the amount of technology the Model 3 shares with the larger Model S sedan, which the magazine’s subscribers rated above average for the first time in an annual survey. Tesla still ranks among the bottom third of the 27 brands, due to continued struggles with its Model X crossover.

The closely watched source of product recommendations is forecasting that Musk will be able to hit the mark with the first model he’s attempted to mass-manufacture. The Model X exemplifies Tesla’s checkered history of new-product introductions, something even larger automakers with decades of high-volume assembly experience still struggle with, Consumer Reports said.

“They realize that it’s important to get this car right,” Jake Fisher, the magazine’s director of auto testing, said of Tesla in an interview. “We would’ve not predicted average for the Model 3 unless we saw above-average data for the Model S. If the Model S was still just average, we would’ve not made that prediction.”

Consumer Reports has built credibility for its ratings by refusing advertising from automakers and paying for the vehicles it evaluates. The magazine’s views on Tesla have been up and down: The Model S scored an off-the-charts rating in 2015, but the magazine pulled its recommendation two months later after owners reported an array of quality problems. In April, Tesla models were downgraded after owners went months without an automatic braking system. By July, the safety feature was fully restored and the Model S and Model X recouped the points they lost.

“Consumer Reports has not yet driven a Model 3, let alone do they know anything substantial about how the Model 3 was designed and engineered,” a Tesla spokeswoman said in an emailed statement. “Time and time again, our own data shows that Consumer Reports’ automotive reporting is consistently inaccurate and misleading to consumers.”

Shares of the Palo Alto, California-based company were down 2.3 percent to $351.28 at 3:40 p.m. New York time. The stock is up 65 percent this year.


The magazine cited the relative mechanical simplicity of electric cars as another reason the Model 3 probably will be average — a tough feat to pull off with new models that tend to experience “growing pains.” The car will likely lag behind General Motors Co.’s electric Chevrolet Bolt, which launched with above-average reliability, Consumer Reports said.

Tesla has delayed the unveiling of an electric semi truck and fired workers this month following a slower-than-projected start of production for the Model 3. The company has said it’s dealing with unspecified bottlenecks and that the dismissals were related to performance reviews.

Consumer Reports routinely predicts reliability for vehicles that are new to the market, based on manufacturers’ track record and factors including the number of components carried over from previous models.

Best, Worst

The improvement by the Model S made Tesla one of the biggest gainers in this year’s rankings, with the brand jumping four spots to 21st out of 27. The Model X remains “terrible” in terms of reliability, Fisher said.

Tesla’s SUV and GM’s Cadillac Escalade were the two vehicles with the most problems in the survey, Fisher said in a presentation Thursday to the Automotive Press Association in Detroit.

Kia Motors Corp.’s Niro crossover was the least problematic. Much of Kia’s technology goes into production first in Hyundai models, so the bugs get worked out before Kia uses it, Fisher said.

Biggest Movers

Fiat Chrysler Automobiles NV’s Chrysler was the most improved in this year’s reliability survey, vaulting 10 spots to rank 17th. The brand discontinued its poor-performing 200 sedan, and the new Pacifica minivan rated average.

Volkswagen AG’s namesake brand climbed six spots to 16th. Subaru Corp.’s line of vehicles passed five other brands to rank sixth, while BMW, Tesla and Fiat Chrysler’s Ram all gained four spots.

The biggest decliner this year was Honda Motor Co.’s Acura. Each of its models rated below average with the exception of the redesigned RDX crossover, dropping the brand seven spots to No. 19. Mazda Motor Corp. and GM’s Cadillac both dropped six spots.

Problems with transmissions and infotainment systems were the most common in the survey, Fisher said during his presentation. Transmission issues are weighted more heavily, because they are harder to fix, he said.

American Split

All four of GM’s brands fell, with GMC and Cadillac finishing in the bottom two and Buick slipping five spots to rank eighth.

The Detroit-based company’s reliability tends to follow its product cycle, according to Consumer Reports. When GM has more new models, as it does now, there are more problems.

At the same time, Fiat Chrysler — long a laggard in the magazine’s surveys and other quality studies — improved with each of its traditional U.S. brands.

Still on Top

Toyota Motor Corp.’s namesake brand jumped ahead of its luxury line Lexus to take first place in this year’s rankings.

This is the fifth straight year the brands have been the two highest-scoring in the industry. Kia moved up two spots to No. 3, Volkswagen’s Audi held steady at No. 4, and BMW rounded out the top five.

https://www.theverge.com/2017/10/19/16502624/microsoft-google-security-patches-chrome-bug

Microsoft hits back at Google’s approach to security patches

Microsoft’s Windows security team hasn’t been happy with Google for the past year. While the two are bitter rivals for a number of reasons, Google disclosed a major Windows bug last year before Microsoft was ready to patch it. That irritated the company so much that Windows chief Terry Myerson authored a blog post criticizing Google for not disclosing security vulnerabilities responsibly. The resentment still remains today.

Microsoft discovered a remote Chrome vulnerability last month and is now demonstrating what it feels is responsible disclosure. In a new blog post, Microsoft’s Windows security team outlines a remote code execution issue in Chrome, and criticizes Google’s approach to security patches. “We responsibly disclosed the vulnerability that we discovered along with a reliable remote code execution exploit to Google on September 14, 2017,” explains Jordan Rabet, a Microsoft Offensive Security Research team member. Google patched the problem within a week in its beta versions of Chrome, but the stable and public channel “remained vulnerable for nearly a month.”

That wouldn’t normally be an issue for most software patches, but Microsoft criticizes Google’s approach of making the source code for the fix available on GitHub ahead of the stable-channel fix. That gave attackers a month to discover the flaw. Rabet calls it “problematic when the vulnerabilities are made known to attackers ahead of the patches being made available.”

Despite these jabs, Microsoft’s long and detailed blog post is more about reminding the industry about its position on disclosing security patches. Microsoft takes the opportunity, more than once, to point out that it disclosed the Chrome bug privately, and that it will continue to do this to promote its approach across the industry.

Google has been criticized for its approach to vulnerability disclosures, allowing engineers to disclose details seven days after they’re reported to vendors. The search giant regularly finds and discloses security issues in Microsoft’s software, and occasionally publishes details before products are patched. It’s this approach that has angered Microsoft so much, and it’s clear the company will take any opportunity to call Google out on it.