APPLE’S MACHINE LEARNING ENGINE COULD SURFACE YOUR IPHONE’S SECRETS
OF THE MANY new features in Apple’s iOS 11—which hit your iPhone a few weeks ago—a tool called Core ML stands out. It gives developers an easy way to implement pre-trained machine learning algorithms, so apps can instantly tailor their offerings to a specific person’s preferences. With this advance comes a lot of personal data crunching, though, and some security researchers worry that Core ML could cough up more of that information than you might expect, to apps you’d rather not have it.
Core ML boosts tasks like image and facial recognition, natural language processing, and object detection, and supports a lot of buzzy machine learning tools like neural networks and decision trees. And as with all iOS apps, those using Core ML ask user permission to access data streams like your microphone or calendar. But researchers note that Core ML could introduce some new edge cases, where an app that offers a legitimate service could also quietly use Core ML to draw conclusions about a user for ulterior purposes.
“The key issue with using Core ML in an app from a privacy perspective is that it makes the App Store screening process even harder than for regular, non-ML apps,” says Suman Jana, a security and privacy researcher at Columbia University, who studies machine learning framework analysis and vetting. “Most of the machine learning models are not human-interpretable, and are hard to test for different corner cases. For example, it’s hard to tell during App Store screening whether a Core ML model can accidentally or willingly leak or steal sensitive data.”
The Core ML platform offers supervised learning algorithms, pre-trained to be able to identify, or “see,” certain features in new data. Core ML algorithms prep by working through a ton of examples (usually millions of data points) to build up a framework. They then use this context to go through, say, your Photo Stream and actually “look at” the photos to find those that include dogs or surfboards or pictures of your driver’s license you took three years ago for a job application. It can be almost anything.
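To make that concrete, here is a minimal sketch of what inference against a pre-trained model looks like, run through Apple’s coremltools package from Python rather than the Swift APIs an app would actually call. The model file and its input and output names (“image”, “classLabel”) are assumptions for illustration.

```python
# Minimal sketch: classifying a photo with a pre-trained Core ML model.
# The model file and its "image"/"classLabel" names are hypothetical;
# a shipping app would do this through the Core ML framework in Swift.
import coremltools as ct
from PIL import Image

model = ct.models.MLModel("MobileNet.mlmodel")  # assumed pre-trained classifier

# Resize the photo to the input size the model expects (224x224 is typical).
photo = Image.open("IMG_0042.jpg").resize((224, 224))
prediction = model.predict({"image": photo})

print(prediction["classLabel"])  # e.g. "golden retriever" or "surfboard"
```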
‘It’s hard to tell during App Store screening whether a Core ML model can accidentally or willingly leak or steal sensitive data.’
Suman Jana, Columbia University
For an example of where that could go wrong, think of a photo filter or editing app that you might grant access to your albums. With that access secured, an app with bad intentions could provide its stated service, while also using Core ML to ascertain what products appear in your photos, or what activities you seem to enjoy, and then go on to use that information for targeted advertising. This type of deception would violate Apple’s App Store Review Guidelines. But it may take some evolution before Apple and other companies can fully vet the ways an app intends to utilize machine learning. And Apple’s App Store, though generally secure, does already occasionally approve malicious apps by mistake.
Attackers with permission to access a user’s photos could have found a way to sort through them before, but machine learning tools like Core ML—or Google’s similar TensorFlow Mobile—could make it quick and easy to surface sensitive data instead of requiring laborious human sorting. Depending on what users grant an app access to, this could make all sorts of gray behavior possible for marketers, spammers, and phishers. The more mobile machine learning tools exist for developers, the more screening challenges there could be for both the iOS App Store and Google Play.
Core ML does have a lot of privacy and security features built in. Crucially, its data processing occurs locally on a user’s device. This way, if an app does surface hidden trends in your activity and heartbeat data from Apple’s Health tool, it doesn’t need to secure all that private information in transit to a cloud processor and then back to your device.
That approach also cuts down on the need for apps to store your sensitive data on their servers. You can use a facial recognition tool, for instance, that analyzes your photos, or a messaging tool that converts things you write into emojis, without that data ever leaving your iPhone. Local processing also benefits developers, because it means that their app will function normally even if a device loses internet access.
iOS apps are only just starting to incorporate Core ML, so the practical implications of the tool remain largely unknown. A new app called Nude, launched on Friday, uses Core ML to promote user privacy by scanning your albums for nude photos and automatically moving them from the general iOS Camera Roll to a more secure digital vault on your phone. Another app scanning for sexy photos might not be so respectful.
A more direct example of how Core ML could facilitate malicious snooping is a project that takes the example of the iOS “Hidden Photos” album (the inconspicuous place photos go when iOS users “hide” them from the regular Camera Roll). Those images aren’t hidden from apps with photo access permissions. So the project converted an open-source neural network that finds and ranks illicit photos to run on Core ML, and used it to comb through test examples of the Hidden Photos album to quickly rate how salacious the images in it were. In a comparable real-world scenario, a malicious dev could use Core ML to find your nudes.
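For a sense of how little effort that kind of port takes, here is a hedged sketch of converting an open-source image classifier to Core ML with coremltools. The network here is torchvision’s generic MobileNet, a stand-in for the NSFW-detection model the project actually used.

```python
# Hedged sketch: converting an open-source classifier to Core ML.
# MobileNet is a placeholder; the project used an NSFW-detection network.
import torch
import torchvision
import coremltools as ct

net = torchvision.models.mobilenet_v2(weights="DEFAULT").eval()
example = torch.rand(1, 3, 224, 224)          # dummy input for tracing
traced = torch.jit.trace(net, example)

mlmodel = ct.convert(
    traced,
    inputs=[ct.ImageType(name="image", shape=example.shape)],
    convert_to="neuralnetwork",               # the classic .mlmodel format
)
mlmodel.save("Classifier.mlmodel")            # ready to drop into an Xcode project
```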
Researchers are quick to note that while Core ML introduces important nuances—particularly to the app-vetting process—it doesn’t necessarily represent a fundamentally new threat. “I suppose CoreML could be abused, but as it stands apps can already get full photo access,” says Will Strafach, an iOS security researcher and the president of Sudo Security Group. “So if they wanted to grab and upload your full photo library, that is already possible if permission is granted.”
The easier or more automated the trawling process becomes, though, the more enticing it may look. Every new technology presents potential gray areas; the question now with Core ML is what sneaky uses bad actors will find for it along with the good.
Can a supercomputer for web apps challenge the iPad Pro?
There are many questions about the Pixelbook, a new $999 Chromebook manufactured by Google. Who should buy this thing? Who would want to? Is it supposed to compete with MacBooks? Surface tablets? Windows convertibles? iPads? iPad Pros? All of the above? Can a laptop that is mainly designed to run the Chrome browser (with a side of Android apps) really be worth spending this much money on? Is Google high?
The answers to all of those questions depend entirely on something that’s different for everybody: what do you do when you really push your computer? For some, it’s video editing. For others, it’s photography or gaming or Excel or whatever. For 90 percent of what most of us do on computers, anything decently fast and nice will get the job done. But for that last 10 percent, everybody needs something different.
So it’s impossible to give you all the answers, even after spending just under two weeks using the Pixelbook as my main computer. I set aside both my MacBook and my iPad Pro and just used this machine to see if I could finally figure out that last 10 percent for myself.
Before we get into all that, let’s talk about this hardware. It’s superb. Laptops, even convertible laptops (lamentably called 2-in-1s now), have been around a long time, and I figured I’d seen pretty much every variation of them. But Google has created an industrial design that is not only unique, but uniquely functional.
Closed, it’s just under half an inch thick and weighs just over two and a half pounds. It’s all aluminum and Gorilla Glass, and it’s as sturdy as anything I’ve used. The keyboard has plenty of travel; it feels way better than the keyboard on most laptops this thin, especially compared to the MacBook or a Surface Pro. And, in a sad rarity for Chromebooks, it’s properly backlit.
There are subtle and not-so-subtle design elements that artfully combine form and function. The most prominent design element is the glass shade on the back, which is there to allow more wireless signal through and also provide visual symmetry. Three of the four sides of this laptop have a symmetrical, white panel on them.
The white panel on the keyboard deck is the most interesting. The palmrests here are made of silicone. Google insists that it won’t yellow over time or get too dirty; Google engineers have been taping the material onto old laptops to test it for a year now. It gives a nicer feel when you’re typing compared to cold aluminum. But it also has two other functions: to keep the screen from getting pressed against the keys, and to serve as anti-skid pads when you’re using it in tablet or easel mode.
The display is a 12.3-inch touchscreen in a 3:2 aspect ratio, with a resolution of 2400 x 1600. It looks great and gets bright enough to use in sunlight, but be warned: it’s super reflective. The only real problem with the screen is actually with the bezels that surround it. They’re too big. Google says it’s to ensure thinness and to make it easier to hold in tablet mode, but other companies have figured out how to make them smaller.
As for specs, the base model comes with a seventh-generation Intel Core i5 processor, 8GB of RAM, and 128GB of storage. Almost all of those numbers are overkill for what most people think they want on a Chromebook, but here they’re put to good use. The processor makes it fast, the RAM lets you have more tabs and apps open, and the storage is for movies and music and games you store with Android apps. As with a lot of recent machines, Google makes it all work without the need for a fan, and I haven’t had any problems with heat on the Pixelbook.
The trackpad is glass and is fast and accurate — though if you wanted to gripe you could say that when you squeeze the closed laptop you can feel it click. One nice touch is that the speakers fire up out of the base of the laptop into the hinge, so in most configurations the sound isn’t muffled like you’ll find on other convertibles.
Google’s other hardware this fall has had some problems, but the Pixelbook is stellar. It’s elegant, sturdy, fast, and smartly designed. If you’re wondering where your thousand bucks is going, there’s your answer. Judged just as a physical object, it’s my favorite laptop of the year.
But of course, you don’t buy a laptop just to appreciate its industrial design. You have to actually use the thing. Which brings us to the software.
The usual rules for Chrome OS apply: you can run web apps, including ones designed specifically for Chrome. You can save any webpage as a “window” that opens up like an app instead of just another tab in your browser. You can split-screen windows (in laptop mode, anyway), save stuff in a file system, and get stuff directly from Google Drive. You can do it all with the trackpad and keyboard, or by tapping on the screen. (Even small buttons seem easy to hit.) Perhaps most importantly, it’s a rock-simple and rock-solid OS that gets security and OS updates every six weeks or so, almost without fail.
Taken just at that level, as a laptop designed to do Chrome stuff, the Pixelbook could justify its asking price for a fair number of people. Usually, when I talk about Chromebooks, I have to give a little spiel about how many tabs you can have open at any given time. Using this one, I have yet to reach that limit and it has almost never felt bogged down. It is basically a supercomputer for web apps.
I’m impressed by performance, but have found battery life to be nothing to write home about. Google claims about 10 hours of mixed use, but I haven’t quite hit that. Admittedly, I’ve pushed this machine pretty hard, but I wouldn’t trust it past eight hours or so, unless you’re being super diligent about managing power. At least it charges super fast via either of the two USB Type-C ports.
Google has done some things to the basic OS to improve the experience. The launcher has been redesigned as a big black shade with icons that better distinguish between web apps and Android apps. There’s also a Google search bar at the top of it. You use the same search bar for the web and for local, on-device searches. It’s similar to what you can do with Cortana on Windows or Spotlight on the Mac.
But, perhaps confusingly, there’s another place to type (or speak) your queries: the Google Assistant. Google gave it its own button on the keyboard: when it pops up, you just type your question. If you’re typing, it responds silently with the answer. If you say “OK Google,” it responds audibly.
It works just as well here as it does on your phone or on a Google Home. And it can also read your screen, presenting a best-guess answer based on what it sees on your display right when you first open it up. It pays special attention to anything you’ve highlighted, which is super smart. You can also circle things with the Pixelbook Pen (more on that in a bit) to do a more specific search on an image.
My only complaint about using Chrome and web apps on the Pixelbook comes when you flip the screen around into tablet mode. For now (though changes are coming), everything defaults to full screen in tablet mode. When you flip it back, your windows end up willy-nilly all over the place. There’s also no simple way to set up virtual desktops, as you can on Windows or Mac.
With the Pixelbook, Android apps on Chrome OS are officially out of beta. That beta period lasted a long time, and much of it was not fun at all to use. But now that these apps are official, everything’s fine, right?
Well. Let’s get into it.
Here’s the good news: Android apps run very well on the Pixelbook, with nearly none of the showstopping issues I had before. Games don’t stutter unless you have a lot of background stuff running, nothing freezes up the machine, and apps aren’t randomly quitting. This is where the Pixelbook’s high-end hardware proves its worth, but some of the better performance is thanks to bug fixes as well.
That is the lowest possible bar, of course, so here is some slightly higher praise. Some apps, especially Microsoft Office apps, Netflix, and several Google apps are genuinely great. You can resize many of them like regular windows instead of being forced to look at either weird phone-sized rectangles or blown-up full-screen versions. Their interactions with regular Chrome windows aren’t seamless (drag and drop is nonexistent), but they basically feel integrated with the rest of the OS.
Even other apps that aren’t optimized for the Pixelbook’s large screen are still just nice to have. I’ve saved a bunch of Spotify playlists offline. Having Facebook Messenger available as a little Android app is a little more convenient than the web app. There’s just a ton of little things that are more useful to do inside an Android app than in a web app.
The problem is that the work of uniting Android land and Chrome land isn’t done yet — and in many cases they don’t even speak the same language. Using it, you feel like a dual citizen who isn’t totally at home in either country. Adobe Lightroom CC, for example, is a fairly powerful Android app now, but it thinks it’s running on an Android device (I mean, technically it is), so it sometimes doesn’t see stuff Chrome OS can, like unzipped folders.
The dual citizen problem extends to other areas. There are lots of apps that have both web and Android versions, so you’ll need to choose one. Gmail’s Android app does a better job offline, but the web app is more powerful. Pick one, sure, but then you have to pick which version of the app is going to give you notifications. Same for Slack, which works better in tablet mode as an Android app, but better as a web app when you have the keyboard out.
Even if you get to a place where it all makes sense (and I feel like I’m pretty close), you still run into head-slappingly silly moments like switching to tablet mode and finding out that the Spotify app is a tiny little rectangle floating in the middle of a vast, black expanse. I get why that’s happening, sure, but I also know it should not be like that. Users shouldn’t have to think about whether apps are using the right APIs for window sizing.
Most of these problems can be solved with better, updated versions of Android apps that understand that they’re running on a Chromebook. But let’s not pretend that it’s a good idea to trust that Android apps will be updated to work better on big, tablet-sized screens. Google’s struggled with this exact issue for years.
The other headline feature — or option, rather — is the Pixelbook Pen. It’s a separate, $99 active stylus that lets you do stylus things on the Pixelbook. It was developed in collaboration with Wacom, so it can detect angle and pressure. Unlike Apple’s Pencil and its ridiculous Lightning-port power plug, the Pixelbook Pen just uses a AAAA battery that should be good for about a year. You don’t even have to bother pairing it over Bluetooth. It just works.
Except that it only “just works” with Android and web apps that have been updated to support Google’s latest APIs. On those apps, it operates with minimal (sometimes undetectable) lag. On apps that don’t use the latest code, the lag between moving the pen and seeing pixels on the screen is downright atrocious.
I am not really a stylus kind of guy, so I’m not a great judge of this pen. But after talking with my colleague James Bareham and trying out the pen with a bunch of apps, the gist is that, as a piece of hardware, Google did all the right things here. But hardware needs software, and there’s just not enough support yet to say if this thing is worth the extra hundred bucks.
One trick I do like: you can hold the single button down on the pen and then circle something on the screen to do a search with Google Assistant. It works, and it’s fun. But that’s a bad reason to buy the pen.
When I think about whether the Pixelbook could reasonably replace a MacBook or a Windows laptop, my gut says that, for most people, the answer is “no.” To solve the “last 10 percent” on a Pixelbook, you really have to be very savvy about how to navigate the different computing paradigms of Chrome and Android to make the whole thing work — and even then, it’s not easy. Unless you’re an expert in the ways of both the web and Android, it shouldn’t be your only computer.
If I were Apple or Microsoft, I would be thinking a lot about the generation of students who are savvy with Chromebooks and Android apps, and who might just want the same thing they’re used to from their classroom, just in a much nicer package. I don’t know that it’ll happen this year, though.
Honestly, I think the iPad Pro is a better comparison. On both devices, you can get quite a lot more done than you’d expect, but you have to deeply understand how the platform works to get there. And if you’re debating between them, here’s the TL;DR: the iPad Pro has better apps, is a tablet-first device, and has a worse web browser. The Pixelbook has worse apps, is a laptop-first device, and has a better web browser.
Just like the iPad Pro, the Pixelbook is an incredibly nice and powerful machine that can handle most of your computing tasks — but probably not all of them.
DeepMind wants to find the next miracle material—experts just don’t know how they’ll pull it off
Artificial intelligence has historically over-promised and under-delivered. That routine leads to spurts of what those in the field call “hype”—outsized excitement about the potential of a core technology—followed after a few years and several million (or billion) dollars by crashing disappointment. In the end, we still don’t have the flying cars or realistic robot dogs we were promised.
But DeepMind’s AlphaGo, a star pupil in a time we’ll likely look back on as a golden age of AI research, has made a habit of blowing away experts’ notions of what’s possible. When DeepMind announced that the AI system could play Go on a professional level, masters of the game said it was too complex for any machine. They were wrong.
Now AlphaGo Zero, the AI’s latest iteration, is being set to tasks outside of the 19×19 Go board, according to DeepMind co-founder Demis Hassabis.
“Drug discovery, proteins, quantum chemistry, material design—material design, think about it, maybe there is a room-temperature superconductor out and about there,” Hassabis said. “I used to dream about that when I was a kid reading through my physics books. That would be the Holy Grail, a superconductor discovery.”
So what’s hype, and what’s reality? Can AlphaGo Zero, itself considered impossible just a few years ago, be the tool that finally gets us to an often-promised future? Or is DeepMind falling into the Silicon Valley trap of believing that every problem can be solved with a better algorithm?
When discussing the possibilities, Hassabis outlined two criteria for AlphaGo Zero to be effective at a given task. (We’ll refer to the AI as Zero from now on, because it’s shorter and I like to imagine the algorithm as the plucky hole-digger of the same name from Louis Sachar’s young adult novel, Holes):
Zero needs a realistic simulator for the environment it’s working in (in Go, that was a simulated board). The simulator is important because it allows Zero to test faster than physically possible. In other words, it doesn’t need to actually move pieces to play 5 million Go games—the games are virtual, and multiple games are played in parallel in a matter of seconds.
There needs to be an “objective function.” In computer science, that’s just a number to optimize, i.e. make smaller or bigger. Versions of AlphaGo, for example, optimized for the projected percentage that the AI would win the game. In something like materials science, this number could be conductivity. That’s typically the easy part to come up with in a new problem.
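As a toy illustration of that second criterion, here is what an objective function and a naive optimizer look like in code. The “conductivity” function is a made-up stand-in, not real materials physics; the point is only that the optimizer needs a single number to push up.

```python
# Toy objective function: a single number an optimizer tries to maximize.
# The "material" is a vector of made-up composition fractions and the
# score is fake; both are hypothetical stand-ins for illustration.
import random

def conductivity(composition):
    # Fake objective that peaks when the three fractions are balanced.
    a, b, c = composition
    return 1.0 - ((a - b) ** 2 + (b - c) ** 2 + (a - c) ** 2)

best, best_score = None, float("-inf")
for _ in range(10_000):
    # Random search: sample candidate compositions that sum to 1.
    x = sorted(random.random() for _ in range(2))
    candidate = (x[0], x[1] - x[0], 1.0 - x[1])
    score = conductivity(candidate)
    if score > best_score:
        best, best_score = candidate, score

print(best, best_score)  # approaches (1/3, 1/3, 1/3) with a score near 1.0
```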
Machine learning has been around in materials science since the early 2000s, and algorithms in use today already do much of what DeepMind suggests Zero could do. Gerbrand Ceder, head of the CEDER experimental materials design group at Berkeley, says that algorithms currently used by materials scientists analyze the characteristics that make a material ideal for a certain property—whether that be conductivity or something else—and then look for other compounds with similar characteristics that haven’t yet been tested against those criteria. If none exist, they try to generate a compound that would fit the bill. Scientists then get a curated list of high-potential compounds, which speeds up the physical testing process in the lab. These discoveries have already helped with research into optimizing the lithium-ion batteries in phones and electric cars. Some specialized simulations can replace experiments in the lab, but Ceder says machine learning isn’t needed for those problems, since we can already compute them extremely quickly.
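A rough sketch of that screening loop, with synthetic placeholders for the descriptors and the property being predicted; real pipelines compute features from composition and crystal structure rather than random numbers.

```python
# Rough sketch of ML-assisted materials screening: fit a model on compounds
# whose property is known, then rank untested candidates by predicted value.
# All data here is synthetic, purely for illustration.
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)

X_known = rng.random((500, 8))           # descriptors of characterized compounds
y_known = X_known @ rng.random(8)        # their measured property (synthetic)
X_candidates = rng.random((10_000, 8))   # descriptors of untested compounds

model = RandomForestRegressor(n_estimators=200, random_state=0)
model.fit(X_known, y_known)

scores = model.predict(X_candidates)
shortlist = np.argsort(scores)[::-1][:20]  # top 20 candidates for the lab
print(shortlist)
```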
But the technology’s use is still nascent; three experts who spoke to Quartz attributed that to the relatively small amount of data. Simulators, like Zero requires, are built on having enough data to predict how an action would take place in the real world—we just haven’t done enough experiments to accurately build a versatile simulator. Even if we did, the molecular world is a lot more complex than a Go board, says Evan Reed, who leads the computational materials science group at Stanford University.
“You could try to couple this with a physics-based code, but there are no physics-based codes that predict the critical temperature of a high-temperature superconductor,” Reed says. “There are some problems where you just can’t couple it with another algorithm.”
Reed says that with the quantity of a material typically used, there are around 10^23 atoms; for a compound like steel, there are nearly innumerable ways those 10^23 atoms could be configured over a period of time, each configuration producing a different set of attributes for the material. A simulator would first have to be able to model all of those possibilities, before it could even begin to run the millions of times required for Zero to learn how it works.
“You need an algorithm that calculates, using quantum mechanics, the properties of this material with large numbers of atoms, and you’ve got to do it many many times to sample lots of different possible atomic configurations,” Reed says. “Right now, today, that’s a completely intractable problem.”
DeepMind declined to comment.
Valentin Stanev, who does materials science research at the University of Maryland, suggests that no conventional machine would be able to work efficiently enough to compute all this data. In his view, the field’s savior won’t be AI but a shift in how computers themselves work. He’s hoping quantum computing, an experimental branch of computing that flexes the known laws of physics to process data more efficiently, will be able to tackle these endlessly complex problems.
“Imagine playing the game of Go, but instead of making one move at a time, you make all the moves, just with different probabilities,” Stanev says. “We cannot really solve the problem [without quantum computing].”
Gerbrand Ceder at Berkeley says that the only way to really get data up to Zero’s simulator-learning needs would be to physically automate experimentation in the real world.
“The equivalent [of Zero’s Go data] would be if we could set up self-experimenting: Could we make a machine that takes a lot of compositions, makes the stuff, measures the properties, and then iterates on it?” he said. “You would have to automate all the experimental steps—which by the way, should be done. This is kind of why materials science lives in the Stone Age; this is what makes it so slow.”
That testing process might prove necessary across the sciences, including drug discovery, which Hassabis mentioned. Startups like Atomwise have been working on AI approaches to virtually simulating drug interactions for years, but they still only make progress once those drugs are tested in the lab and iterated. Atomwise is now involved with 37 research projects.
“There really are not shortcuts where the technology off the shelf swaps out a Go board for some application in this domain and it just works like magic,” says Atomwise co-founder and COO Alexander Levy. “There are a lot of details in practice.”
SolarCity, the solar energy company Tesla acquired last year, has fired hundreds of additional workers, according to six anonymous sources who talked to CNBC. These dismissals are in addition to hundreds more that were reported earlier this month, and they are on top of previously announced layoffs, CNBC reports. All told, around 1,200 people at Tesla and SolarCity have lost their jobs in the recent wave of firings, employees told CNBC.
Reached by e-mail, Tesla referred back to a statement sent out earlier this month when the initial firings were announced. “As with any company, especially one of over 33,000 employees, performance reviews occasionally result in employee departures,” the company wrote in early October. The company says that recent departures were part of the same review process. The company also emphasized that the process also led to “recognition of top performers with additional compensation and equity awards.”
Some employees weren’t happy with how the process was carried out. SolarCity employees told CNBC that they “were surprised to be told they were fired for performance reasons, claiming Tesla had not conducted performance reviews since acquiring the solar energy business.”
CNBC says some employees were fired individually, while others were fired in group meetings. Three employees told CNBC that they asked for copies of their negative performance reviews but hadn’t gotten them. A Tesla spokesperson declined to comment on how the review process was conducted.
In September, SolarCity announced around 200 layoffs in offices in Roseville, California. Those notifications were required by California’s WARN Act, which requires advance notice if companies lay off more than 50 people. But the new round of job losses extends well beyond that office, employees said, with firings in Nevada, Arizona, Utah, and elsewhere.
SolarCity signaled in May that it was ending door-to-door sales. The company said at the time that affected employees would be considered for other jobs within the company.
Tesla earnings: Model 3 production, demand under the microscope
Quarterly loss likely to be brushed off as investors look for update on sales and manufacturing issues
By Claudia Assis, Reporter
Through delays, recalls, and wildly optimistic projections, Wall Street has stuck with Tesla Inc. shares, which have gained three times as much as the benchmark this year.
Tesla (TSLA) is expected to report a third-quarter loss after the bell Wednesday, but that too is likely to be brushed off.
All eyes will be on whether the company can make a good case that it has worked out all the kinks with the production of the Model 3, its first electric car aimed at the masses.
Moreover, if Tesla is able to show that demand for the Model 3 continues to be strong and that the market for its luxury vehicles is not reaching saturation, investors will look past the quarterly numbers, said Bill Selesky, an analyst with Argus Research.
Production shortfalls could become more of a concern in 12 to 18 months, but right now investors are more interested in hearing from the demand side.
“That’s going to be the key factor,” Selesky said. “Most people realize the production ramp will be difficult to get to.”
Here’s what to expect from Tesla’s earnings:
Earnings: Analysts surveyed by FactSet expect Tesla to report an adjusted loss of $2.31 a share in the quarter, versus adjusted earnings of 71 cents a share in the third quarter of 2016. Tesla has posted three consecutive quarterly losses.
Estimize, a crowdsourcing platform that gathers estimates from Wall Street analysts as well as buy-side analysts, hedge-fund managers, company executives, academics and others, has projected an adjusted loss of $2.14 a share.
Revenue: The FactSet revenue consensus is $2.94 billion, which would be up from $2.30 billion in the same quarter last year. Estimize is projecting revenue of $2.97 billion.
Stock price: Tesla’s stock has performed strongly despite the recent ups and downs. Shares have gained more than 52% this year, compared with gains of around 14% for the S&P 500 index and 18% for the Dow Jones Industrial Average. That’s also nearly double the gain for shares of competitor General Motors Co., and a contrast with losses of 1% for Ford Motor Co.
Other issues: Despite the share run so far this year, this has been a humbling month for Tesla. The stock is down 4.5% in October, an aftershock of news earlier in the month that Tesla delivered only a fraction of the Model 3 sedans that Chief Executive Elon Musk had promised.
At the time, Tesla pinned the slower-than-expected Model 3 production ramp on “bottlenecks” but reiterated there were no fundamental issues with production or the supply chain and that it knew what had to be fixed.
Tesla delivered 220 and produced 260 Model 3s in the third quarter. Musk had said that he would expect to see production increase to 1,500 Model 3s by September, with plans to dial that up to 5,000 by the end of the year and 10,000 a week by the end of 2018.
Tesla will need to produce anywhere from 1,200 to 1,700 Model 3 vehicles a day, depending on how long it wants its work week to be, to clear the Model 3 backlog (Tesla has said preorders hover around 450,000) and book the associated revenue in the span of 12 months, said Rebecca Lindland, an analyst with Cox Automotive.
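The arithmetic behind that range is straightforward, assuming a 450,000-car backlog cleared over 52 weeks:

```python
# Daily Model 3 output needed to clear a ~450,000-car backlog in 12 months,
# depending on how many days a week the production line runs.
backlog = 450_000

for days_per_week in (7, 6, 5):
    working_days = days_per_week * 52
    print(f"{days_per_week}-day week: ~{backlog / working_days:,.0f} cars/day")
# 7-day week: ~1,236/day; 5-day week: ~1,731/day — the 1,200-1,700 range above.
```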
“This will be extremely difficult unless they start to get production really rolling,” she said. “A much more likely scenario is two full years to clear the backlog, but will people at the back of the line wait an additional year? I’ll be curious to see what leaks about the abandonment rate of the Model 3 deposits. I know I’m having second thoughts.”
Tesla is also juggling other major initiatives, including the launch of a semi truck next month (a project that got delayed from September) and securing a deal to set up a factory in China, as reported by The Wall Street Journal this week, citing people familiar with the plan.
That’s in addition to other projects Musk is working on, including SpaceX, a privately held rocket company.
“I am a huge fan of the guy, and I think everyone wins if he succeeds. But there does come a time when even Wall Street will be expecting Tesla to generate profits,” Lindland said.
Daimler unveils its electric truck weeks ahead of Tesla’s big debut
It’s called the E-Fuso Vision One and can carry 11 tons up to 220 miles before recharging.
We’ve known for a while that Tesla was set to unveil its electric truck this fall; it’s currently set for November 16th, after a few delays. But Daimler AG has stolen its thunder by announcing a new heavy-duty electric truck today, another indication that Daimler increasingly sees Tesla as one of its main rivals.
The truck is called the E-Fuso Vision One. Bloomberg reports that it can carry up to 11 tons a distance of 220 miles before it needs a recharge. This model is just a prototype, but the company says it can have the truck on sale within four years in the US, Japan and Europe. It’s ideal for shorter trips between cities, rather than cross-country hauling.
Tesla was set to unveil its truck tomorrow, October 26th, but the company delayed the announcement because of Model 3 production issues. It’s worth mentioning that even that date was delayed, as the original reveal was targeted for September. You can bet that Elon Musk is not thrilled with Daimler AG today.
IBM is the latest tech giant to extend parental leave
IBM is giving new mothers and fathers more time off, as well as helping with adoption and surrogacy costs.
Another tech company is upping its parental leave game.
New moms who have given birth since November 2016 are getting their paid parental leave bumped up from 14 to 20 weeks, IBM said in a blog post Wednesday. Paid leave for fathers, partners or adoptive parents is also getting a boost to 12 weeks, up from six.
“As the landscape for working parents changes, it’s abundantly clear that there is no one-size-fits-all approach for the issues faced by parents who are balancing family with outside work every day,” Barbara Brickmeier, IBM vice president of employee benefits, said in a blog post.
IBM is the latest tech company to adopt a more liberal policy toward parental leave at a time when there’s more attention on the impact of having parents stay with their children longer during the early days of their development. In August 2015, Netflix launched paid, unlimited parental leave for a child’s first year. About three months later, Facebook said it would start giving four months of paid parental leave for new parents.
Over time, tech companies have also extended benefits to partners, as well as for families pursuing adoption or surrogacy.
Elsewhere in the world, the parental leave situation is pretty different. Of 193 countries in the United Nations, the US joins only Papua New Guinea, Suriname and a few South Pacific islands in not having a paid parental leave law. European Union countries offer a minimum of 16 weeks to new mothers. In the UK, mothers get 52 weeks, 39 of which are partially paid. Ireland offers 52 weeks, with 42 weeks partially paid, according to a 2016 study from Glassdoor.
And if you’re not quite at the point of starting a family, companies like Apple, eBay, Google and Intel offer egg freezing benefits.
In addition to upping the length of parental leave, IBM will also reimburse employees up to $20,000 for certain adoption or surrogacy expenses. IBM’s Special Care for Children Assistance Plan will also reimburse workers $50,000 for services for a child who might have mental, developmental or physical disabilities.
In 2015, IBM also rolled out a program where it would ship breast milk home if new mothers had to travel.
(Image: an illustration of genetic changes. Credit: NASA)
NASA’s “Twins Study” has found that space travel could result in genetic changes in humans.
The results show an increase during space travel in methylation, a chemical modification of DNA that helps switch genes on and off. The scientists say the experiment has yielded new information on how human gene activity changes during spaceflight. The research papers are expected to be published in 2018.
Chris Mason of Weill Cornell Medicine, the principal investigator of the Twins Study, said, “With this study, we’ve seen thousands and thousands of genes change how they are turned on and turned off. This happens as soon as an astronaut gets into space, and some of the activity persists temporarily upon return to Earth.”
The research was done on NASA’s twin astronauts Scott Kelly and Mark Kelly. Scott Kelly lived on the International Space Station (ISS) for a span of 340 days and returned to Earth on March 1, 2016, along with his Russian counterpart Mikhail Kornienko.
NASA has done a number of studies on the astronauts, including how their bodies adjusted to weightlessness, isolation, radiation and the stress of long-duration spaceflight.
Mark Kelly, meanwhile, made a 14-day voyage to the ISS as commander of the STS-134 space shuttle mission while Scott was commanding the station. The researchers collected samples from both twins for their study.
Chris Mason said, “This study represents one of the most comprehensive views of human biology. It really sets the bedrock for understanding molecular risks for space travel as well as ways to potentially protect and fix those genetic changes.”
Extensive studies of human physiology, behavioral health, microbiology/microbiome, and molecular-level genetics were conducted on the twins and on the samples collected from them.
Plants grown in microgravity under different combinations of red, green, blue, white and infrared light are also believed to show differences in growth and physiology. A plant’s genetics, along with its living conditions, can change it physically and functionally from previous generations. This line of study could help researchers choose plants for missions beyond Earth.
The research is expected to help scientists prepare for future missions to Mars. NASA has plans to send humans to Mars by 2030. Private space company SpaceX has announced plans to send humans to Mars by 2024 to start a settlement there. Several other space agencies and private companies also intend to establish settlements on Mars and in space.
IBM scientists say radical new ‘in-memory’ computing architecture will speed up computers by 200 times
New architecture to enable ultra-dense, low-power, massively-parallel computing systems optimized for AI
October 25, 2017
(Left) Schematic of conventional von Neumann computer architecture, where the memory and computing units are physically separated. To perform a computational operation and to store the result in the same memory location, data is shuttled back and forth between the memory and the processing unit. (Right) An alternative architecture where the computational operation is performed in the same memory location. (credit: IBM Research)
IBM Research announced Tuesday (Oct. 24, 2017) that its scientists have developed the first “in-memory computing” or “computational memory” computer system architecture, which is expected to yield 200x improvements in computer speed and energy efficiency — enabling ultra-dense, low-power, massively parallel computing systems.
Their concept is to use one device (such as phase change memory or PCM*) for both storing and processing information. That design would replace the conventional “von Neumann” computer architecture, used in standard desktop computers, laptops, and cellphones, which splits computation and memory into two different devices. That split requires moving data back and forth between memory and the computing unit, making computers slower and less energy-efficient.
The researchers used PCM devices made from a germanium antimony telluride alloy, which is stacked and sandwiched between two electrodes. When the scientists apply a tiny electric current to the material, they heat it, which alters its state from amorphous (with a disordered atomic arrangement) to crystalline (with an ordered atomic configuration). The IBM researchers have used the crystallization dynamics to perform computation in memory. (credit: IBM Research)
Especially useful in AI applications
The researchers believe this new prototype technology will enable ultra-dense, low-power, and massively parallel computing systems that are especially useful for AI applications. The researchers tested the new architecture using an unsupervised machine-learning algorithm running on one million phase change memory (PCM) devices, successfully finding temporal correlations in unknown data streams.
“This is an important step forward in our research of the physics of AI, which explores new hardware materials, devices and architectures,” says Evangelos Eleftheriou, PhD, an IBM Fellow and co-author of an open-access paper in the peer-reviewed journal Nature Communications. “As the CMOS scaling laws break down because of technological limits, a radical departure from the processor-memory dichotomy is needed to circumvent the limitations of today’s computers.”
“Memory has so far been viewed as a place where we merely store information,” said Abu Sebastian, PhD, exploratory memory and cognitive technologies scientist at IBM Research and lead author of the paper. “But in this work, we conclusively show how we can exploit the physics of these memory devices to also perform a rather high-level computational primitive. The result of the computation is also stored in the memory devices, and in this sense the concept is loosely inspired by how the brain computes.” Sebastian also leads a European Research Council funded project on this topic.
* To demonstrate the technology, the authors chose two time-based examples and compared their results with traditional machine-learning methods such as k-means clustering:
Simulated Data: one million binary (0 or 1) random processes organized on a 2D grid based on a 1000 x 1000-pixel black-and-white profile drawing of famed British mathematician Alan Turing. The IBM scientists then made the pixels blink on and off at the same rate, but the black pixels turned on and off in a weakly correlated manner. This means that when a black pixel blinks, there is a slightly higher probability that another black pixel will also blink. The random processes were assigned to a million PCM devices, and a simple learning algorithm was implemented. With each blink, the PCM array learned, and the PCM devices corresponding to the correlated processes went to a high conductance state. In this way, the conductance map of the PCM devices recreates the drawing of Alan Turing.
Real-World Data: actual rainfall data, collected over a period of six months from 270 weather stations across the USA in one-hour intervals. If it rained within the hour, the interval was labelled “1,” and if it didn’t, “0.” Classical k-means clustering and the in-memory computing approach agreed on the classification of 245 out of the 270 weather stations. In-memory computing classified 12 stations as uncorrelated that had been marked correlated by the k-means clustering approach. Similarly, the in-memory computing approach classified 13 stations as correlated that had been marked uncorrelated by k-means clustering.
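Below are loose software analogues of both examples, assuming simplified sizes, rates, and a toy update rule in place of the crystallization dynamics of real PCM cells. First, the simulated blinking processes: each process gets a “conductance” counter that is nudged upward whenever it is active during a population-wide burst, so processes that blink together drift high together.

```python
# Loose software analogue of the PCM correlation-detection experiment.
# Sizes, rates, and the update rule are simplified stand-ins.
import numpy as np

rng = np.random.default_rng(1)
n, steps = 1_000, 2_000
correlated = np.zeros(n, dtype=bool)
correlated[:100] = True           # the "black pixels": correlated processes

conductance = np.zeros(n)
for _ in range(steps):
    common = rng.random() < 0.5                     # shared hidden driver
    x = rng.random(n) < 0.5                         # independent blinks
    x[correlated] = rng.random(100) < (0.8 if common else 0.2)
    # Active devices get a "SET pulse" scaled by the population activity,
    # so devices that blink together accumulate conductance faster.
    conductance += x * (x.sum() / n)

print(conductance[correlated].mean(), conductance[~correlated].mean())
# The correlated group ends with a clearly higher mean conductance.
```

And a hedged sketch of the classical baseline named above: k-means clustering of stations into two groups based on their hourly rain records. The data is randomly generated, with a shared regional rain signal standing in for the real six-month record.

```python
# Hedged sketch of the k-means baseline: cluster 270 stations by their
# hourly binary rain records. The data is synthetic, not the real record.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
hours = 24 * 30 * 6                        # roughly six months, hourly
stations = np.zeros((270, hours))

regional = rng.random(hours) < 0.1         # shared regional rain signal
for i in range(270):
    if i < 120:                            # stations following the signal
        stations[i] = regional ^ (rng.random(hours) < 0.05)
    else:                                  # independent stations
        stations[i] = rng.random(hours) < 0.1

labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(stations)
print(np.bincount(labels))                 # two groups, near the 120/150 split
```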
Abstract of Temporal correlation detection using computational phase-change memory
Conventional computers based on the von Neumann architecture perform computation by repeatedly transferring data between their physically separated processing and memory units. As computation becomes increasingly data centric and the scalability limits in terms of performance and power are being reached, alternative computing paradigms with collocated computation and storage are actively being sought. A fascinating such approach is that of computational memory where the physics of nanoscale memory devices are used to perform certain computational tasks within the memory unit in a non-von Neumann manner. We present an experimental demonstration using one million phase change memory devices organized to perform a high-level computational primitive by exploiting the crystallization dynamics. Its result is imprinted in the conductance states of the memory devices. The results of using such a computational memory for processing real-world data sets show that this co-existence of computation and storage at the nanometer scale could enable ultra-dense, low-power, and massively-parallel computing systems.