http://www.ctvnews.ca/sci-tech/wax-worm-has-an-appetite-for-plastic-researchers-discover-1.3382571

Wax worm has an appetite for plastic, researchers discover

A plastic bag after being degraded by 10 worms for 30 minutes.

Spanish researchers have discovered that a worm often found in beehives is also capable of breaking down one of the most common forms of plastic.

Research scientist Federica Bertocchini, who works for the Spanish National Research Council, has discovered that wax worms are capable of biodegrading polyethylene, the tough stretchy plastic used to make shopping bags, plastic wrap and other things.

Bertocchini, who is also an amateur beekeeper, made the discovery quite by accident, while working among her bee hives.

She removed the worms, which are also known as honey worms, put them in a plastic bag and tied it up tight while she cleaned off the panels.

When she finished and returned to the bag, she found the worms had eaten holes through it and escaped. So she decided to do some investigating.

Bertocchini and a team began experiments exposing the worms to polyethylene plastic to see how efficient they were at breaking it down.

They found that 100 wax worms are able to biodegrade 92 milligrams of polyethylene, or about 0.003 ounces, in 12 hours, “which really is very fast,” says Bertocchini.
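
To put that figure in context, here is a rough back-of-the-envelope calculation in Python. It simply scales the reported numbers; the roughly 5.5 g mass of a typical polyethylene shopping bag is an assumption for illustration, not a figure from the study.

```python
# Scaling the reported figure: 100 worms degraded 92 mg of polyethylene in 12 hours.
worms, mg_degraded, hours = 100, 92, 12
rate_per_worm_per_hour = mg_degraded / worms / hours   # ~0.077 mg per worm per hour

# Assumption for illustration: a typical shopping bag weighs roughly 5.5 g.
bag_mg = 5500
hours_for_100_worms = bag_mg / (rate_per_worm_per_hour * worms)

print(f"{rate_per_worm_per_hour:.3f} mg per worm per hour")
print(f"~{hours_for_100_worms / 24:.0f} days for 100 worms to eat one bag")
```

On those assumptions, 100 worms would take roughly a month to consume a whole bag, which is indeed fast by the standards of a material that otherwise persists for decades.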

According to the research team, the composition of beeswax is similar to that of polyethylene, which may be why the worm has developed a mechanism to dispose of the plastic.

Bertocchini’s team is hoping the worm contains an enzyme that they could then isolate and produce synthetically on an industrial scale.

“In this way, we can begin to successfully eliminate this highly resistant material,” Bertocchini said in a statement.

The team’s research will be published in the next issue of Current Biology.

http://www.androidauthority.com/sennheiser-samsung-will-bring-ambeo-3d-audio-tech-android-766650/

Sennheiser and Samsung will bring Ambeo 3D audio tech to Android

This post was originally published on SoundGuys.com.

Have you ever heard of binaural audio? Don’t feel bad if you haven’t; it’s not exactly a household name. But let’s get you filled in.

Binaural audio is three-dimensional audio. Imagine VR, but for audio. If you close your eyes right now and have a friend move around the room while talking to you, you can pinpoint fairly well where they are in that room even though your eyes are closed. That’s because your ears, with the help of your brain, work together to detect three things: arrival time, loudness, and timbre.


If something happens to your right, your right ear will hear it first and then, a split second later, your left ear will hear it. Your brain can detect this delay, as well as how loud the sound is in each ear and the slight coloration it picks up as it bounces around the room. Once your brain puts all this information together, which happens almost instantly, you can tell where the sound is coming from.
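
For a sense of the numbers involved, here is a small Python sketch of my own (not from the article) that estimates the arrival-time gap between the two ears using Woodworth’s spherical-head approximation. The ~8.75 cm head radius and 343 m/s speed of sound are assumed values.

```python
import math

def interaural_time_difference(angle_deg, head_radius_m=0.0875, speed_of_sound=343.0):
    """Estimate the arrival-time gap between the ears (Woodworth's
    spherical-head approximation). angle_deg: 0 = straight ahead,
    90 = directly to one side."""
    theta = math.radians(angle_deg)
    return (head_radius_m / speed_of_sound) * (theta + math.sin(theta))

for angle in (0, 30, 60, 90):
    delay_us = interaural_time_difference(angle) * 1e6
    print(f"{angle:>2} degrees off-centre -> ~{delay_us:.0f} microseconds")
```

The largest gap, for a sound directly to one side, works out to roughly 650 microseconds, and that tiny delay is one of the cues binaural recordings try to preserve.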

So why am I telling you all this? Well, because if I simply said that Sennheiser, a distinguished German audio company, is working with Samsung to bring its binaural earbuds to Android, you wouldn’t know what that meant. But now you do!

Let’s be clear here though: you don’t need special earbuds to listen to binaural audio. You can do that here with your own headphones.

The Ambeo earbuds Sennheiser is working to bring to Android are for recording audio. They work by having microphones in the metal mesh that you see in the picture above.


Sennheiser announced the Ambeo Smart Surround earbuds at CES at the beginning of this year, but for iPhone only. Now the company wants to share the love with Android users as well. Andreas Sennheiser, the CEO of Sennheiser, told The Korea Herald “We are working (with) Samsung because we need credibility and we are doing this technology collaboration for compatibility.”

Although it isn’t clear yet, these earbuds should work with all Android devices, not just Samsung phones. Are you excited for this type of thing to come to Android?

https://www.cnet.com/news/could-spotify-make-a-smartwatch-fitness-band-or-other-music-wearables/

Spotify could make a smartwatch, fitness band … even earphones

Commentary: A job posting indicates new Spotify hardware is coming. Would it go on ears, wrist, or possibly eyes?

Music-streaming company Spotify is looking to debut dedicated hardware, according to a job posting noticed by tech blogger Dave Zatz. The “Senior Product Manager” would work on a “category-defining product” akin to the Pebble Watch, Amazon Echo and Snap Spectacles, a cached version of the posting reads.

The idea of Spotify-dedicated hardware makes a lot of sense, for more reasons than you might think. In fact, Spotify’s already been dipping its toes into wearable and fitness tech.

Spotify lives on smartwatches already

The Samsung Gear S3 has a Spotify app that streams music and was originally intended to store music locally for offline playback. An updated app for Google’s Android Wear platform was also supposed to offer more independent music streaming and downloading options, but hasn’t arrived yet. The Android Wear Spotify app is pretty basic, and just acts as a remote. But it shows Spotify has been exploring ideas already.

Pebble Core, the Spotify wearable Pebble never made. (Image: CNET)

Pebble almost made a Spotify product

Before Pebble was sold to Fitbit, the smartwatch company was also working on a standalone, GPS-enabled Android device called Pebble Core. Its big feature was supposed to be Spotify, streaming music and also playing back locally stored tracks for workouts. Pebble Core doesn’t exist, but the basic idea still could be interesting.

Lifebeam Vi, AI-enabled fitness headphones that already have some Spotify integration. (Image: Sarah Tew/CNET)

Spotify is already exploring music and fitness

The Lifebeam Vi, a pair of heart-rate-enabled fitness headphones I’ve been wearing over the last week, connects with Spotify to sync playlists for quick playback during workouts. According to Lifebeam CEO Omri Yoffe, whom I spoke with a couple of weeks ago, the longer-term goal is for Spotify tracks to adapt to your running pace and match your workout’s intensity. Spotify could be developing that type of idea — music that matches your workout — in a future fitness wearable. It would be a great idea and something that doesn’t exist right now.

Headphone + voice activated music makes sense

Consider Amazon Echo and Google Home. They’re speakers with music-forward functions that make the most of voice commands. Why not a Spotify-type device, or maybe a pair of AirPod-like headphones that could call up playlists automatically? If Spotify’s job listing is looking at voice services, an Echo-alike seems like an obvious direction. But maybe fitness headphones are a good fit, too, if Spotify’s able to develop headphones that could locally store music for runs and use a voice interface to interact.

Spotify glasses? (Please, I hope not)

Spotify’s job listing mentions Spectacles, but I sincerely hope that doesn’t mean music glasses. Instead, it’s likely that the idea of a dedicated wearable, tied into one app, is Spotify’s hardware goal. Much like Spectacles are a perfect extension of Snap, a Spotify band could be a wearable remote for Spotify’s music service. If someone can explain why something for music would be on my eyes, please do.

https://arstechnica.com/gadgets/2017/04/intel-optane-memory-how-to-make-revolutionary-technology-totally-boring/

Intel Optane Memory: How to make revolutionary technology totally boring

One day 3D XPoint could change the world. But not like this it won’t.

Intel Optane Memory. Engineering sample, but we hope it’s the same as retail hardware.

3D XPoint (pronounced “crosspoint,” not “ex-point”) is a promising form of non-volatile memory jointly developed by Intel and Micron. Intel claims that the memory, which it’s branding Optane for commercial products, provides a compelling mix of properties putting it somewhere between DRAM and NAND flash.

The first Optane products are almost here. For certain enterprise workloads, there’s the Intel Optane SSD DC P4800X, a 375GB PCIe card that offers substantially lower latency than comparable flash drives and can boast high numbers of I/O operations per second (IOPS) over a much wider range of workloads than flash. Intel isn’t letting reviewers actually use the P4800X, however; the first testing of the hardware, published earlier this week, was performed remotely using hardware on Intel’s premises.

For the consumer, there’s Intel Optane Memory. It’s an M.2 PCIe stick with a capacity of 16GB ($44) or 32GB ($77), and it should be on sale today. Unlike the P4800X, Intel is letting reviewers get hold of Optane Memory or at least something close to it: the part we received was branded “engineering sample,” with no retail branding or packaging. The astute reader will note that 16 or 32GB isn’t a whole lot of storage. Although the sticks can be used as conventional, if tiny, NVMe SSDs, Intel is positioning them as caches for spinning disks. Pair Optane Memory with a large cheap hard disk, and the promise is that you’ll get SSD-like performance—some of the time, at least—with HDD-like capacity.

Mysterious memory

Detailed descriptions from Intel of how Optane works are still notable by their absence—the company seems to have said more about what Optane isn’t than what it is—but a basic picture is slowly being built from what Intel and Micron have said about the technology. The memory has a kind of three-dimensional (hence “3D”) lattice structure (hence “XPoint”). Stackable layers have wires arranged in either rows or columns, and at the intersection of each row and column is the actual storage element: an unspecified material that can change its resistance to different values. The details of how it does this are unclear; Intel has said it’s not a phase-change material, and it’s different from HP’s memristor tech, but it hasn’t said precisely what it is.

Critically, the resistance change is persistent. Once a cell has had its value set, it’ll continue to hold its value indefinitely, even if system power is removed. While we don’t know how the resistance change works, one thing that we do know is that unlike DRAM, each data cell does not need any transistors, which gives rise to Optane’s next important property: it’s a lot denser than DRAM, with Intel and Micron variously claiming a density improvement of four to ten times.

The value stored in each data cell can also be written and rewritten relatively easily. NAND flash requires a very high voltage to erase each cell, which allows a cell to be written only once (flipping its value from a 1 to a 0) before it needs to be erased again. 3D XPoint cells, by contrast, can have their resistance (and hence stored value) updated between 1 and 0 and back again without needing any erasure step.

Optane and 3D XPoint memory are designed to blur the line between memory and storage. These new consumer drives are really only about storage, though. (Image: Intel)

To cope with the high voltages, which slowly damage NAND flash over time, and lack of rewritability, NAND is structured in a very particular way. It’s organized into pages of up to 4,096 bytes, with pages then organized into blocks of up to 512 kilobytes. Reads and writes are performed a page at a time, with erases happening not at page but at block granularity. This creates issues such as “write amplification,” where writing a single byte to a page could require an entire block to be erased and rewritten. Optane, however, can be read and written at (potentially) the granularity of a single bit. Eventually, Intel and Micron plan to offer DIMMs based on Optane to take advantage of this RAM-like granular access.
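
A toy model makes the write-amplification point concrete. The sketch below is my own illustration, assuming the worst case where the target block is entirely full of live data, and uses the 4,096-byte pages and 512 KB blocks mentioned above.

```python
PAGE_BYTES = 4096
BLOCK_BYTES = 512 * 1024  # 128 pages per block

def worst_case_write_amplification(user_bytes):
    """Bytes physically rewritten divided by bytes the user asked to write,
    assuming the whole block must be read, erased, and reprogrammed."""
    return BLOCK_BYTES / user_bytes

print(worst_case_write_amplification(1))           # a single byte: 524,288x
print(worst_case_write_amplification(PAGE_BYTES))  # a full page:   128x
```

Real controllers soften this with spare blocks and garbage collection, but the contrast with Optane’s (potential) bit-level rewritability is the point.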

Being storage-like, Optane Memory doesn’t offer bit-level access—it is organized into 512 byte “sectors” instead—but it nonetheless avoids the extreme write amplification of flash, and Intel claims that it has write endurance that’s perhaps ten times better than flash.

Optane is also a lot cheaper than RAM. While $77 would be a lot to pay for 32GB of NAND flash, it’s much less than you’d expect to pay for that amount of RAM.

In its server board, Intel is using Optane to offer a performance profile that flash doesn’t quite match. Flash SSDs can achieve very high numbers of IOPS, but to do this they tend to need large queue depths; that is to say, they need to have software that issues a whole lot of I/O operations concurrently, so the SSD can service them at least partially in parallel. Some drives need 32, 64, or even 128 I/O operations in flight at the same time to achieve their best numbers. The Optane P4800X can hit very high IOPS numbers without needing these deep queues, and its latency, the time taken to respond to each I/O operation, tends to be much lower than a comparable SSD. For certain kinds of server workload, this can be valuable, even in spite of the price premium that Optane commands over NAND flash.

Hybrids have been done before

In the consumer space, however, the Optane advantage is less obvious. The basic principle of hybrid drives is reasonable enough. Spinning magnetic disks have an enormous advantage in terms of absolute capacity and price per gigabyte, but we’re all familiar with their downside: relatively low transfer speeds and access times that are, in computer terms, epochal. A spinning disk can take tens of milliseconds to perform an I/O operation, orders of magnitude longer than SSDs can manage. Hybrid drives offer a kind of best-of-both-worlds compromise. The large spinning disk offers abundant capacity for rarely used and performance-insensitive data, and the small SSD acts as a cache, providing lightning-fast access to the files that get regularly used.

A number of manufacturers offer hard disks with flash embedded within them, offering an easy one-piece hybrid solution. Intel’s storage controllers built into its motherboard chipsets have also long offered a hybrid disk system, called SRT (“Smart Response Technology”). First introduced with the Sandy Bridge Z68 chipset in 2011, SRT allows more or less arbitrary pairings of SSD and spinning disk to be combined into hybrid disks (Apple’s “Fusion Drives” are conceptually similar but technologically unrelated).

For reasons that aren’t immediately obvious to me, Intel has always kept SRT gated. Naively, one would think that SRT’s widest appeal would be to low- and mid-range systems, where cost constraints make it infeasible to offer large quantities of SSD storage. But Intel feels the opposite. At its debut, it was only offered in the high-end Z68 chipset. In the chipset generations that followed Sandy Bridge, Intel did expand SRT availability to certain lower-end chipsets, but even today, the feature is not universal across the Kaby Lake chipset lineup. The company has five Kaby Lake chipsets, from Z270 at the high end, through Q270, H270, Q250, and B250 at the low end. Only Z270 and Q270 support SRT; the other three chipsets do not.

Typically, these hybrids (whether integrated or using SRT) offer substantial boosts to things like starting Windows and applications, but if your workload is diverse enough, their caching ability is curtailed. SRT is limited to a maximum of 64GB and Optane (currently) to 32GB. If your set of hot, regularly used programs and data fits inside 64 or 32GB, then it can all be expected to reside in the cache. But if you were to play a handful of large modern games, performance would become much more hard disk-like. The cache simply isn’t big enough to hold several 50GB games in their entirety, forcing the system to hit the spinning disk to load them.
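
The effect on average access time is easy to model. The sketch below is a simple weighted-latency illustration of my own; the 0.02 ms cache latency and 12 ms hard-disk latency are assumptions, not measured figures.

```python
def effective_latency_ms(hit_rate, cache_ms=0.02, hdd_ms=12.0):
    """Average latency of a hybrid as a weighted mix of cache hits and misses."""
    return hit_rate * cache_ms + (1 - hit_rate) * hdd_ms

for hit_rate in (1.0, 0.95, 0.5, 0.1):
    print(f"hit rate {hit_rate:>4.0%}: ~{effective_latency_ms(hit_rate):.2f} ms per access")
```

Even a 5 percent miss rate leaves the average dominated by the spinning disk, which is why a working set that spills out of a 32GB or 64GB cache quickly feels hard disk-like again.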

The hybrids also tend to do little to improve things like software installation time. Software installers won’t be cached (because in general you only use them once) and so all the reading the installer does (and the subsequent writing of the installed programs back to the disk) operates at hard disk speeds.

From a technical, functional perspective, Optane Memory hybrid drives appear to be substantially identical to SRT hybrid drives before them. The basic setup process is the same: the Optane NVMe stick is paired with a spinning disk as an accelerator. With SRT, the SSD could be configured as either a write-back cache (wherein writes are written to the SSD and only lazily flushed to the HDD at the system’s leisure) or a write-through cache (wherein writes are written to the SSD and HDD in parallel). Write-back mode offers acceleration of writes, as they can operate at near-full SSD speed, but comes with a risk: if the HDD and SSD are separated, the data on the HDD may be missing, stale, or corrupt, because the latest data to be written is found exclusively on the SSD. Write-through mode is slower, since writes can only happen at HDD speed, but means that the HDD always contains a complete, usable, up-to-date copy of all your information.
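
As a minimal sketch of the two policies (my own illustration, not Intel’s actual SRT or Optane implementation), the difference can be expressed in a few lines:

```python
class WriteThroughCache:
    """Writes go to the SSD and the HDD together, so the HDD always holds a
    complete, up-to-date copy; writes proceed at HDD speed."""
    def __init__(self, ssd, hdd):
        self.ssd, self.hdd = ssd, hdd

    def write(self, block, data):
        self.ssd[block] = data
        self.hdd[block] = data  # synchronous copy to the slow disk


class WriteBackCache:
    """Writes land only on the SSD and are flushed lazily, so writes are fast
    but the HDD may be stale until a flush happens."""
    def __init__(self, ssd, hdd):
        self.ssd, self.hdd = ssd, hdd
        self.dirty = set()

    def write(self, block, data):
        self.ssd[block] = data  # fast path: SSD speed
        self.dirty.add(block)

    def flush(self):
        # The step a write-back hybrid must perform before the drives
        # can safely be separated.
        for block in self.dirty:
            self.hdd[block] = self.ssd[block]
        self.dirty.clear()
```

Disabling Optane through the firmware or Intel’s utility is, in effect, a final flush before the pairing is dissolved.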

Optane appears to only offer write-back mode. If you want to split the drives up, you’ll first have to disable Optane through the system firmware or through Intel’s management utility. This flushes the cached data to the spinning disk, bringing it up to date. If you disconnect the hard disk without going through this process, the Optane will be marked as offline.

Intel’s infamous arbitrary limitations

But there is one difference between Optane and SRT that isn’t technical, and that’s compatibility. Unlike SRT, which is restricted only to high-end chipsets, Optane is available to every Intel chipset—just as long as it’s a Kaby Lake 200-series chipset paired with a Kaby Lake (7th generation Core) processor. This means that a chipset such as the low-end B250 will let you create a hybrid out of Optane and a hard disk but won’t let you create a damn-near identical hybrid out of a flash SSD and a hard disk.

There appears to be no particularly good reason for this; it’s simply that Intel was caught by conflicting demands. On the one hand it wants to keep SRT as a “high-end” feature (even though it’s the low-end and mid-range audience that stands to yield the most benefit from SRT). On the other hand, it wants to maximize the potential demand for Optane. And on the gripping hand, it wants to create an extra incentive to upgrade to Kaby Lake, as it would otherwise be only a minor refresh to Skylake; tying a supposedly desirable feature to Kaby Lake (and, eventually, newer CPUs and chipsets) helps create that incentive.

The review system Intel sent us to test Optane uses none other than the B250 chipset. Optane-enabled, certainly, but SRT-disabled. In its press presentation announcing Optane Memory, Intel made plenty of comparisons between an Optane hybrid and a plain HDD, and, naturally, Optane looked good. But surely the more relevant, significant comparison would be between an Optane hybrid and a (much cheaper) NAND flash hybrid. Alas, it is not a comparison I can make; I don’t have a Z270 or Q270 motherboard on hand.

One might well wonder why, of all the possible motherboards it could include in its review systems, Intel opted to pick one that made the obvious direct comparison impossible.

It performs like a hybrid disk

The Optane in the review system is paired with a 1TB Western Digital Black drive, with a 7,200 rpm spindle speed. This is a mid-range disk with generally decent performance and, I’d argue, a little better than what one might expect to see used in a bargain-basement B250 system (the 5,400 rpm WD Blue drives are considerably cheaper).

The Optane hybrid performed in much the way you’d expect of a hybrid disk. Because it’s a cache, the first time you do pretty much anything takes place at hard disk speeds, but, after repeating a task a few times, things settle down to cached Optane speed. The easiest I/O-intensive workload that most of us run into from time to time is rebooting Windows, and here the Optane was remarkable. Rebooting from the hard disk alone took an average of about 56 seconds from the moment I hit “reboot” to the moment the desktop appeared. With Optane enabled, this eventually settled down at a hair under 20 seconds. That’s a difference that’s very noticeable and very welcome.

Thing is, I’d expect a flash SSD-based hybrid using SRT to offer pretty much the same improvement. Flash SSDs were a lot slower back when Z68 hit the market, but even that first generation of SRT showed huge gains in boot time. I just can’t make the direct comparison because of Intel’s extreme product segmentation.

On a couple of occasions while rebooting, a strange progress screen popped up, too. I don’t know what provoked it exactly, and the picture I captured is not the best (it happened just after the firmware was finished, long before the print screen key does anything useful), but it looks as if something somehow upset the status of the hybrid, and it had to flush the cache or verify its integrity or something.

On a couple of occasions, this message appeared when booting the system. I’m not entirely sure what it’s doing, or what these phases are, or why the appearance seemed to be random.

Running CrystalDiskMark (a convenient front-end for Microsoft’s free and open source DiskSpd benchmarking tool) made the peculiar performance profile of a hybrid disk apparent: as the test data grows larger, performance becomes more and more hard disk-like. CrystalDiskMark tops out at 32GB of test data, so it’s only barely enough to overwhelm the 32GB cache Intel supplied, but even this was enough to show how performance degrades when non-cached data is being used. Although sequential performance remained admirably Optane-like, random read performance fell off substantially. Moving from a 1GB data set (fully cacheable) to a 32GB data set, random read performance fell from 200MB/s to 46MB/s, though random writes held steady.

What’s more, after performing this large test in CrystalDiskMark, reboot performance suffered—Windows itself was no longer cached, so it could no longer load quickly. A subsequent reboot or two fixed it, of course, by pulling Windows back into the cache.

Running CrystalDiskMark against the raw Optane (no hybrid) reinforced the findings of the P4800X reviews. In terms of sheer sequential read performance, it’s a little behind the 1TB Samsung 960 EVO. In terms of sequential write performance, it’s actually a long way behind the flash SSD (this could be because for some reason it prohibits Windows from enabling write caching; I’m not sure). But the Samsung needs high queue depths to really shine. With a queue depth of 32, the Samsung manages 630MB/s of random read performance. Cut the queue depth down to 1 and that drops to just 54MB/s. The Optane showed a similar 636MB/s with a queue depth of 32, but it only fell to 300MB/s with a queue depth of 1. Random reads with short queue depths—the perfect workload for a cache drive—are clearly a strength of Optane relative to flash.
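
Those queue-depth-1 figures imply per-operation latencies that can be estimated with Little’s law. The sketch below is my own back-of-the-envelope calculation and assumes CrystalDiskMark’s usual 4 KiB random-read block size, which the article does not state.

```python
BLOCK_BYTES = 4096  # assumed 4 KiB random-read block size

def implied_latency_us(throughput_mb_s, queue_depth=1):
    """Little's law: concurrency = throughput x latency,
    so latency = queue_depth / IOPS."""
    iops = throughput_mb_s * 1_000_000 / BLOCK_BYTES
    return queue_depth / iops * 1_000_000  # microseconds

print(f"Samsung 960 EVO at QD1: ~{implied_latency_us(54):.0f} us per read")
print(f"Optane Memory at QD1:   ~{implied_latency_us(300):.0f} us per read")
```

On those assumptions, that is roughly 76 microseconds versus 14 microseconds per read, which is the same low-queue-depth advantage the P4800X reviews highlighted.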

Narrow audiences

So here’s the thing. The 32GB Optane costs $77. The WD Black hard disk is $73 from Amazon right now. That’s $150 in total. For $139, Amazon is selling a 250GB Samsung 960 EVO. Clearly, 250GB is not as big as 1TB. If you genuinely need the space and you’re on a tight budget, maybe the hybrid is the way to go. But if 250GB is enough for your needs, the plain SSD is the better bet.

While I can’t use SRT in the B250 motherboard, I can use a regular NVMe SSD. I happen to have a 1TB 960 EVO on hand. The Windows reboot cycle takes about 20 seconds. Difference is, it always takes about 20 seconds. There’s no need to reboot a couple of times to prime the cache. Every read and every write is fast, because there’s no HDD at all, only the SSD. The 1TB model is a little quicker than its 250GB sibling, but the 250GB part isn’t slow. And unlike a hybrid drive—any hybrid drive—the SSD is always fast.

It’s not that the Optane hybrid doesn’t work; it does work. Of course it works. SRT is six years old now, and Optane Memory is basically the same thing. It works fine, and it works in the way I expected it to work. And I could see it making sense in situations where the cost differential is more significant. In fact, my own personal use case could fit; I have two 4TB spinning disks in a mirrored pair. I don’t like messing about with partitions or having Windows on an SSD and other things on spinning rust, because life’s too short to micromanage my storage in that way, and for the time being at least, 4TB of mirrored flash is out of my price range. So everything just goes onto the 4TB disks. Sticking a hybrid accelerator in front of that would make a lot more sense to me.

But I think this is a pretty unusual use case. Most people don’t have that much disk space and don’t need that much disk space. The 250GB SSD is going to provide a better experience (because it’s always fast, rather than only sometimes fast), and it’s going to do it at a lower cost.

And even if a hybrid is really the right option, is an Optane hybrid really going to offer any benefit over a flash SSD hybrid? I’d love it if Intel had provided the equipment to answer this definitively, and I’m more than a little suspicious that it didn’t. For $71 on Newegg, I can get a 128GB Intel NVMe SSD. Half of that would be wasted with SRT, because SRT is capped at 64GB of cache, but that’s still twice the cache size for a little less than the $77 for 32GB of Optane. Its random read performance won’t be as good as Optane’s, sure. Does it matter? Probably not, because it still beats the snot out of a spinning disk.

Optane is certainly a good cache, and I can believe that Optane may be a little better as a cache than flash. But I’m not convinced it’s sufficiently better to justify spending more money on an Optane cache rather than an SRT cache, and I’m wholly unconvinced that caching is the right approach for most users.

3D XPoint is interesting technology. One way or another, byte addressable storage and non-volatile RAM are likely to become mainstream, widespread technologies. They may use 3D XPoint; they may use one of the other competing technologies offering comparable properties. They’ll shake up operating system and software design when they do; we might have computers where RAM and the file system are one and the same thing, where even multiterabyte databases can be queried in “storage” rather than having to spool their data through RAM first. Our ultraportable laptops may start packing tens or hundreds of gigabytes of “RAM.” The specifics are hard to predict, but 3D XPoint, or something like 3D XPoint, is sure to open up all sorts of novel computing possibilities.

But Intel Optane Memory? It’s the most uninspiring use of this tech. Rather than showcasing the new capabilities that 3D XPoint brings to the table, it simply highlights how wretched Intel’s product segmentation is. It’s at best an incremental improvement over SRT, and, for the money, most people are probably going to be better off with a plain flash SSD than a hybrid disk anyway. 3D XPoint may yet turn out to be something good, perhaps even something world-changing. But this ain’t it.

https://www.forbes.com/sites/ewanspence/2017/04/24/where-can-i-buy-a-cheap-macbook-pro-with-touch-bar/#2e92d2cc44a2

Apple’s Hidden MacBook Pro With Touch Bar Discount

Apple’s launch last year of the new MacBook Pro range introduced a headline feature in the Touch Bar, but also a resetting of the price point. This tends to happen with any major shift in the MacBook Pro range: it keeps margins high, and as components become cheaper over the years, the price can drop while the profit remains.

It does leave consumers looking at a rather large early adopter ‘tax’, but now there’s a cheaper way to get started with Apple’s new design of MacBook Pro… and it is hidden deep in Apple’s own website.

Apple CEO Tim Cook (R) previews a MacBook Pro during a product launch event at Apple headquarters in Cupertino, California (Photo: Josh Edelson/AFP/Getty Images)

The secret is to head to the refurb section of the online Apple Store. There has been a constant turnover of older Mac machines on this page for many years. The machines generally come with a $200-$300 discount depending on the model, but Apple will still offer a twelve-month warranty and the option to purchase Apple Care for the same price as those who buy their MacBooks new.

This weekend saw the MacBook Pro with Touch Bar arrive in the refurb section in various configurations. The ‘entry-level’ model with an i5 processor, 8 GB of RAM and 256 GB of flash storage is priced at $1,529 compared to $1,799 for a brand new model. If you bump the RAM up to 16 GB, the macOS-powered laptop is currently available for $1,699 compared to the new price of $1,999.

Apple’s refurb section is not a ‘built to order’ section and is dependent on stock levels. While the Touch Bar-enabled Macs are very much on the cutting edge of macOS hardware, the price has been eye-watering for those looking to join in Apple’s slowly evolving concept of the laptop. The addition of these machines to the refurb store (and the associated peace of mind of a full warranty and the option of Apple Care) should help increase the adoption of the new technology.

Apple needs the Touch Bar to become established as quickly as possible. Much like 3D Touch in iOS, unless developers can be confident that the Touch Bar is present, it can only be used as a secondary control element, and its potential will remain trapped.


https://futurism.com/kurzweil-by-2030-nanobots-will-flow-throughout-our-bodies/

Kurzweil: By 2030, Nanobots Will Flow Throughout Our Bodies

Futurist Dave Evans shares his thoughts about the future of human-machine interaction. In an interview with James Bedsole, Evans explained what he thought of Ray Kurzweil’s prediction of nanobots in the body by 2030.

Ray Kurzweil, Google’s director of engineering, is a well-known futurist who seems to have a penchant for accurate predictions. Most recently, he has again reiterated his prediction that the so-called technological singularity will happen by 2045. For Kurzweil, this doesn’t translate to an end-of-the-world-as-we-know-it scenario courtesy of artificially intelligent (AI) machines. Rather, it means human beings will become powered by machines.

Kurzweil believes that, as part of this human-machine melding, nanobots will inhabit our bodies by the 2030s. While flowing through our arteries, these microscopic robots would keep us healthy and connect our brains to the cloud.

Another futurist, Dave Evans, founder and CTO of Silicon Valley stealth startup Stringify, gave his thoughts about Kurzweil’s nanobot idea in a February interview with James Bedsole.

Evans explained that he thinks such a merging of technology and biology isn’t at all farfetched. In fact, he described three stages as to how this will occur: the wearable phase (where we are today), the embeddable phase (where we’re headed, with neural implants and such), and the replaceable phase.

Does Evans agree with Kurzweil’s idea of nanobots flowing inside our bodies? Check out the rest of his answer in the video embedded here.

http://appleinsider.com/articles/17/04/24/on-its-2nd-anniversary-apple-watch-settling-into-role-as-fitness-notification-wearable-with-siri-apple-pay

On its 2nd anniversary, Apple Watch settling into role as fitness & notification wearable with Siri, Apple Pay

Originally pitched as a multitude of things, including an intimate communication tool and new frontier for mobile apps, the Apple Watch has been refined and simplified in the years since its debut, focusing on what Apple has determined to be the fledgling device’s core strengths.

Any new platform and product category takes time to find its footing — the first iPod was Mac-only, the first iPhone shipped without an App Store, and the first iPad required activation through iTunes before it could even be used.

The Apple Watch has been no different.

Finding what works

In the last two years, Apple has quickly learned that the less you need to interact with your wearable device, the better.

Accordingly, Apple has shown a willingness to go back to the drawing board with the Apple Watch, refining what works and eschewing what does not. Those changes have helped sales of the Apple Watch to grow, reaching a new record in the holiday 2016 quarter.

Features that work without much — or any — physical input have proven to be the “killer apps” for Apple Watch. That includes hands-free “Hey Siri,” double-clicking the side button for Apple Pay, and quick glances at iPhone notifications.

The Apple Watch also works well as a fitness and health device because, again, it handles most of the capabilities on its own. Closing out Activity rings is an automatic process that resets itself every day. Heart rate is automatically measured and saved to the Health app throughout the day.

 

Hardware pivots to more health, less fashion

Though the Apple Watch remains largely the same externally, Apple has made some tweaks on that front too. While the first-generation “Edition” model came in gold and was priced over $10,000, Apple reversed course with the second-generation model last September, switching to a more affordable white ceramic version that carries a starting price of $1,249.

Apple has also partnered with Nike on specialized versions of its watch, though the hardware is essentially identical to the standard Series 2 models. This week, the partnership with Nike will expand with a new NikeLab model set to go on sale Thursday with a new watch band color.

The partnership with Nike, in some ways, summarizes where the Apple Watch has succeeded, and where the original vision fell short.

Inside, Apple made the device far more capable with the Series 2 model thanks to a new dual-core S2 processor. Once again, this addresses some of the slowness with the original model’s hardware and software. The focus is yet again a device that is both faster and easier to use.

 

Rapid revisions and progress

The Apple Watch was first announced in September of 2014, but didn’t find its way onto the wrists of consumers until April 24 of 2015. While launch-day hardware is still supported two years later, the software has changed considerably.

At launch, third-party apps for Apple Watch did not run natively, instead offloading the processing requirements to a connected iPhone. This led to apps that loaded extremely slowly — something Apple rectified quickly, adding support for native apps with watchOS 2 and mandating that apps go native by June of 2016.

The platform was shaken up even further last September with the launch of watchOS 3, which completely changed the function of the hardware side button. While the button was previously used for sending Digital Touch scribbles to contacts, Apple eventually realized that the feature was not widely used, let alone enough to justify a dedicated button.

With watchOS 3, the Apple Watch side button changed to open a new app dock, allowing quick access to — and background loading of — frequently used apps. The new dock also brought about the end of the swipe-up “Glances” view, instead bringing a familiar Control Center to that gesture.

Third-party apps on the Apple Watch simply have not taken off in the way they did on iOS. A wearable display is too small for most apps to be useful.

The honeycomb home screen of apps gets the job done, but it’s not quick or easy to use, which led to the app dock and a push for more watch face complications in watchOS 3. The core focus for Apple since the first watch launched has been speed — both software and hardware updates have emphasized a device that not only runs faster, but allows users to access what they need faster.

 

Apple Watch, year 3 and beyond

As the Apple Watch passes its second birthday, expect a new update this fall that could add an LTE radio for independent wireless connectivity. Undoubtedly a new model will also be faster, and offer equal-or-better battery life.

Little is known about a hypothetical “watchOS 4,” but if the Apple Watch does gain cellular capabilities, it’s likely that the platform will start to become less dependent on a tethered iPhone.

Beyond speed and connectivity, other logical hardware upgrades would include an always-on display, battery life that gets well over a day’s worth of use, and thinner and lighter designs. It’s unclear if and when the technological advances needed for those kinds of breakthroughs might be coming down the pike.

Some users have also clamored for new form factors for the Apple Watch — specifically, a round display option in addition to the current square panel. Doing so could help the device appeal to customers who prefer traditional, round watch faces.

However, there haven’t been any rumors of a forthcoming round Apple Watch. And given Apple’s migration away from fashion and toward health, fitness, and ease of use, it seems unlikely that a round-screened model will arrive anytime soon.

As the Apple Watch has evolved over its first 24 months, Apple’s focus has remained on simplicity and speed. Expect future updates to make it even easier and more convenient to interact with the device — helping to make it an even more integral part of Apple’s ecosystem.

http://www.kurzweilai.net/elon-musk-wants-to-enhance-us-as-superhuman-cyborgs-to-deal-with-superintelligent-ai

Elon Musk wants to enhance us as superhuman cyborgs to deal with superintelligent AI

April 21, 2017

(credit: Neuralink Corp.)

It’s the year 2021. A quadriplegic patient has just had one million “neural lace” microparticles injected into her brain, the world’s first human with an internet communication system using a wireless implanted brain-mind interface — and empowering her as the first superhuman cyborg. …

No, this is not a science-fiction movie plot. It’s the actual first public step — just four years from now — in Tesla CEO Elon Musk’s business plan for his latest new venture, Neuralink. It’s now explained for the first time on Tim Urban’s WaitButWhy blog.

Dealing with the superintelligence existential risk

Such a system would allow for radically improved communication between people, Musk believes. But for Musk, the big concern is AI safety. “AI is obviously going to surpass human intelligence by a lot,” he says. “There’s some risk at that point that something bad happens, something that we can’t control, that humanity can’t control after that point — either a small group of people monopolize AI power, or the AI goes rogue, or something like that.”

“This is what keeps Elon up at night,” says Urban. “He sees it as only a matter of time before superintelligent AI rises up on this planet — and when that happens, he believes that it’s critical that we don’t end up as part of ‘everyone else.’ That’s why, in a future world made up of AI and everyone else, he thinks we have only one good option: To be AI.”

Neural dust: an ultrasonic, low power solution for chronic brain-machine interfaces (credit: Swarm Lab/UC Berkeley)

To achieve this, Neuralink CEO Musk has met with more than 1,000 people, narrowing it down initially to eight experts, such as Paul Merolla, who spent the last seven years as the lead chip designer at IBM on its DARPA-funded SyNAPSE program to design neuromorphic (brain-inspired) chips with 5.4 billion transistors (each chip with 1 million neurons and 256 million synapses), and Dongjin (DJ) Seo, who while at UC Berkeley designed an ultrasonic backscatter system, called neural dust, for powering and communicating with implanted bioelectronics that record brain activity.*

Mesh electronics being injected through sub-100 micrometer inner diameter glass needle into aqueous solution (credit: Lieber Research Group, Harvard University)

Becoming one with AI — a good thing?

Neuralink’s goal is to create a “digital tertiary layer” to augment the brain’s current cortex and limbic layers — a radical high-bandwidth, long-lasting, biocompatible, bidirectionally communicative, non-invasively implanted system made up of micron-size (millionth of a meter) particles communicating wirelessly via the cloud and internet to achieve super-fast communication speed and increased bandwidth (carrying more information).

“We’re going to have the choice of either being left behind and being effectively useless or like a pet — you know, like a house cat or something — or eventually figuring out some way to be symbiotic and merge with AI. … A house cat’s a good outcome, by the way.”

Thin, flexible electrodes mounted on top of a biodegradable silk substrate could provide a better brain-machine interface, as shown in this model. (credit: University of Illinois at Urbana-Champaign)

But machine intelligence is already vastly superior to human intelligence in specific areas (such as Google’s Alpha Go) and often inexplicable. So how do we know superintelligence has the best interests of humanity in mind?

“Just an engineering problem”

Musk’s answer: “If we achieve tight symbiosis, the AI wouldn’t be ‘other’  — it would be you and with a relationship to your cortex analogous to the relationship your cortex has with your limbic system.” OK, but then how does an inferior intelligence know when it’s achieved full symbiosis with a superior one — or when AI goes rogue?

Brain-to-brain (B2B) internet communication system: EEG signals representing two words were encoded into binary strings (left) by the sender (emitter) and sent via the internet to a receiver. The signal was then encoded as a series of transcranial magnetic stimulation-generated phosphenes detected by the visual occipital cortex, which the receiver then translated to words (credit: Carles Grau et al./PLoS ONE)

And what about experts in neuroethics, psychology, law? Musk says it’s just “an engineering problem. … If we can just use engineering to get neurons to talk to computers, we’ll have done our job, and machine learning can do much of the rest.”

However, it’s not clear how we could be assured our brains aren’t hacked, spied on, and controlled by a repressive government or by other humans — especially those with a more recently updated software version or covert cyborg hardware improvements.

NIRS/EEG brain-computer interface system using non-invasive near-infrared light for sensing “yes” or “no” thoughts, shown on a model (credit: Wyss Center for Bio and Neuroengineering)

In addition, the devices mentioned in WaitButWhy all require some form of neurosurgery, unlike Facebook’s research project to use non-invasive near-infrared light, as shown in this experiment, for example.** And getting implants for non-medical use approved by the FDA will be a challenge, to grossly understate it.

“I think we are about 8 to 10 years away from this being usable by people with no disability,” says Musk, optimistically. However, Musk does not lay out a technology roadmap for going further, as MIT Technology Review notes.

Nonetheless, Neuralink sounds awesome — it should lead to some exciting neuroscience breakthroughs. And Neuralink now has 16 San Francisco job listings here.

* Other experts: Vanessa Tolosa, Lawrence Livermore National Laboratory, one of the world’s foremost researchers on biocompatible materials; Max Hodak, who worked on the development of some groundbreaking BMI technology at Miguel Nicolelis’s lab at Duke University; Ben Rapoport, Neuralink’s neurosurgery expert, with a Ph.D. in Electrical Engineering and Computer Science from MIT; Tim Hanson, UC Berkeley post-doc and expert in flexible electrodes for stable, minimally invasive neural recording; Flip Sabes, a professor at the UCSF School of Medicine and expert in cortical physiology, computational and theoretical modeling, and human psychophysics and physiology; and Tim Gardner, Associate Professor of Biology at Boston University, whose lab works on implanting BMIs in birds to study “how complex songs are assembled from elementary neural units” and learn about “the relationships between patterns of neural activity on different time scales.”

** This binary experiment and the binary brain-to-brain (B2B) internet communication system mentioned above are the equivalents of the first binary (dot–dash) telegraph message, sent May 24, 1844: “What hath God wrought?”

http://www.postandcourier.com/business/how-to-listen-to-everything-amazon-echo-has-ever-heard/article_5b16b66c-2503-11e7-8aff-b7ab48c660ba.html

How to listen to everything Amazon Echo has ever heard

If you own an Amazon Echo, you probably know its strange secret. The device records a lot of what you say. Deep inside that dark tower, Echo keeps a vast trove of recordings. Your voice is preserved. Your friends’ voices are preserved. Anyone who has ever been to your house and said, “Alexa!” has contributed to its great library of human sound.

On the upside, this amazing technology puts instant information a voice command away. Most people have no idea that you can do much more than get the latest weather or listen to your favorite tunes. Go to http://tinyurl.com/gugnkht for a list of Alexa commands that you’re probably not using but should.

The downside is that Amazon stores an audio recording of every voice command you’ve issued to Alexa, not just in the device itself, but on Amazon’s servers.

Most owners feel a little weird about these voice recordings. What does Amazon plan to do with what I say? Will someone break into Alexa and hack my voice? Can law enforcement access my recordings? Is Amazon going to use these sounds files for some dastardly plan?

First, let’s address why the device stores your voice in the first place. In brief, Alexa wants to obey your every command. But no matter how lifelike Alexa may be, you are still a human being talking to a machine, which has no intuition.

Tip within a tip: Go to http://tinyurl.com/m5r8vu4 for ways to control your home with Amazon Alexa and your voice.

For the software to learn, it must adapt to your style. Alexa is designed to figure out your particular style of speaking. Some people mumble, and others have thick accents. Gradually, Echo’s technology takes this into account and gets better at understanding you.

Is Echo always listening? The short answer is yes. Alexa is activated when it detects one of its wake words, which are “Alexa,” “Amazon,” “Computer,” or “Echo.” You’ll know that the device is ready for a command when the outer ring at the top glows blue.

According to Amazon, a fraction of a second of audio before the wake word is stored along with each recording. So if you’re having a conversation and say something like, “I love that song! Let’s listen to it. Alexa, play the Coldplay song ‘Viva La Vida,’” then Alexa may keep the words “listen to it.”
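
That “fraction of a second before the wake word” works like a rolling pre-roll buffer. The sketch below is a generic illustration of that pattern, not Amazon’s actual implementation; the 16 kHz sample rate and half-second window are assumptions.

```python
from collections import deque

class PreRollBuffer:
    """Keep only the most recent ~0.5 s of audio samples in a ring buffer."""
    def __init__(self, sample_rate_hz=16000, seconds=0.5):
        self.samples = deque(maxlen=int(sample_rate_hz * seconds))

    def push(self, chunk):
        self.samples.extend(chunk)  # oldest samples silently fall off the end

    def snapshot(self):
        # When the wake word is detected, this pre-roll can be prepended
        # to the recording that is sent onward.
        return list(self.samples)
```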

Listen

Surprise! You can access all those recordings and listen to every command you’ve ever given since first installing the Echo in your house. Crazy, right?

I’m guessing that most people don’t realize they have this ability. But indeed, you can review your voice log with the Alexa app on iOS and Android; the app also lets you scroll through your activity and listen to each recording.

I was surprised that some of my recordings had nothing to do with giving Alexa commands. There was me talking on my phone about my old studios that I was selling. Alexa also recorded portions of a presidential television debate. I am not sure why my real estate phone call was recorded. But one of the candidates almost said the word “Alexa.”

If you’d like to review your old recordings, pull up the Alexa app and visit the History section of the Settings menu. Tap on the entry you’d like to review in greater detail and tap the Play icon to listen to the recording. Given the hundreds or thousands of commands most Echo users accumulate, you have a chronicle of requests spoken in your actual voice.

How to delete

If the thought of your recorded requests and other things you might have said stored in a database creeps you out, you can delete them. You need to remove the associated entry of each recording on the Alexa app. Here are the steps to delete recordings:

  • Open the Alexa app and go into the Settings section.
  • Select History and you’ll see a list of all the entries.
  • Select an entry and tap the Delete button.

But what happens if you want to delete all those recordings? Do you have to find each one and manually remove it? That could take days.

Amazon allows you to remove everything with one click. Just visit the “Manage Your Content and Devices” page at www.amazon.com/mycd.

Keep in mind that Amazon warns, “deleting voice recordings may degrade your Alexa experience.”

Keep it from listening

The gravity of this listening reality was brought to light in the murder trial of a man named James Bates. In February 2016, James Bates was officially charged with the murder of Victor Collins. Amazon was served a warrant requesting the audio files from Bates’s Amazon Echo. But Amazon fought back.

Amazon believed complying with such a warrant could violate consumers’ rights to privacy. Why, then, did Amazon ultimately hand over the audio files? In an unexpected twist, Bates provided consent for the police to review the Echo’s data. Because of this abrupt change, we’ll never know how Amazon would have defended itself in its fight to protect its customers’ private data. Go to http://tinyurl.com/k3mzj3a for additional details about the murder case and Amazon’s actions.

If you want to prevent Echo from listening to you, switch off Echo’s microphone. There is an on/off button on the top of the device, and whenever the button is red, the mic is off. To reactivate it, just press the button again.

Muting the mic will stop Echo from listening. However, disabling the mic will also defeat the purpose of these virtual assistants. Yes, Echo provides some pretty great sound quality, especially for a Bluetooth speaker, but you won’t be able to deliver commands.

So even if you love your Echo, allow yourself a bit of privacy. Alexa is great to have around, but sometimes, three’s a crowd.

http://www.livemint.com/Industry/M4mOPWRf0L9cSH0OBWpniL/Apple-selfdriving-car-testing-plan-gives-clues-to-tech-prog.html

Apple self-driving car testing plan gives clues to tech programme

Apple included a 10-page training plan that appeared to be related to operators taking back manual control of the self-driving car during automated exercises of the system

San Francisco: Apple Inc. outlined a plan to train operators of self-driving cars in documents submitted to California regulators earlier this month, the latest clues to the company’s autonomous vehicle technology aspirations.

Apple was granted a permit to test self-driving cars on 14 April by the California department of motor vehicles but the company has never said anything about its plan.

The state released 41 pages of Apple application documents to Reuters that give some clues about the company’s highly secret self-driving effort, which it has never openly acknowledged.

The iPhone maker joins a long list of carmakers, start-ups and technology rivals, including Alphabet’s Waymo, that are testing cars on state roads. Apple is looking for new hit products and autonomous car technology is expected to revolutionize the traditional auto industry.

As part of the application, Apple included a 10-page training plan that appeared to be related to operators taking back manual control of the car during automated driving exercises of the system, which it calls a development platform.

Apple declined comment beyond the filing.

The plan includes a document called “Automated System: Development Platform Specific Training Overview” whose objective is “to train safety drivers in various automated driving conditions”. “Development platform will be controlled electronically (e.g. joystick) and safety drivers must be ready to intervene and take control,” the document reads.

The document highlights different scenarios to be tested, from high speed driving and tight U-turns to lane changes. One letter sent from Apple to the state department of motor vehicles noted that Apple’s development platform “will have the ability to capture and store relevant data before a collision occurs”.

The document does not include detail on how Apple’s self-driving platform actually works or other technical details. It also does not say what kind of sensors are found on Apple’s three permitted vehicles, all 2015 Lexus RX450h models. The permit does not necessarily mean that Apple itself is building a full car. Apple could instead be designing a self-driving platform that can be integrated into other manufacturers’ cars. Reuters