http://www.autoblog.com/2017/04/19/world-of-watson-enters-your-car/

The World of Watson enters your car

IBM’s Bret Greenstein explains how automobiles will find their voice.

After experiencing all of the varied communication systems – visual, audio, written, spoken, tactile, light-emitted, emotionally responsive – embedded into the Toyota Concept-i unveiled at CES this year, we came to the conclusion that we may have entered an automotive Age of Babel. As we spend more time in our cars, and as we become accustomed to always having fingertip access to the world’s fount of knowledge (and crowd-sourced falafel recommendations), expectations for what our vehicles should be able to do have outpaced our capacity to fluidly and safely process them.

We’re not the only ones who feel this way. General Motors, Ford, and BMW agree that new modes of information organization are required. That’s why they’re partnering with IBM, among others, to work on solutions based on artificial intelligence (AI) systems that can interpret and respond to natural voice-based commands. IBM has deep roots in this field, having been developing its proprietary system, Watson, since early in this century. (Remember when it competed on, and won, Jeopardy!?)

Why is this the way to go? Because, hype and hyperbole aside, cars can’t drive themselves, and they probably won’t be able to in any meaningful way for some time. And natural speech processing is probably the only input and output mechanism that affords the opportunity to process the quantity of information, and offer the kind of assistance, that occupants currently desire – without looking away from the road.

Bret Greenstein, IBM’s vice president for Watson Internet of Things and a 28-year veteran of the company, concurs. “I think in cars, driver augmentation is certainly more practical than self-driving in the short term,” he says. “Though I think it’s inevitable that there will be a moment when it is safer for a computer to drive than a human.”

In the nebulous interim before that future arrives, how will a natural voice-activated driver augmentation system function, and how does it work? The basic idea is to teach (and keep teaching) a computer how to recognize meaning in the spoken word. That means not just the capacity for the machine to comprehend and process a person’s individual words, and what they represent when strung together, but also a means for the AI to recognize and respond to deeper context.

“Most people focus on the tech, and that misses the point,” Greenstein says. “AI is really much more about understanding people’s intent, goals, and objectives, which requires not just listening for commands for the car to do this and that.”


“The goal is to make Watson comfortable, informative, and not invasive. But part of that is deciding what your purpose is.” – Greenstein

Greenstein points to his lab’s work on the deciphering of higher-level speech-embedded meaning – like that affiliated with personality, tone, style, and emotion – and tailoring an individualized AI interface that takes all of this into account. “We have a service called Watson Tone Analyzer that’s based on choice of words,” he says. “Some people are assertive or aggressive in their tone, others are more laid-back, and they want a user experience based on that.”

Similarly, Watson has another service called Personality Insights, which can decode and respond to your word choice, the speed or volume of your voice, or your level of intensity.

“The goal is to make Watson comfortable, informative, and not invasive. But part of that is deciding what your purpose is,” Greenstein says. “If you’re a security alarm, you have to be urgent and quick. In other settings, you want something calmer.”
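To make that idea concrete, here is a minimal Python sketch of how a tone profile (of the kind a service like Watson Tone Analyzer returns) might be mapped to a response style for an in-car assistant. The tone names, thresholds, and style labels are assumptions for illustration, not IBM’s actual categories or logic.

```python
# Illustrative sketch: map a tone profile to a response style for an in-car
# assistant. The tone names, thresholds, and styles here are assumptions made
# for this example, not Watson's actual categories or logic.

def choose_response_style(tones, context):
    """Pick a delivery style from a tone profile and the current context."""
    if context == "security_alert":
        return "urgent"                 # alarms should be quick and direct
    if tones.get("anger", 0.0) > 0.5 or tones.get("frustration", 0.0) > 0.5:
        return "calm_and_brief"         # de-escalate for a stressed driver
    if tones.get("assertiveness", 0.0) > 0.6:
        return "direct"                 # match a driver who wants short answers
    return "conversational"             # default, more relaxed delivery

# Hypothetical tone scores for a spoken command:
print(choose_response_style({"anger": 0.7, "assertiveness": 0.2}, "navigation"))
# -> calm_and_brief
```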

And herein lies the greatest challenge of a system like this. Automotive designers have been working in the interface paradigm of buttons, screens, and touch for decades. The design of voice-based systems does not necessarily draw on the same skills. “There’s an elegance to a properly designed voice interface, an efficiency of feedback,” Greenstein says of the softer, more anthropological science of dialogue. “Not long, drawn-out responses, but the ability to provide alerts and notifications without being intrusive, while being informative at the right time.”

Expect to see Watson rolling into dealerships, or “mobility” services, very soon. “With Local Motors, they have certain commercial deals that they’re working on, certain products, so we’ll see it in Local Motors this year,” Greenstein says. “We have a formal relationship with BMW for dealing with a cognitive assistant in a vehicle. And [GM CEO] Mary Barra announced that Watson will be in GM vehicles with OnStar in 2017, where it will act in addition to the OnStar operator.”

Like many current infotainment systems, the cognitive assistant will operate via cellular data services. But the cars that feature them will include multiple antennas. Greenstein says that the upcoming BMW integration he mentioned will feature six or seven different cell radios, necessary since most of the data exists in the cloud.

“As coverage gets better, but not perfect, there are still gaps. But just like GPS systems, there’s some data that’s cached and useful. And there are other things where you must be live, so it’s inconvenient to be out of range,” Greenstein says. Simple tasks like using a voice command to change your radio station can be accomplished without cell access. But most of the systems’ value comes from context and other data. Since advanced driver assistance features now rolling out require constant connectivity to function properly, Greenstein feels confident that any gaps will soon be filled.
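As a rough sketch of that split between cached, local functionality and cloud-dependent features, the following Python routing logic shows the general shape of the idea; the command names and the local/cloud division are assumptions for illustration, not how Watson or any automaker actually partitions features.

```python
# Rough sketch of splitting voice commands between cached, local handling and
# the cloud. The command list and the local/cloud split are illustrative
# assumptions, not how Watson or any carmaker actually partitions features.

LOCAL_COMMANDS = {"change_station", "adjust_volume", "set_fan_speed"}

def handle_command(command, connected):
    if command in LOCAL_COMMANDS:
        return "handled locally: " + command
    if connected:
        return "sent to cloud assistant: " + command
    return "out of range: queued until connectivity returns"

print(handle_command("change_station", connected=False))       # works offline
print(handle_command("find_nearby_falafel", connected=False))  # needs the cloud
```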

For Greenstein, these challenges – and those of information overload – feel surmountable. This is based in part on his decades-long work with a tech-forward company like IBM, and thus his understanding that our ability to relate to technology is contextual and transient, and subject to change.

“We all have a different tolerance for this stuff,” he says. “And we’re all adjusting. Two years ago, or four years ago, you looked like an idiot talking to your house to try and turn on the lights. But now, it’s fine.”

http://www.businessinsider.com/tesla-model-3-design-different-chevy-bolt-2017-4

The designs of the Tesla Model 3 and the Chevy Bolt are completely different

The sensible Bolt. Photo: Hollis Johnson

The Tesla Model 3 and the Chevy Bolt are both long-range, all-electric cars for the people — each is about $35,000 to $37,000 before tax credits — but they embody very different design philosophies.

The Model 3, expected to launch later this year, is the handiwork of Tesla’s chief designer, Franz von Holzhausen. When it was unveiled in March 2016, it set a slightly new direction for the carmaker. The front fascia, for example, lacked any conventional automotive cues, such as a grille — an unnecessary element, of course, because Teslas don’t need to inhale air to burn gasoline.

The Tesla Model S and Model X would adopt this new language. In any case, the Model 3 continued Tesla’s tradition of making its cars look sleek, fast, and sexy. The Model 3 might be for the mass market, but it evokes the luxury EVs that Tesla is selling.

The sexy Model 3. Photo: REUTERS/Joe White/File Photo

The Bolt is something else altogether: an electric car that aims for practicality over sex appeal, while still serving up some tasty performance. (You can read our review here.) Designer Stuart Norris and the South Korea-based studio made sure the Bolt was roomy inside and provided good cargo capacity.

The Bolt looks much more like an everyday five-door hatchback. In this sense, it’s a throwback to some earlier ideas about alternative-fuel vehicles. Think of the Toyota Prius when it arrived — nobody would have called it beautiful. Its appearance advertised its virtue.

It’s sexy versus sensible, then, when you put the Bolt and the Model 3 side by side. (By the way, we’re talking about the preproduction version of the Model 3. The real deal won’t enter our field of vision for a few more months.)

Here’s an annotated examination of both vehicles.


https://www.engadget.com/2017/04/20/amazon-echo-google-g-suite-calendar-support/

Amazon’s Echo can manage your Google calendar for work

That’s a trick that even Google Home can’t do yet.

The Echo has been out on the market for much longer than Google Home, but we’re still surprised by the latest trick that Amazon’s voice-controlled speaker just picked up. As of today, you can integrate your professional G Suite calendar with the Echo. Once you set up your G Suite account in the Echo app, you can ask Alexa to add events to your calendar or read your agenda. Alexa already works with Gmail, Outlook, and Office 365 calendars as well.

The surprising thing here is that Google Home doesn’t yet work with G Suite accounts, nor does it even let you add events to your Google Calendar. That gives the Echo a distinct advantage over Google Home, at least in terms of how it manages your daily info. Home can read out your daily agenda, so it’s not a complete bust, but being able to add items to your calendar feels like the kind of thing that should have been there on day one.

The good news is that Google’s been adding features to Home pretty rapidly — earlier today, the company released an update that lets multiple users add their own accounts to Home so they can get personalized daily agendas or commuting details. Hopefully Google will add more robust calendar features soon, but in the meantime, the Echo maintains a leg up on Home in that regard.

http://www.theverge.com/2017/4/20/15372574/lg-ultrafine-5k-review-macbook-pro-apple-display-monitor-usb-c

LG ULTRAFINE 5K REVIEW: 14.7 MILLION PIXELS CAN’T BE WRONG

Apple finally refreshed its MacBook Pros last fall, but while the laptops received their first new design in years, the long-suffering Apple Cinema Display didn’t even get a mention. Instead, buyers were pointed toward the new LG UltraFine 5K as the best monitor for their new USB-C laptops.

And after a few days using the $1,299 IPS display, I’m inclined to agree that when it comes to Apple’s latest laptops, the UltraFine 5K is a great option. But much like the updated MacBook Pros, the UltraFine 5K fits a much narrower use case than other more full-featured displays.


First things first: when it comes to the actual visual quality of the screen, the UltraFine 5K is one of the best monitors I’ve ever used. Nothing I’ve seen can match it for color, brightness, and clarity. It puts the already excellent display on the new MacBook Pro to shame, and going back to my old Retina MacBook Pro (circa 2013) felt like looking through a dusty window.

The UltraFine uses the same clever trick as Apple’s Retina displays — instead of running everything at the impossibly tiny native 5120 x 2880 resolution, the UltraFine 5K by default renders visual elements like fonts and icons at a more standard resolution of, say, 2560 x 1440, and uses the extra pixels to draw them twice as sharp. And much like when the iPhone 4 first debuted the Retina display, it’s a wonderful experience, with text rendered crisp even at the smallest font sizes and images appearing vibrant and clear. Between the high resolution and the wide display, the UltraFine 5K is a multitasker’s dream.
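To put numbers on that scaling trick, here is a short Python sanity check; the 27-inch diagonal, native resolution, and 2x scale factor come from the review, and the rest is arithmetic.

```python
# A quick check of the numbers behind the Retina-style scaling described
# above: the 27-inch panel's 5120 x 2880 native grid, its pixel density, and
# the 2560 x 1440 "looks like" resolution it scales to.
import math

native_w, native_h = 5120, 2880
diagonal_inches = 27

ppi = math.hypot(native_w, native_h) / diagonal_inches
print(f"pixel density: {ppi:.0f} ppi")                  # ~218 ppi

scale = 2  # each UI point is drawn with a 2 x 2 block of physical pixels
print(f"effective desktop: {native_w // scale} x {native_h // scale}")  # 2560 x 1440
```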

From a design perspective, the 27-inch UltraFine 5K Display is virtually identical to LG’s smaller 21.5-inch 4K panel (which runs for a comparatively cheap $699.95). That means you’re getting the same uninspired black plastic case and bezels, and an overall bland design that could charitably be called “professional.” For all that Apple touts that it worked closely alongside LG to ensure that the UltraFine 5K would be the perfect monitor for its new USB-C MacBook Pros, that partnership hasn’t extended to the design, and the UltraFine tends to blend into the background among the rows of pedestrian Dell displays that line my co-workers’ desks.

That said, materials aside, the mechanical parts of the UltraFine 5K are wonderfully put together. Adjusting the screen is silky-smooth, with the perfect amount of resistance when it comes to sliding the display to a welcome range of heights. And ultimately, while Apple’s aluminum chassis on the now-defunct Thunderbolt Display may be nicer from an aesthetic perspective, functionally speaking it’s not strictly necessary when it comes to the actual use of the screen.

The display also features a built-in webcam, which offers slightly better visual quality than the one built into the new MacBook Pro, and integrated speakers, which at least provide louder sound than the laptop’s own, if nothing else.

But while Apple’s hand may not have been in play from an industrial design perspective, its vision for the future of computing is all over the UltraFine 5K Display. Unlike other desktop displays, which tend to offer a wide range of options to hook up computers, ranging from DVI to HDMI to DisplayPort, the UltraFine 5K has exactly one choice: USB-C Thunderbolt 3. And due to the confusing nature of the USB-C standard, that means that, unlike the smaller 4K UltraFine panel, not every laptop with a USB-C port will cut it here. So, for example, Apple’s new MacBook Pros are in, but the 12-inch MacBook is out.

Even if you have a Thunderbolt 2 to Thunderbolt 3 adapter, you’ll be limited to specific, recent models of Apple’s desktops and laptops, which can only run the display at a maximum 4K resolution of 3840 x 2160 instead of the full 5120 x 2880 that the newest MacBook Pros can power. I tested the UltraFine 5K with the non-Touch Bar 13-inch model, arguably the least powerful MacBook Pro that still supports 5K. Performance was fine for the most part, with some minor slowdowns during visually intensive Exposé commands. The bigger issue was 4K video, which tended to stutter on YouTube, something that probably has to do with the MacBook rather than the screen but was still disappointing to see from a $1,500 laptop.

Like the MacBook Pro models that it was intended for, the UltraFine 5K is similarly anemic when it comes to ports. The single USB-C connection that drives the display also provides power to a connected laptop (up to 85W) and allows the computer to take advantage of the extra ports on the back of the display as a USB hub. But Apple’s single-minded devotion to USB-C shines through here as well, with the UltraFine 5K only offering three extra USB-C ports. On the one hand it’s understandable given Apple’s aggressive pushing of the format, but on the other I couldn’t help but be frustrated by the addition of three more ports that required the same dongles and adapters to connect a flash drive or SD card.

Setup of the UltraFine is meant to be plug-and-play — simply connect the display to power, connect your computer via USB-C, and you’re good to go. Or at least, that’s the theory. In practice, I spent several minutes trying to figure out why I was presented with a blank screen, before restarting several times and eventually updating macOS on the MacBook Pro I was using, at which point things actually were plug-and-play.

I also tested the UltraFine with a few Windows computers. The monitor works out of the box with anything that supports Thunderbolt 3, but only the most recent laptops were able to drive it at the full 5K resolution, with the rest defaulting to 4K. Additionally, while the Windows PCs worked with the display without any issues, there was no way to adjust the brightness without first installing LG’s drivers (which is more important than you’d think, given that the screen is almost blindingly bright at its maximum setting of 500 nits).

And to briefly address the router-shaped elephant in the room: the original shipments of the UltraFine 5K display infamously suffered from an issue where the screen would completely cease to function if placed within roughly six feet of a router, which many computers tend to be. I am pleased to confirm that, based on my testing, LG does seem to have fixed that issue, and I experienced no problems with the UltraFine 5K cutting out.

 Photo by Amelia Holowaty Krales / The Verge

At $1,299, the LG UltraFine 5K display isn’t cheap, costing almost as much as some of the computers it’s designed for, and it suffers from a major lack of connectivity options, both for input and output, that makes it incredibly limited in terms of compatibility. And while the UltraFine may be the best accessory for your MacBook Pro today, it’s worth keeping in mind that Apple has already announced its plans to get back into the pro display game next year. But if you’re willing to work within the narrow scope that Apple has imposed on the UltraFine 5K (and are willing to pay for it), then you can be safe in knowing that you’re getting one of the best displays on the market.

LG ULTRAFINE 5K

VERGE SCORE: 8.0

GOOD STUFF:

  • Stunning screen
  • Simple design
  • Seriously, the screen is so good

BAD STUFF:

  • Only works over USB-C
  • No legacy ports
  • Makes going back to lesser screens a challenge

http://www.zdnet.com/article/raspberry-pi-challenger-asus-tinker-board-hits-us-faster-more-memory-60/

Raspberry Pi challenger Asus Tinker Board hits US: Faster, more memory, $60

US hardware enthusiasts can now buy Asus’ high-performance alternative to the Raspberry Pi.

The Tinker Board is nearly twice the price of the $35 Raspberry Pi 3, but ASUS has focused on higher performance features.

Image: CPC

Computer maker Asus’ recently-launched Tinker Board for hardware enthusiasts is now available for purchase in the US.

In January, the Taiwanese tech company took the wraps off the Tinker Board, which has been available in Europe but until now not the US. Just like the Raspberry Pi 3 and other boards, Asus’ Tinker Board is a mini computer without a power supply, keyboard, mouse, and display.


Available on Amazon for $60, the Tinker Board is nearly double the price of the $35 Raspberry Pi 3. However, ASUS has focused on higher performance features, equipping it with twice as much RAM, and support for 4K playback.

The Tinker Board’s core features include a quad-core 1.8GHz ARM processor from Rockchip, with a Mali-T764 GPU and 2GB of DDR3 memory. That spec contrasts with the Raspberry Pi 3’s Broadcom SoC, which has a 1.2GHz quad-core ARM Cortex-A53 CPU and a Broadcom VideoCore IV GPU.

Interfaces include four USB 2.0 ports, a 3.5mm audio jack with 192kHz/24-bit audio, a CSI port for cameras, a DSI port, an HDMI 2.0 port, and a microSD slot. Wireless support includes 802.11b/g/n Wi-Fi and Bluetooth 4.0.

It also features a 40-pin header for expansions with 28 general-purpose input/output (GPIO) pins, allowing the board to control other hardware for modding projects.
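For a sense of what driving that header looks like in practice, here is a minimal, hedged sketch of blinking an LED from Python. It assumes a GPIO library with an RPi.GPIO-style interface, as Asus documents for TinkerOS; the import name and pin number are assumptions for illustration.

```python
# Minimal sketch of toggling a pin on the Tinker Board's 40-pin header.
# It assumes a Python GPIO library with an RPi.GPIO-style interface; the
# import name (ASUS.GPIO) and the pin number are assumptions.
import time
import ASUS.GPIO as GPIO  # assumed TinkerOS GPIO library (RPi.GPIO-compatible)

LED_PIN = 7  # hypothetical physical pin on the 40-pin header

GPIO.setmode(GPIO.BOARD)        # address pins by their position on the header
GPIO.setup(LED_PIN, GPIO.OUT)

try:
    for _ in range(10):         # blink an LED ten times
        GPIO.output(LED_PIN, GPIO.HIGH)
        time.sleep(0.5)
        GPIO.output(LED_PIN, GPIO.LOW)
        time.sleep(0.5)
finally:
    GPIO.cleanup()              # release the pins when done
```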

As ZDNet sister site TechRepublic previously reported, the Tinker Board outdoes the Raspberry Pi’s hardware specs, but it’s hard to beat the Raspberry Pi’s ecosystem of software and community support.

The Tinker Board has similar dimensions to the Raspberry Pi, which allows developers to use the same cases available for the Pi board.

The only officially supported OS for the ASUS board is the Debian-based TinkerOS. Asus has provided an FAQ for more details about media-player support, video playback, and browser support.

Asus says it is working with the makers of the Kodi media-player software to support hardware acceleration for media playback.

While Asus says the Tinker Board can be used to stream Netflix, it can only be used to view 4K videos that have been downloaded or created with H.264 or H.265 encoding. As Ars Technica previously reported, only Intel Kaby Lake processors and Windows 10 PCs have the necessary DRM decoding hardware and software to support streaming Netflix in 4K.


https://www.macrumors.com/2017/04/20/apple-nike-apple-watch-nikelab/

Apple and Nike Launch New Neutral-Toned ‘Apple Watch NikeLab’

Nike today announced that it has teamed up with Apple to create a new version of the Apple Watch Nike+, which pairs a Space Gray Apple Watch Series 2 aluminum case with a black and cream Nike band.

Called the Apple Watch NikeLab, the new device is limited edition and designed to be “the ultimate style companion” for those who love to run.

The limited edition, neutral-toned Apple Watch NikeLab maintains the beloved features of its predecessor: deep integration with the Nike+ Run Club app, exclusive Siri commands, GPS, a two-times-brighter display and water resistance to 50 meters*, all made possible by a powerful dual-core processor and watchOS 3.

Apple Watch NikeLab will be available starting on April 27 from Nike.com, at NikeLab locations, and at the Apple Tokyo pop-up store at the Isetan department store. It will not be sold in Apple Stores or from the Apple website, a first for an Apple Watch.

The Apple Watch NikeLab will likely be priced at $369 for the 38mm model and $399 for the 42mm model, the same price as the rest of the Apple Watch Nike+ lineup.


Apple and Nike first teamed up in September of 2016 for the Nike+ Apple Watch that launched alongside Apple’s own set of Series 2 Apple Watch devices. Apple offers two Apple Watch Nike+ models in Silver and Space Gray aluminum along with standalone Apple Watch Nike+ bands.


https://www.raspberrypi.org/blog/raspberry-pi-resources-coding-all-ages/

RASPBERRY PI RESOURCES: CODING FOR ALL AGES

Following a conversation in the Pi Towers kitchen about introducing coding to a slightly older demographic, we sent our Events Assistant Olivia on a mission to teach her mum how to code. Here she is with her findings.

“I can’t code – I’m too old! I don’t have a young person to help me!”

I’ve heard this complaint many times, but here’s the thing: there are Raspberry Pi resources for all ages and abilities! I decided to put the minds of newbie coders at rest, and prove that you can get started with coding whatever your age or experience. For this task, I needed a little help. Here, proudly starring in her first Raspberry Pi blog, is my mum, Helen Robinson.

Helen looks at the learning resource.

My mum is great, but she’s not the most tech-savvy person. She had never attempted any coding before this challenge.

CODING SPINNING FLOWERS

To prove how easy it is to follow Raspberry Pi resources, I set her the challenge of completing the Spinning Flower Wheel project. She started by reading the Getting Started leaflet that we use on the Raspberry Pi stand at events such as Bett or Maker Faire. You can find the resource here, or watch Carrie Anne talk you through the project here.

She then made her flower pot (which admittedly is more of a heart pot, as I only had heart stickers).

Helen and her flower pot

My mum, with her love-ly heart pot.

She followed the resource to write her code in Python. Then, for the moment of truth, she pressed run. Her reaction was priceless.

She continued coding. She changed the speed of the wheel and added a button to start it spinning. Finally, she was able to add her flower heart pot to the wheel.
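For readers curious about what her code looked like, here is a minimal sketch in the spirit of the Spinning Flower Wheel resource, written in Python with the gpiozero library; the GPIO pin numbers and motor speed are assumptions, not the exact values from the worksheet.

```python
# A minimal sketch in the spirit of the Spinning Flower Wheel resource, using
# the gpiozero library on the Raspberry Pi. The GPIO pin numbers and the 75%
# speed are assumptions, not the exact values from the worksheet.
from gpiozero import Motor, Button
from signal import pause

wheel = Motor(forward=17, backward=27)  # hypothetical motor-driver pins
button = Button(2)                      # hypothetical button pin

def spin():
    wheel.forward(speed=0.75)           # spin the wheel at 75% speed

button.when_pressed = spin              # start spinning on a button press
button.when_released = wheel.stop       # stop when the button is released

pause()                                 # keep the script running
```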

HERE’S TO YOU, MRS. ROBINSON

Although I sat with her throughout the build, I merely took photos while she did all the work. I’m proud to say that she completed the project all by herself – without help from me, or from “a young person”. I just made the tea!

We had so much fun completing the resource, and we would encourage all those curious about coding to give it a go. If my mum managed to do it – and enjoy it – anyone can!

http://www.kurzweilai.net/what-if-you-could-type-directly-from-your-brain-at-100-words-per-minute

What if you could type directly from your brain at 100 words per minute?

Former DARPA director reveals Facebook’s secret research projects to create a non-invasive brain-computer interface and haptic skin hearing
April 19, 2017

(credit: Facebook)

Regina Dugan, PhD, Facebook VP of Engineering for Building 8, revealed today (April 19, 2017) at Facebook’s F8 2017 conference a plan to develop a non-invasive brain-computer interface that will let you type at 100 wpm — by decoding neural activity devoted to speech.

Dugan previously headed Google’s Advanced Technology and Projects Group, and before that, was Director of the Defense Advanced Research Projects Agency (DARPA).

She explained in a Facebook post that over the next two years, her team will be building systems that demonstrate “a non-invasive system that could one day become a speech prosthetic for people with communication disorders or a new means for input to AR [augmented reality].”

Dugan said that “even something as simple as a ‘yes/no’ brain click … would be transformative.” That simple level has been achieved by using functional near-infrared spectroscopy (fNIRS) to measure changes in blood oxygen levels in the frontal lobes of the brain, as KurzweilAI recently reported. (Near-infrared light can penetrate the skull and partially into the brain.)

Dugan agrees that optical imaging is the best place to start, but her Building 8 team plans to go well beyond that research — sampling hundreds of times per second and precise to millimeters. The team began working on the brain-typing project six months ago, and she now has more than 60 researchers who specialize in optical neural imaging systems that push the limits of spatial resolution, and in machine-learning methods for decoding speech and language.

The research is headed by Mark Chevillet, previously an adjunct professor of neuroscience at Johns Hopkins University.

Besides replacing smartphones, the system would be a powerful speech prosthetic, she noted — allowing paralyzed patients to “speak” at normal speed.

(credit: Facebook)

Dugan revealed one specific method the researchers are currently working on to achieve that: a ballistic filter for creating quasi-ballistic photons (avoiding diffusion) — creating a narrow beam for precise targeting — combined with a new method of detecting blood-oxygen levels.

Neural activity (in green) and associated blood oxygenation level dependent (BOLD) waveform (credit: Facebook)

Dugan also described a system that may one day allow hearing-impaired people to hear directly via vibrotactile sensors embedded in the skin. “In the 19th century, Braille taught us that we could interpret small bumps on a surface as language,” she said. “Since then, many techniques have emerged that illustrate our brain’s ability to reconstruct language from components.” Today, she demonstrated “an artificial cochlea of sorts and the beginnings of a new ‘haptic vocabulary’.”

A Facebook engineer with acoustic sensors implanted in her arm has learned to feel the acoustic shapes corresponding to words (credit: Facebook)

Dugan’s presentation can be viewed in the F8 2017 Keynote Day 2 video (starting at 1:08:10).

(credit: Facebook)

https://www.nytimes.com/2017/04/19/climate/arctic-plastics-pollution.html?_r=0

Trillions of Plastic Bits, Swept Up by Current, Are Littering Arctic Waters

The world’s oceans are littered with trillions of pieces of plastic — bottles, bags, toys, fishing nets and more, mostly in tiny particles — and now this seaborne junk is making its way into the Arctic.

In a study published Wednesday in Science Advances, a group of researchers from the University of Cádiz in Spain and several other institutions show that a major ocean current is carrying bits of plastic, mainly from the North Atlantic, to the Greenland and Barents seas, and leaving them there — in surface waters, in sea ice and possibly on the ocean floor.

Because climate change is already shrinking the Arctic sea ice cover, more human activity in this still-isolated part of the world is increasingly likely as navigation becomes easier. As a result, plastic pollution, which has grown significantly around the world since 1980, could spread more widely in the Arctic in decades to come, the researchers say.

Andrés Cózar Cabañas, the study’s lead author and a professor of biology at the University of Cádiz, said he was surprised by the results, and worried about possible outcomes.

Fragments of fishing lines found in Arctic surface waters by the research team. Credit: Andrés Cózar

“We don’t fully understand the consequences the plastic is having or will have in our oceans,” he said. “What we do know is that these consequences will be felt at greater scale in an ecosystem like this” because it is unlike any other on Earth.

Every year, about 8 million tons of plastic gets into the ocean, and scientists estimate that there may be as much as 110 million tons of plastic trash in the ocean. Though the environmental effects of plastic pollution are not fully understood, plastic pollution has made its way into the food chain. Plastic debris in the ocean was thought to accumulate in big patches, mostly in subtropical gyres — big currents that converge in the middle of the ocean — but scientists estimate that only about 1 percent of plastic pollution is in these gyres and other surface waters in the open ocean.

Another model of ocean currents by one of the study’s authors predicted that plastic garbage could also accumulate in the Arctic Ocean, specifically in the Barents Sea, located off the northern coasts of Russia and Norway, a prediction that this study now confirms.

The surface water plastic in the Arctic Ocean currently accounts for only about 3 percent of the total, but the authors suggest the amount will grow and that the seafloor there could be a big sink for plastic.

This particular part of the ocean is important in the thermohaline circulation, a deepwater global current dictated by differences in temperature and salinity around the world. As that current brings warm surface water up to the Arctic, it seems to be bringing with it plastic waste from more densely populated coastlines, dumping the now-fragmented pieces of plastic in the Arctic, where landmasses like Greenland and the polar ice cap trap them.

Scientists aboard the research vessel Tara lower nets into the water to collect plankton and microplastics. Credit: Anna Deniaud/Tara Expeditions Foundation

The scientists sampled floating plastic debris from 42 sites in the Arctic Ocean aboard Tara, a research vessel that completed a trip around the North Pole from June to October 2013, with data from two additional sites from a previous trip. They scooped up plastic debris and determined the concentration of particles by dividing the dry weight of the plastic collected, excluding microfibers, by the area surveyed.
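As a small illustration of that methodology, here is a Python sketch of the concentration calculation with entirely made-up numbers; the net width, tow distance, and weight below are hypothetical, not values from the study.

```python
# Hypothetical numbers illustrating the concentration calculation described
# above: dry weight of collected plastic (excluding microfibers) divided by
# the sea-surface area swept by the net. All values are made up for the example.
def plastic_concentration(dry_weight_g, net_width_m, tow_distance_m):
    """Concentration in grams per square kilometre of sea surface."""
    area_km2 = (net_width_m * tow_distance_m) / 1_000_000  # m^2 -> km^2
    return dry_weight_g / area_km2

# e.g. 0.3 g of fragments from a 0.6 m-wide net towed for 2 km:
print(plastic_concentration(0.3, 0.6, 2000))  # 250.0 g/km^2
```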

Almost all of the plastic, measured by weight, was in fragments, mostly ranging from 0.5 millimeters to 12.6 millimeters. The rest appeared in the form of fishing line, film or pellets. This mix of plastic types is roughly consistent with the kinds of plastic that collect in the subtropical gyres, though those parts of the ocean amass a higher concentration of fishing line.

The researchers did not find many large pieces of plastic, nor did they find much plastic film, which breaks down quickly, suggesting that the plastic has already been in the ocean for a while by the time it gets to the Arctic.

If the plastics were coming directly from Arctic coastlines, it would mean that people in the sparsely populated Arctic were depositing many times more plastic in the ocean than people in other parts of the world, which is unlikely. Shipping is also relatively infrequent there and, the authors write, there is no reason to think that flotsam or jetsam in the Arctic would be so much higher than in other parts of the world.

The lesson from the study, Dr. Cózar Cabañas said, is that the issue of plastic pollution “will require international agreements.”

“This plastic is coming from us in the North Atlantic,” he said. “And the more we know about what happens in the Arctic, the better chance we have” of solving the problem.

http://www.bbc.com/news/technology-39648788

Facebook shares brain-control ambitions