http://mashable.com/2016/05/24/color-e/#2YFCRi58nqqg

The future of ultra-low-powered displays is finally in living color

E Ink, the company behind the pigment-based, low-energy monochromatic displays found in many of today’s popular e-readers, has finally figured out how to create up to 32,000 colors with almost exactly the same technology.

The company will unveil what it’s calling a breakthrough technology on Tuesday at the annual SID Display Week conference in San Francisco.

“For the first time, we can create all colors at every pixel location,” said E Ink Holdings’ head of global marketing Giovanni Mancini. “We have encapsulated four different things in one micro-cup.”

Those four things are actually four different pigments: yellow, cyan, magenta and white. Traditional monochromatic E Ink uses just two: black and white. Both types of microcup work in similar ways: E Ink changes the polarity to move the pigments around. For monochrome, the white and black pigments simply switch places, so you see white or black on the reflective screen. For the new full-color electrophoretic display, however, E Ink had to devise a more sophisticated way to manage the pigments in each tiny cup.

“The ability to control those pigments is significant,” said Mancini.

In addition to being different colors, each pigment has additional properties that give E Ink greater control over its movement and position. This allows E Ink to move some or all of the pigments together to create combinations that yield up to 32,000 display colors. Since each cup essentially represents a controllable pixel, the results can be pretty stunning.
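As a toy sketch of my own (not E Ink’s actual control scheme), here is how moving subsets of cyan, magenta and yellow pigments to the viewing surface of a white-backed cup combines subtractively into distinct display colors:

```python
from itertools import product

# Toy subtractive-color model: each pigment is either "up" (visible at the
# surface) or "down". A visible cyan/magenta/yellow pigment subtracts one
# primary from the white light reflected by the white pigment.
def cup_color(cyan_up, magenta_up, yellow_up):
    r, g, b = 255, 255, 255          # white pigment reflects everything
    if cyan_up:    r = 0             # cyan absorbs red
    if magenta_up: g = 0             # magenta absorbs green
    if yellow_up:  b = 0             # yellow absorbs blue
    return (r, g, b)

# With only binary up/down positions, three pigments give 8 corner colors;
# the real display reaches ~32,000 by also controlling intermediate
# pigment positions within each cup.
palette = {cup_color(*bits) for bits in product([False, True], repeat=3)}
print(len(palette))  # 8
```

All pigments down gives white, all three color pigments up gives black, and the remaining six combinations give the fully saturated primaries and secondaries.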

Color that lasts

The new display, which E Ink will publicly demonstrate for the first time, is a 20-inch, 2500 x 1600 panel that shares monochrome E Ink’s impressive power characteristics; Mancini told Mashable it’s equally power-efficient. He explained that it could be used in bus stop signage. “Bus stops are powered with solar cells, you could power this with solar cells,” he said.

A color E Ink display rendering a full-color image.

IMAGE: E INK

E Ink is not alone in the low-power color display market. Qualcomm’s full-color Mirasol display technology has been around for more than six years. Instead of pigments, capsules and polarity, it uses a fully mechanical system to create a wide color gamut, and it uses almost zero power to maintain an image once it’s set. Last year, Apple reportedly bought Qualcomm’s Mirasol plant. No word yet on whether Apple plans to productize the low-power color display technology.

Not for everyone

While color E Ink is on the fast track to commercialization, it does have some significant limitations. For now, the resolution is 150 pixels per inch (ppi), roughly half that of a typical 6-inch monochromatic E Ink display. In addition, full-color E Ink can’t come anywhere close to the virtually instant refresh of today’s e-readers: right now, a color E Ink display takes two seconds to fully resolve.

Here is how color E Ink recreates its 32,000 colors.

IMAGE: E INK

Mancini doesn’t consider this an issue, though, because the company is targeting commercial signage, which wouldn’t need to change that often and is also designed to grab attention, an area where color E Ink may excel. Mancini said color E Ink will feature highly saturated colors. “Something close to what you would see on a printed poster, paper type of product.”

He also noted that even though the prototype is less than two feet wide, size is only limited by E Ink’s manufacturing capabilities.

General availability for color E Ink displays is expected in two years. Will color E Ink make it to future Amazon Kindles? Mancini would only reiterate that E Ink is currently focused on the display market.

http://www.techradar.com/news/digital-home/this-new-lightbulb-will-give-you-energy-during-the-day-and-help-you-sleep-at-night-1322071

This new lightbulb will give you energy during the day, and help you sleep at night

Philips’ family of smart lighting products has gained a new member in the form of the Philips Hue white ambience light bulb.

The bulb’s focus is on providing a bright, natural-looking light which the company claims will help users feel more energised throughout the day. The bulb is also capable of outputting a warmer shade of light in the evenings which should help you get to sleep.

When combined with the routines function introduced in the last Hue app update, the bulbs can be set to automatically brighten throughout the day to give you more energy, and then dim at night to help you sleep.
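A routine like the one described above can be sketched in a few lines. The brightness/temperature curve below is my own illustration, not Philips’ actual routine logic, and the bridge address, API username and light id in the commented request are placeholders:

```python
# Sketch of a day/night routine for a Hue white ambience bulb.
def state_for_hour(hour):
    """Return a Hue light state: bright and cool by day, dim and warm at night.

    'bri' is brightness (1-254); 'ct' is color temperature in mireds
    (~153 = cool ~6500K, ~500 = warm ~2000K).
    """
    if 7 <= hour < 19:                      # daytime: energising light
        return {"on": True, "bri": 254, "ct": 200}
    elif 19 <= hour < 23:                   # evening: wind down
        return {"on": True, "bri": 120, "ct": 450}
    else:                                   # night: minimal, very warm
        return {"on": True, "bri": 30, "ct": 500}

# Applying it via the Hue bridge's REST API would look roughly like this
# (placeholder addresses, not executed here):
# import requests
# from datetime import datetime
# requests.put("http://<bridge-ip>/api/<username>/lights/<id>/state",
#              json=state_for_hour(datetime.now().hour))
print(state_for_hour(12))  # {'on': True, 'bri': 254, 'ct': 200}
```

In practice the Hue app’s routines feature does this scheduling on the bridge itself, so no script needs to keep running.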

Sleepy lighting and moody lighting

Scientists are increasingly recognising the effect lighting can have on the human psyche.

People are advised not to use a screen within an hour of going to bed, because blue light can trick the brain into thinking it’s earlier in the day and hence interfere with sleep.

In response, programs such as f.lux have been introduced, which give your screen an orange tinge after sunset to prevent it from affecting your sleep. Apple introduced similar functionality in its recent iOS 9.3 update.

Bright light is also seen as increasingly important for mood during the day. In fact, the absence of natural light in winter leads many to suffer from ‘Seasonal Affective Disorder’ (SAD), resulting in low moods during the winter months.

Philips isn’t claiming that the white ambience light bulb can be used to treat SAD like a lightbox would, but the idea that strong white light gives you energy is one that is commonly accepted within the scientific community.

The new ambience light is available now as a single bulb for $29.95 (£25.95), or as part of a starter kit that includes a wall-mountable dimmer switch for $129.95 (£99.95). It works with a number of existing home automation solutions, including Amazon’s Alexa, Nest and Samsung SmartThings.

http://www.kurzweilai.net/robots-learn-to-cut-through-clutter

Robots learn to cut through clutter

Exploiting creative “superhuman” capabilities
May 20, 2016

New software developed by Carnegie Mellon University helps mobile robots deal efficiently with clutter, whether it is in the back of a refrigerator or on the surface of the moon. (credit: Carnegie Mellon University Personal Robotics Lab)

Carnegie Mellon University roboticists have developed an algorithm that helps robots cope with a cluttered world.

Robots are adept at picking up an object in a specified place (such as in a factory assembly line) and putting it down at another specified place (known as “pick-and-place,” or P&P, processes). But homes and other planets, for example, are a special challenge for robots.

When a person reaches for a milk carton in a refrigerator, he doesn’t necessarily move every other item out of the way. Rather, a person might move an item or two, while shoving others out of the way as the carton is pulled out.

Robot creativity

Robot employs a “push and shove” method (credit: Jennifer E. King et al./Proceedings of IEEE International Conference on Robotics and Automation)

In tests, the new “push and shove” algorithm helped a robot deal efficiently with clutter, but surprisingly, it also revealed the robot’s creativity in solving problems.

“It was exploiting sort of superhuman capabilities,” Siddhartha Srinivasa, associate professor of robotics, said of his lab’s two-armed mobile robot, the Home Exploring Robot Butler, or HERB. “The robot’s wrist has a 270-degree range, which led to behaviors we didn’t expect. Sometimes, we’re blinded by our own anthropomorphism.”

In one case, the robot used the crook of its arm to cradle an object to be moved. “We never taught it that,” Srinivasa said.

K-Rex rover prototype (credit: NASA)

The new algorithm was also tested on NASA’s KRex robot, which is being designed to traverse the lunar surface. While HERB focused on clutter typical of a home, KRex used the software to find traversable paths across an obstacle-filled landscape while pushing an object.

A “rearrangement planner” automatically finds a balance between the two strategies (pick-and-place vs. push-and-shove), Srinivasa said, based on the robot’s progress on its task. The robot is programmed to understand the basic physics of its world, so it has some idea of what can be pushed, lifted, or stepped on. And it can be taught to pay attention to items that might be valuable or delicate.
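A toy sketch of my own (a drastic simplification, not CMU’s actual planner) illustrates the idea of mixing the two action types: objects flagged as delicate get careful object-centric pick-and-place, while everything else is cleared with a single robot-centric shove:

```python
# Toy hybrid rearrangement planner: delicate objects are handled with
# pick-and-place; the rest are pushed aside in one sweeping motion.
def plan_rearrangement(objects):
    """objects: list of (name, delicate) tuples blocking the goal.
    Returns an ordered list of (action, target) steps."""
    plan = []
    pushable = [name for name, delicate in objects if not delicate]
    for name, delicate in objects:
        if delicate:
            plan.append(("pick-and-place", name))      # object-centric action
    if pushable:
        plan.append(("push-aside", tuple(pushable)))   # robot-centric sweep
    return plan

plan = plan_rearrangement([("milk", True), ("ketchup", False), ("jar", False)])
print(plan)  # [('pick-and-place', 'milk'), ('push-aside', ('ketchup', 'jar'))]
```

The real planner balances the two action types using physics simulation and task progress rather than a fixed flag, but the output has the same flavor: a few careful manipulations mixed with bulk shoves.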

The researchers presented their work last week (May 19) at the IEEE International Conference on Robotics and Automation in Stockholm, Sweden. NASA, the National Science Foundation, Toyota Motor Engineering and Manufacturing, and the Office of Naval Research supported this research.


Abstract of Rearrangement Planning Using Object-Centric and Robot-Centric Action Spaces

This paper addresses the problem of rearrangement planning, i.e. to find a feasible trajectory for a robot that must interact with multiple objects in order to achieve a goal. We propose a planner to solve the rearrangement planning problem by considering two different types of actions: robot-centric and object-centric. Object-centric actions guide the planner to perform specific actions on specific objects. Robot-centric actions move the robot without object relevant intent, easily allowing simultaneous object contact and whole arm interaction. We formulate a hybrid planner that uses both action types. We evaluate the planner on tasks for a mobile robot and a household manipulator.

http://www.kurzweilai.net/british-researchers-google-design-modular-shape-shifting-mobile-devices

British researchers, Google design modular shape-shifting mobile devices

May 20, 2016

Cubimorph is an interactive device made of a chain of reconfigurable modules that shape-shifts into any shape that can be made out of a chain of cubes, such as transforming from a mobile phone to a game console. (credit: Anne Roudaut et al./Proceedings of the ICRA 2016)

British researchers and Google have independently developed revolutionary concepts for Lego-like modular interactive mobile devices.

The British team’s design, called Cubimorph, is constructed of a chain of cubes. It has touchscreens on each of the six module faces and uses a hinge-mounted turntable mechanism to self-reconfigure in the user’s hand. One example: a mobile phone that can transform into a console when a user launches a game.


Proof-of-concept prototype of Cubimorph (credit: BIG/University of Bristol)

The research team has developed three prototypes demonstrating key aspects — turntable hinges, embedded touchscreens, and miniaturization.


BIG | Cubimorph: Designing Modular Interactive Devices

The modular interactive design is a step toward the vision of programmable matter, where interactive devices change their shape to meet specific user needs.

The research is led by Anne Roudaut, PhD, from the Department of Computer Science at the University of Bristol and co-leader of BIG (Bristol Interaction Group), in collaboration with academics at Purdue University and the Universities of Lancaster and Sussex.

The research was presented last week at the International Conference on Robotics and Automation (ICRA).

Google’s Ara

Ara (credit: Google)

Ara, launched at Google’s I/O developer conference, uses a frame that contains all the functionality of a smartphone (CPU, GPU, antennas, sensors, battery, and display) plus six flexible slots for easy swapping of modules. “Slide any Ara module into any slot and it just works,” is the concept. Powering this is Greybus, a new bit of software deep in the Android stack that supports instantaneous connections, power efficiency, and data-transfer rates of up to 11.9 Gbps. The Developer Edition will ship in Fall 2016, with a consumer version in 2017.


Google | Ara: What’s next


Abstract of Cubimorph: Designing Modular Interactive Devices

We introduce Cubimorph, a modular interactive device that accommodates touchscreens on each of the six module faces, and that uses a hinge-mounted turntable mechanism to self-reconfigure in the user’s hand. Cubimorph contributes toward the vision of programmable matter where interactive devices reconfigure in any shape that can be made out of a chain of cubes in order to fit a myriad of functionalities, e.g. a mobile phone shifting into a console when a user launches a game. We present a design rationale that exposes user requirements to consider when designing homogeneous modular interactive devices. We present our Cubimorph mechanical design, three prototypes demonstrating key aspects (turntable hinges, embedded touchscreens and miniaturization), and an adaptation of the probabilistic roadmap algorithm for the reconfiguration.
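The abstract’s reconfiguration step adapts the probabilistic roadmap (PRM) algorithm. A minimal generic PRM over 2D points, shown below, conveys the textbook idea the authors adapt (sample random configurations, connect collision-free neighbors, then graph-search); Cubimorph’s actual version operates on chains of cube configurations, and the disc obstacle here is purely illustrative:

```python
import math, random

# Minimal probabilistic roadmap (PRM): configurations are 2D points in the
# unit square, and a disc at (0.5, 0.5) is the only obstacle.
def prm(start, goal, n_samples=200, radius=0.3, seed=0):
    rng = random.Random(seed)

    def free(p):  # collision check: stay outside the obstacle disc
        return math.dist(p, (0.5, 0.5)) > 0.2

    # 1. Sample collision-free configurations (plus start and goal).
    nodes = [start, goal] + [
        p for p in ((rng.random(), rng.random()) for _ in range(n_samples))
        if free(p)
    ]

    # 2. Connect nearby nodes whose straight segment stays collision-free.
    def edge_free(a, b, steps=10):
        return all(free(((a[0]*(steps-t) + b[0]*t) / steps,
                         (a[1]*(steps-t) + b[1]*t) / steps))
                   for t in range(steps + 1))

    adj = {i: [] for i in range(len(nodes))}
    for i in range(len(nodes)):
        for j in range(i + 1, len(nodes)):
            if math.dist(nodes[i], nodes[j]) < radius and edge_free(nodes[i], nodes[j]):
                adj[i].append(j)
                adj[j].append(i)

    # 3. Breadth-first search from start (index 0) to goal (index 1).
    frontier, seen, parent = [0], {0}, {}
    while frontier:
        i = frontier.pop(0)
        if i == 1:
            path = [1]
            while path[-1] != 0:
                path.append(parent[path[-1]])
            return [nodes[k] for k in reversed(path)]
        for j in adj[i]:
            if j not in seen:
                seen.add(j)
                parent[j] = i
                frontier.append(j)
    return None  # roadmap disconnected

path = prm((0.1, 0.1), (0.9, 0.9))
```

For Cubimorph, “collision-free” additionally means no intermediate cube configuration self-intersects during a turntable rotation, which is where the adaptation lies.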

references:

  • Roudaut, Anne; Krusteva, Diana; McCoy, Mike; Karnik, Abhijit; Ramani, Karthik; Subramanian, Sriram. Cubimorph: Designing Modular Interactive Devices. Proceedings of the ICRA 2016 IEEE International Conference on Robotics and Automation (in press)

http://www.digitaljournal.com/pr/2949213

Mitsubishi Electric, Kyoto Univ. and Tohoku Univ. Succeed in World’s First 3 Tesla MRI with High-Temperature Coils

TOKYO–(Business Wire)–Mitsubishi Electric Corporation (TOKYO:6503), Kyoto University and Tohoku University announced today the world’s first successful 3 tesla Magnetic Resonance Imaging (MRI) using a small model MRI with high-temperature superconducting coils that do not require cooling with increasingly scarce liquid helium. Mitsubishi Electric expects that the high-quality images made possible at this magnetic field strength will contribute to earlier detection of illnesses.

Mitsubishi Electric, Kyoto University and Tohoku University plan to increase the size of the system to one half of a full-size MRI scanner by 2020 and to commercialize a full-size version from 2021.

Mitsubishi Electric achieved a strong, stable 3 tesla magnetic field by increasing the precision of the coil winding. Existing commercially available MRIs use low-temperature superconducting wires with a round or square cross section of 2 to 3 millimeters. The high-temperature superconducting wires are about 0.2 millimeter thick and 4 to 5 millimeters wide, and are usually wound several hundred times to create a pancake coil. Small discrepancies in the thickness and width of the wire give the coil an uneven height that can disrupt the magnetic field and distort imaging. Mitsubishi Electric solved this problem by using laser displacement meters to measure the coil height and then adjusting it with correction sheets. This realized a winding accuracy of 0.1 millimeter for pancake coils with an outer diameter of about 400 millimeters, achieving the magnetic field homogeneity required for commercial imaging.
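The measure-then-correct step amounts to simple arithmetic. The sketch below is my own illustration of the principle (not Mitsubishi’s actual procedure, and the readings are invented): given per-sector height measurements, compute correction-sheet thicknesses that level every sector to the tallest one:

```python
# Illustrative leveling arithmetic: bring every measured coil sector up to
# the tallest one with correction sheets, flattening the winding.
def correction_sheets(measured_heights_mm):
    target = max(measured_heights_mm)
    return [round(target - h, 3) for h in measured_heights_mm]

heights = [80.05, 79.90, 79.97, 80.02]   # hypothetical laser-displacement readings
sheets = correction_sheets(heights)
print(sheets)  # [0.0, 0.15, 0.08, 0.03]
```

After correction the sector heights agree, which is the flatness the 0.1-millimeter winding accuracy cited in the article requires.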

The small model has an imaging space 25 millimeters in diameter with field homogeneity of less than two-millionths, the same level required for a 230-mm dia. x 650-mm cylinder in a commercial-size MRI. Using this new approach, Mitsubishi Electric succeeded in imaging a 25-millimeter mouse fetus at 3 tesla.

 

http://mobilesyrup.com/2016/05/23/googles-project-soli-fits-into-a-smartwatch-now-but-you-cant-have-one-yet/

Google’s Project Soli fits into a smartwatch now, but you can’t have one yet

http://www.macrumors.com/2016/05/23/apple-seeds-first-beta-of-tvos-9-2-2/

Apple Seeds First Beta of tvOS 9.2.2 to Developers

Apple today provided developers with the first beta of an upcoming 9.2.2 update to tvOS, the operating system designed to run on the fourth-generation Apple TV. tvOS 9.2.2 comes one week after the public launch of tvOS 9.2.1, a minor update focusing on bug fixes.

tvOS betas are more difficult to install than beta updates for iOS and OS X. Installing the tvOS beta requires the Apple TV to be connected to a computer with a USB-C to USB-A cable, with the software downloaded and installed via iTunes or Apple Configurator. Once a beta profile has been installed on the device through iTunes, new beta releases will be available over the air.

As a minor 9.x.x update, tvOS 9.2.2 is likely to focus on bug fixes and performance improvements to address issues discovered since the release of tvOS 9.2.1, and Apple’s release notes do say the update contains bug fixes and security improvements. Any outward-facing changes found in the tvOS 9.2.2 beta will be included below.

http://www.digitaljournal.com/pr/2948472

Bug-zapping Lasers Add New Weapon to our Insect-fighting Arsenal

WASHINGTON–(Business Wire)–Control and monitoring of disease-vector insects are critical to global health, as insect vectors spread pathogens among humans, animals and agricultural products, creating worldwide strain on health care and food resources. Mosquito-borne malaria, for example, caused over 200 million infections and over 400,000 deaths in 2015, according to the World Health Organization. A small insect is also to blame for the jump in the price of orange juice in recent years. The Asian citrus psyllid, a vector of citrus greening disease, has devastated orange groves in Florida and threatened citrus production around the world.

Traditional insect control methods broadly rely on chemical insecticides, which may harm humans and beneficial insects and cause insecticide resistance in the target pests. A research team from Intellectual Ventures Laboratory in Washington State, USA, has developed a novel laser system called the “Photonic Fence,” which can effectively identify, track and kill flying insects in real time, shooting them down with a low-energy laser without harming other organisms, animals or humans.

Originally invented to control certain types of malaria-carrying mosquitoes, the ‘Photonic Fence’ system has been adapted for more general pest-control applications in agriculture and has now reached a stage where field deployment is practical. The researchers describe the work this week in Optics Express, a journal of The Optical Society (OSA).

In the study, conducted in collaboration with United States Department of Agriculture (USDA) personnel, the researchers selected two important insect vectors as experimental subjects: Diaphorina citri psyllids, a vector of citrus greening disease, and Anopheles stephensi mosquitoes, a vector of malaria.

“Our study showed that the ‘Photonic Fence’ is able to effectively track and distinguish between different insects by measuring insects’ wing beat frequency,” said 3ric Johanson, principal investigator and project scientist at Intellectual Ventures Laboratory. “We also confirmed that low-power lasers can indeed lethally disable the Asian citrus psyllid. These findings position the ‘Photonic Fence’ as an excellent tool to help citrus growers contain and eventually eliminate citrus greening disease.”

According to Johanson, the ‘Photonic Fence’ is an electro-optical system that employs lasers, detectors and sophisticated computer software to search, detect, identify and shoot down insects in flight in real-time.

First, the optical tracking subsystem identifies targets from an insect database based on the characteristic data from insects, including flight behavior, insect size, insect shape and wing beat frequency (the measure of how fast the insect is flapping its wings). With this data the system decides whether a specific insect should be eliminated or not. Second, the safety interlock subsystem confirms there are no other organisms nearby that could be subjected to collateral damage. Finally, the lethal laser is employed to disable the insect target. The entire process, spanning from initial target acquisition through the application of the lethal dose, takes less than 100 milliseconds, Johanson said.
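The key identification signal is the wing-beat frequency. The sketch below is my own illustration of how such a frequency could be estimated from a detector signal via zero-crossing counting; the frequency bands in `classify` are hypothetical stand-ins for the system’s real insect database, and the input here is a synthetic test tone rather than real detector data:

```python
import math

# Estimate wing-beat frequency from an oscillating detector signal by
# counting mean-level crossings (two crossings per oscillation period).
def wingbeat_frequency(signal, sample_rate):
    mean = sum(signal) / len(signal)
    crossings = sum(
        1 for a, b in zip(signal, signal[1:])
        if (a - mean) * (b - mean) < 0
    )
    return crossings * sample_rate / (2 * len(signal))

def classify(freq):
    # Hypothetical frequency bands; the real system matches against a
    # database of flight behavior, size, shape and wing-beat frequency.
    if 400 <= freq <= 700:
        return "Anopheles mosquito (likely)"
    if 150 <= freq <= 250:
        return "psyllid-sized insect (likely)"
    return "unknown"

fs = 8000                                                        # samples/s
sig = [math.sin(2 * math.pi * 550 * t / fs) for t in range(fs)]  # 550 Hz tone
print(classify(wingbeat_frequency(sig, fs)))
```

A production system would use spectral analysis rather than crossing counts, but the principle is the same: the oscillation rate of the returned light is a fingerprint of the insect species.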

“Used as a virtual fence, the ‘Photonic Fence’ can be deployed as a perimeter defense around villages, hospitals, crop fields, etc. Over time, the population of target insects inside the protected region would be decreased to the point of collapse,” he explained.

The researchers believe the ‘Photonic Fence’ presents a potential new way to monitor and control insects. A particularly useful case would involve a small number of insects moving into a sensitive area in which current abatement techniques are not effective.

“A few good examples are organic farms and greenhouses,” Johanson said. “These areas are difficult to control for pests using organic means. A ‘Photonic Fence’ installation, which is inherently organic, could not only reduce the population of yield-reducing pests, but also inform the grower what kind of pests are present and when. Armed with this information, the grower could choose to use traditional insecticides in a precise, pin-pointed manner to stop the flying pests.”

The researchers have established a database of insects that they have experimented on so far, and the database will continue to grow as the researchers test the system in new environments.

“Wherever a ‘Photonic Fence’ installation is deployed, we have situational awareness about the types of insects that are present, the insect density, and the time of day when insects are more prevalent, all on an up-to-the-second basis,” Johanson explained. “Now imagine hundreds of thousands of ‘Photonic Fence’ units reporting back to a central data aggregation system. Using this information on a regional, state or national level, we can make decisions about where and when to concentrate our pest-control efforts, whether they should be photonic or traditional. We will now be able to understand, for the first time on this resolution, the trends in insect behavior and the impact that our pest control efforts are having.”

The researchers said the study is part of a larger project supported by the “Global Good Fund” to assess the impact on a number of disease-carrying mosquitoes and agricultural pests. As the Global Good Fund is primarily focused on using technology to improve people’s lives in the developing world, the researchers’ goal is to eventually make the new system deployable to the developing world for malaria eradication. In order to make the technology economically viable there, they are also exploring developed-world applications such as those in the agriculture and hospitality markets. Led by Arty Makagon, the commercialization team aims to deploy in one or more of these markets in the coming years, scale up production and collect product robustness and effectiveness data prior to unleashing the technology in Africa and beyond.

Paper: 3ric Johanson, “Laser system for identification, tracking, and control of flying insects,” Opt. Express Vol. 24, Issue 11, pp. 11828-11838 (2016).

DOI: 10.1364/OE.24.011828

 

http://indianexpress.com/article/technology/tech-news-technology/google-ai-bots-custom-chip-io-2016-2814080/

Google is building its own chip called Tensor Processing Unit for AI

Google has designed its very own custom chip for deep neural networks, calling it the TPU, or Tensor Processing Unit.

Google’s Tensor Processing Unit is delivering an order of magnitude better-optimized performance per watt for machine learning (Source: Google)

Google has designed its very own custom chip for deep neural networks, the technology behind its artificial intelligence based internet services.

At Google I/O, CEO Sundar Pichai spoke about the company’s self-designed ASIC, or application-specific integrated circuit, built to drive deep neural networks. Google uses AI to identify objects and faces in photos, and the same AI processes ‘Google Now’ voice queries.

Google is in fact using the same technology to deliver a smart home speaker with Google Assistant built in. Google calls the chip the Tensor Processing Unit, after the company’s TensorFlow software engine.

Google, like Microsoft and Facebook, is betting heavily on AI for its future. At I/O 2016, Google employees spoke at length about the company’s AI technology and machine learning. “We’ve been running TPUs inside our data centers for more than a year, and have found them to deliver an order of magnitude better-optimized performance per watt for machine learning,” Google said in a blog post. Google developing its very own chip could mean trouble for traditional chipmakers like Intel, which are pinning their hopes on AI as their next growth segment.

http://gadgets.ndtv.com/wearables/news/google-unveils-project-soli-20-radar-based-gesture-tracking-for-wearables-iot-840595

Google Unveils Project Soli 2.0, Radar-Based Gesture Tracking for Wearables, IoT

Google’s annual developer conference saw some huge announcements during the three-day extravaganza, including Android N improvements, the Daydream VR ecosystem, Android app support on Chrome OS, Allo and Duo, Google Home, Google Assistant, Android TV, Android Auto, and more. The search giant, however, had more in store for the last day of I/O 2016, where Google’s ATAP (Advanced Technologies and Projects) group took the stage and showed off improvements to its gesture-tracking technology, Project Soli, and its touch-sensitive fabric technology, Project Jacquard.

To recall, Google had briefly touched upon Project Soli at last year’s I/O conference but had made no further announcements about the project since then. The company showed off the new Soli chip, which incorporates the sensor and antenna array into an ultra-compact 8x10mm package.

Dan Kaufman, director of Google’s ATAP team, revealed during the keynote some improvements the new Project Soli chip brings, including reduced power consumption (the new chip draws roughly 5 percent of the power of the original) and lower computational requirements. These improvements finally make it usable in consumer-facing products, though the company has not revealed a launch timeline.

Project Soli is a new way of enabling touchless interactions: a sensing technology that uses miniature radar to detect touchless gesture interactions. Devices where Project Soli can be embedded include wearables, phones, computers, cars and IoT devices. At I/O 2016, Google also showed new concept hardware made in collaboration with LG and Harman.

Google showed a concept smartwatch at I/O, built from an LG Watch Urbane fitted with a Soli chip, which worked entirely on gestures and could track movements as small as the waving of fingers. Ivan Poupyrev, technical program lead at Google’s ATAP, previewed some virtual tool gestures: from a distance, hand movements let users scroll through messages, while pulling the hand next to the watch enables closer interaction. The watch essentially responds to the proximity of the hands. Google also showed a JBL by Harman speaker featuring a Soli chip, which let hand gestures control the music.

“Soli is a purpose-built interaction sensor that uses radar for motion tracking of the human hand,” explained Poupyrev. He claimed that the Soli sensor can track sub-millimeter motion at high speeds with great accuracy.

“We’re creating a ubiquitous gesture interaction language that will allow people to control devices with a simple, universal set of gestures,” added Poupyrev.

“Imagine an invisible button between your thumb and index fingers – you can press it by tapping your fingers together. Or a Virtual Dial that you turn by rubbing thumb against index finger. Imagine grabbing and pulling a Virtual Slider in thin air. These are the kinds of interactions we are developing and imagining,” details Google’s Project Soli page.

Poupyrev said the Soli sensor technology works by emitting electromagnetic waves in a broad beam. When objects within the beam scatter the energy, some portion is reflected back toward the radar antenna.
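Some back-of-the-envelope radar arithmetic shows why such small motions are detectable. Soli operates in the 60 GHz band; the specific velocities and displacements below are my own illustrative numbers, not figures Google has published. A target moving at velocity v Doppler-shifts a carrier f0 by 2·v·f0/c, and a displacement d changes the round-trip echo phase by 4π·d/λ:

```python
import math

C = 3.0e8  # speed of light, m/s

def doppler_shift_hz(v_mps, f0_hz):
    # Round-trip Doppler shift for a target moving at v_mps toward the radar.
    return 2 * v_mps * f0_hz / C

def phase_change_rad(displacement_m, f0_hz):
    # Echo phase change for a target displaced by displacement_m
    # (factor 4*pi because the path length changes by twice the displacement).
    wavelength = C / f0_hz
    return 4 * math.pi * displacement_m / wavelength

f0 = 60e9                               # 60 GHz band used by Soli
print(doppler_shift_hz(0.05, f0))       # a slow 5 cm/s finger motion -> 20.0 Hz
print(phase_change_rad(0.001, f0))      # 1 mm displacement -> ~2.51 rad
```

At 60 GHz the wavelength is only 5 mm, so a 1 mm finger movement swings the echo phase by a large fraction of a full cycle, which is what makes sub-millimeter tracking plausible.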

Kaufman also revealed at the Google I/O ATAP session that the company had shipped Soli Alpha Dev Kits to select developers last year, and confirmed that announcements about the next Dev Kit application process can be expected in fall 2016. For more technical details, you can watch the Google I/O session on ATAP.

As for Project Jacquard, Google and Levi’s unveiled the first consumer-facing product with the technology at I/O 2016: the Levi’s Commuter Trucker Jacket. It will be available in a limited beta this year, before launching to the general public next year.
