Graphene can now be printed on materials like paper and plastic, enabling ubiquitous applications such as RFID tags, wireless sensors, and wearable electronics.
May 24, 2015
The first low-cost, flexible, environmentally friendly radio-frequency antenna using compressed graphene ink has been printed by researchers from the University of Manchester and BGT Materials Limited. Potential uses of the new process include radio-frequency identification (RFID) tags, wireless sensors, wearable electronics, and printing on materials like paper and plastic.
Commercial RFID tags are currently made from metals such as silver (very expensive) or aluminum and copper (both prone to oxidation).
Graphene conductive ink avoids those problems and can be used to print circuits and other electronic components, but the ink requires one or more binders (polymeric, epoxy, siloxane, or resin materials) to form a continuous (unbroken) conductive film. The problem is that these binders are insulators, so they reduce the conductivity of the printed connection. Also, curing the binder requires annealing, a high-heat process (similar to how soldering with a resin binder works), which would destroy materials like paper or plastic.
Printing graphene ink on paper
So the researchers developed a new process:
1. Graphene flakes are mixed with a solvent, and the resulting ink is deposited on the desired surface (paper, in the case of the experiment) and dried. (This is shown in step a in the illustration above.)
2. The flakes are compressed (step b above) with a roller (similar to using a roller to compress asphalt when making a road). That step increases the graphene’s conductivity by more than 50 times.
The researchers tested their compressed graphene laminate by printing a graphene antenna onto a piece of paper. The material radiated radio-frequency power effectively, said Xianjun Huang, the first author of the paper and a PhD candidate in the Microwave and Communications Group in the School of Electrical and Electronic Engineering.
The researchers plan to further develop graphene-enabled RFID tags, as well as sensors and wearable electronics. They present their results in the journal Applied Physics Letters from AIP Publishing.
Abstract of Binder-free highly conductive graphene laminate for low cost printed radio frequency applications
In this paper we demonstrate realization of printable RFID antenna by low temperature processing of graphene ink. The required ultra-low resistance is achieved by rolling compression of binder-free graphene laminate. With compression, the conductivity of graphene laminate is increased by more than 50 times compared to that of as-deposited one. Graphene laminate with conductivity of 4.3×10⁴ S/m and sheet resistance of 3.8 …
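As a rough sanity check on those numbers, sheet resistance, bulk conductivity, and laminate thickness are related by R_s = 1/(σ·t). The short Python sketch below assumes a laminate thickness of about 6 µm purely for illustration; the thickness is not stated in the excerpt above.

```python
# Back-of-the-envelope check relating bulk conductivity to sheet resistance.
# Assumption (not from the excerpt): laminate thickness of ~6 micrometers.

conductivity_s_per_m = 4.3e4   # S/m, as reported in the abstract
thickness_m = 6e-6             # m, assumed purely for illustration

sheet_resistance = 1.0 / (conductivity_s_per_m * thickness_m)  # ohms per square
print(f"Sheet resistance ≈ {sheet_resistance:.1f} ohm/sq")
```

With that assumed thickness, the computed value comes out near 3.9 ohms per square, in the same ballpark as the ~3.8 figure quoted in the abstract.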
Could lead to chips that combine optical and electronic components
May 24, 2015
In a new discovery that could lead to chips that combine optical and electronic components, researchers at MIT, IBM and two universities have found a way to combine light and sound with far lower losses than when such devices are made separately and then interconnected, they say.
Light’s interaction with graphene produces collective oscillations of electrons called plasmons, while light interacting with hexagonal boron nitride (hBN) produces phonons (quantized vibrations, or sound “particles”). Nicholas Fang of MIT and his colleagues found that when the materials are combined in a certain way, the plasmons and phonons can couple, producing a strong resonance.
The properties of the graphene allow precise control over light, while hBN provides very strong confinement and guidance of the light. Combining the two makes it possible to create new “metamaterials” that marry the advantages of both, the researchers say.
According to Phaedon Avouris, a researcher at IBM and co-author of the paper, “The combination of these two materials provides a unique system that allows the manipulation of optical processes.”
The two materials are structurally similar — both composed of hexagonal arrays of atoms that form two-dimensional sheets — but they each interact with light quite differently. The researchers found that these interactions can be complementary, and can couple in ways that afford a great deal of control over the behavior of light.
The hybrid material blocks light when a particular voltage is applied to the graphene layer. When a different voltage is applied, a special kind of emission and propagation, called “hyperbolicity,” occurs. This phenomenon has not been seen before in optical systems, Fang says.
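For context, “hyperbolicity” refers to the shape of the isofrequency surface in a strongly anisotropic medium. The relation below is standard textbook background for a uniaxial material, not a result taken from the paper itself:

```latex
% Isofrequency relation for extraordinary waves in a uniaxial medium,
% with in-plane permittivity eps_x and out-of-plane permittivity eps_z.
% Textbook background for the term "hyperbolicity," not a paper result.
\[
  \frac{k_x^2 + k_y^2}{\varepsilon_z} + \frac{k_z^2}{\varepsilon_x} = \frac{\omega^2}{c^2}
\]
```

When the two permittivity components have opposite signs, this surface becomes a hyperboloid rather than an ellipsoid, so waves with very large wavevectors can still propagate, which is what allows the strong confinement described above.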
Nanoscale optical waveguides
The result: an extremely thin sheet of material can interact strongly with light, allowing beams to be guided, funneled, and controlled by different voltages applied to the sheet.
The combined materials create a tuned system that can be adjusted to allow light only of certain specific wavelengths or directions to propagate, they say.
These properties should make it possible, Fang says, to create tiny optical waveguides, about 20 nanometers in size, which is the same size range as the smallest features that can now be produced in microchips.
“Our work paves the way for using 2-D material heterostructures for engineering new optical properties on demand,” says co-author Tony Low, a researcher at IBM and the University of Minnesota.
Single-molecule optical resolution
Another potential application, Fang says, comes from the ability to switch a light beam on and off at the material’s surface; because the material naturally works at near-infrared wavelengths, this could enable new avenues for infrared spectroscopy, he says. “It could even enable single-molecule resolution,” Fang says, of biomolecules placed on the hybrid material’s surface.
Sheng Shen, an assistant professor of mechanical engineering at Carnegie Mellon University who was not involved in this research, says, “This work represents significant progress on understanding tunable interactions of light in graphene-hBN.” The work is “pretty critical” for providing the understanding needed to develop optoelectronic or photonic devices based on graphene and hBN, he says, and “could provide direct theoretical guidance on designing such types of devices. … I am personally very excited about this novel theoretical work.”
The research team also included Kin Hung Fung of Hong Kong Polytechnic University. The work was supported by the National Science Foundation and the Air Force Office of Scientific Research.
Abstract of Tunable Light–Matter Interaction and the Role of Hyperbolicity in Graphene–hBN System
Hexagonal boron nitride (hBN) is a natural hyperbolic material, which can also accommodate highly dispersive surface phonon-polariton modes. In this paper, we examine theoretically the mid-infrared optical properties of graphene–hBN heterostructures derived from their coupled plasmon–phonon modes. We find that the graphene plasmon couples differently with the phonons of the two Reststrahlen bands, owing to their different hyperbolicity. This also leads to distinctively different interaction between an external quantum emitter and the plasmon–phonon modes in the two bands, leading to substantial modification of its spectrum. The coupling to graphene plasmons allows for additional gate tunability in the Purcell factor and narrow dips in its emission spectra.
The Apple Store app has been updated. Now, version 3.3 offers users an additional layer of security—two-step authentication—as well as support for Touch ID.
Apple updated the Apple Store app for iOS on Thursday, May 21, to version 3.3, which brings the much-awaited support for Touch ID, enabling users to check receipts and make reservations at Apple Stores, and adds two-step verification.
The Touch ID option can be found in the Account section. It can be used to view order history and access EasyPay receipts.
The two-step verification feature is enabled via the My Apple ID page, under the “Password and Security” tab of the “Edit your Apple ID” menu. Once the feature is activated, users must input their Apple ID password along with a four-digit authentication code sent to a trusted device.
Prior to the current update, users were required to key in their Apple ID password manually to access these in-app features. Now, the app can use Touch ID authentication instead of the password. Before the update, Touch ID in the app was restricted to purchases made using Apple Pay.
Activating two-step authentication brings tighter security to purchases made through the app.
The new features version 3.3 brings to the iPhone are the following:
–Users can tap the heart icon to mark their favorite Apple Watch models. The choices are saved in the user’s account for comparison.
–A single touch enables easy check-out with Apple Pay.
–Customers in the U.S. can check their iPhone upgrade pricing and buy a new iPhone with ease.
–Users can purchase accessories easily using EasyPay at an Apple Retail Store. This feature, however, is currently available only in a few countries.
–U.S. customers can get access to store services and help on their smartphone’s lock screen based on their location in the Apple Retail Store.
The Apple Store app version 3.3 takes up 22.3MB and is available as a free download.
An increasing number of modern open-plan offices employ sound masking systems that raise the background sound of a room so that speech is rendered unintelligible beyond a certain distance.
Playing natural sounds such as flowing water in offices could lift workers’ moods and enhance productivity, a new study says. An increasing number of modern open-plan offices employ sound masking systems that raise the background sound of a room so that speech is rendered unintelligible beyond a certain distance and distractions are less annoying.
“If you are close to someone, you can understand them. But once you move farther away, their speech is obscured by the masking signal,” explained Jonas Braasch, acoustician and musicologist at the Rensselaer Polytechnic Institute in New York.
According to the study, sound masking can also improve cognitive abilities in addition to providing speech privacy. The natural sound used in the experiment was designed to mimic the sound of flowing water in a mountain stream.
“The mountain stream sound possessed enough randomness that it did not become a distraction. This is a key attribute of a successful masking signal,” added co-author Alana DeLoach.
Sound masking systems are custom designed for each office space by consultants and are typically installed as speaker arrays discreetly tucked away in the ceiling. For the past 40 years, the standard masking signal has been random, steady-state electronic noise, also known as “white noise.”
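To make the distinction concrete, here is a minimal Python sketch that generates a few seconds of a conventional white-noise masker alongside a crude low-frequency-weighted variant as a stand-in for a more “natural” signal. This is an illustration only; neither signal is the mountain-stream stimulus the researchers actually used.

```python
import numpy as np

# Illustrative masking signals; not the study's actual stimuli.
fs = 44100                       # sample rate (Hz)
duration = 2                     # seconds
n = fs * duration

rng = np.random.default_rng(0)
white = rng.standard_normal(n)   # conventional steady-state "white noise" masker

# Crude "natural" variant: emphasize low frequencies with a simple
# first-order low-pass filter (a very rough stand-in for a stream recording).
alpha = 0.05
water_like = np.empty(n)
water_like[0] = white[0]
for i in range(1, n):
    water_like[i] = alpha * white[i] + (1 - alpha) * water_like[i - 1]

# Normalize both to the same RMS level so they mask speech comparably.
for sig in (white, water_like):
    sig /= np.sqrt(np.mean(sig ** 2))
```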
Using natural sounds as a masking signal could have benefits beyond the office environment. In the study, workers listening to natural sounds were more productive and in better moods overall than workers exposed to traditional masking signals.
“You could also use it to improve the moods of hospital patients, who are stuck in their rooms for days or weeks on end,” Braasch noted. The authors were scheduled to present their findings at the 169th meeting of the Acoustical Society of America in Pittsburgh.
Judge: Anyone can copy iPhone design because it’s functional
A judge hearing Samsung’s appeal in the case over copying the iPhone’s design has ruled that because the design is functional, it is not protected. This likely means not only that Samsung has not infringed on Apple’s design, but that others can copy it in the future as well.
The appeal was filed by Samsung, which is seeking to overturn damages it was earlier ordered to pay.
Apple has released Watch OS version 1.0.1, which adds features and security fixes. Apple also says that performance has been improved in the new version. Reviews of the Apple Watch have regularly complained about slow app launch times.
The update comes shortly after the watch started shipping. It looks like the first thing Apple Watch buyers must do when they finally receive their purchase is update the OS.
Report: Upcoming Home app to coordinate HomeKit accessories
Apple is reportedly going to announce a Home app for iOS that will pull all HomeKit functions into one app, serving as a hub for the smart-home (HomeKit) accessories expected to be released soon.
The firm is not saying a word, so take this with a grain of salt.
Apple Watch to be available in retail stores in June
Demand has been strong for the Apple Watch, which so far has been available for purchase only online. That will change in a few weeks, as Tim Cook has stated the watch will be in Apple retail stores globally in June.
Cook told employees in China that the Apple Watch will be available for purchase worldwide in June.
Since Apple announced its HomeKit smart home initiative last year, it’s been mostly quiet about just how iPhones and other Apple gadgets will wrangle those connected devices. Now, however, the company may have a fancy new app in the works—complete with virtual rooms, a clever and apparently easy-to-grasp metaphor for running a smart home.
Apple’s approach, according to a 9to5Mac report, will be to launch a new “Home” app for controlling smart-home gadgets—think smart locks, sensors, garage openers, thermostats, lights, security cameras and other connected appliances. The Home app will sort gadgets by function and location into a visual arrangement of virtual rooms.
The goal is to simplify the otherwise bewildering task of finding, adding and controlling smart devices and appliances from Apple and other companies.
Smart homes are quite likely to be collections of disparate gadgets from various manufacturers that need to identify and share information with each other as well as with a controlling “hub.” Giving users an intuitive way to grasp what’s where and who’s doing what is something this industry badly needs.
Here’s what Apple’s take supposedly looks like.
The Kit And Kaboodle
When it comes to smart home systems, interfaces matter. Samsung’s still relatively new SmartThings division has a powerful, though complex, mobile app that it has been trying to simplify for users. Revolv, now owned by Google’s Nest division, used to offer an app with simple setup and management features, using graphical representations to symbolize connections to devices.
Apple’s version might be even easier. The app, which supposedly sports a house icon against a dark yellow background, reportedly connects to a user’s Apple TV, using that as a hub or stationary command center for the system. There’s still a big question mark over how well it works, though—the Apple blog says that in its current form, it has only basic, limited features, and so far, only Apple employees have been allowed to take it for a spin.
The new “Home” app—or whatever it will be officially called when it debuts (possibly with iOS 9)—seems like just the sort of thing Apple would want to spotlight at its Worldwide Developers Conference keynote in June. But that’s only if the app’s ready for public viewing, which isn’t at all certain yet.
As 9to5Mac notes, the app might be too basic and unrefined at this point. Even if it’s not, it might be intended for use solely within Apple’s walls as a testing or development tool.
If the latter is true, then people might manage their “Apple smart home” using their Siri voice assistant to control third-party apps. In essence, that would let people talk to their iPhones, Apple Watches and likely Apple TVs to remote control their home appliances.
Bring It On Home
Either way, Apple will have to pick a path and fairly soon. The tech giant announced its HomeKit framework last year, and it’s been losing steam in this area ever since. Rumors of more delays prompted an uncharacteristic Apple response in which it publicly promised that its first gadgets to support HomeKit will debut next month.
When they arrive, users will have to have something with which to manage them. Otherwise, it might start looking like Apple bit off more than it could chew in the complicated smart-home arena.
Simplicity is something this area sorely needs, if smart homes are ever going to attract broad interest. Of course, it has to be good, too. Launching an “Apple Maps bad” HomeKit initiative could ding the whole industry. It’s not hard to imagine even Apple loyalists (who are legion) walking away from a crummy experience and thinking, “If even Apple can’t make this work, then no one can.”
If Apple does launch the new Home app soon—and if it works—its new metaphor could go a long way toward helping newcomers understand just why they’d want to equip their homes with connected, smart gadgets. In that way, you can imagine the smart-home industry at large holding its breath as WWDC opens. Next month, we’ll know if it’s ready to exhale.
New neuroprosthetic implant captures intent to move, not the movement directly
May 22, 2015
Paralyzed from the neck down, Erik G. Sorto now can smoothly move a robotic arm just by thinking about it, thanks to a clinical collaboration between Caltech, Keck Medicine of USC, and Rancho Los Amigos National Rehabilitation Center.
Previous neural prosthetic devices, such as BrainGate, were implanted in the motor cortex, resulting in delayed, jerky movements. The new device was implanted in the posterior parietal cortex (PPC), a part of the brain that controls the intent to move rather than the movement directly.
That makes Sorto, who has been paralyzed for over 10 years, the first quadriplegic person in the world to perform a fluid hand-shaking gesture or play “rock, paper, scissors,” using a robotic arm.
In April 2013, Keck Medicine of USC surgeons implanted a pair of small electrode arrays in two parts of the posterior parietal cortex, one that controls reach and another that controls grasp.
Each 4-by-4 millimeter array contains 96 active electrodes, each of which records the activity of single neurons in the PPC. The arrays are connected by a cable to a system of computers that process the signals to decode the brain’s intent and control output devices, such as a computer cursor and a robotic arm.
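The article does not detail the decoding algorithm itself, but the basic idea of mapping the firing rates of a recorded neural population to an intended movement can be illustrated with a simple linear decoder fit by least squares. The Python sketch below is written under that assumption and is not the trial’s actual signal-processing pipeline; only the array sizes (two arrays of 96 electrodes) come from the figures above, and all the data are simulated.

```python
import numpy as np

# Toy linear decoder: firing rates from 2 x 96 electrodes -> intended 3D velocity.
# Illustration only; the clinical trial's actual decoding pipeline is not described here.

rng = np.random.default_rng(0)
n_channels = 2 * 96          # two implanted arrays, 96 electrodes each
n_samples = 1000             # calibration samples (e.g., imagined reaches)

# Simulated calibration data: firing rates X and the intended velocities y
# the subject imagines during each sample window.
true_weights = rng.standard_normal((n_channels, 3))
X = rng.poisson(lam=5.0, size=(n_samples, n_channels)).astype(float)
y = X @ true_weights + 0.1 * rng.standard_normal((n_samples, 3))

# Fit decoder weights by least squares.
W, *_ = np.linalg.lstsq(X, y, rcond=None)

# At run time, new firing rates are mapped to a velocity command for the arm or cursor.
new_rates = rng.poisson(lam=5.0, size=(1, n_channels)).astype(float)
velocity_command = new_rates @ W
print(velocity_command)
```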
Sorto was able to move the robotic arm with his thoughts almost immediately, but it was only after weeks of imagined practice that he refined his control of the arm.
Now, Sorto is able to execute advanced tasks with his mind, such as controlling a computer cursor; drinking a beverage; making a hand-shaking gesture; and performing various tasks with the robotic arm.
Designed to test the safety and effectiveness of this new approach, the clinical trial was led by principal investigator Richard Andersen, the James G. Boswell Professor of Neuroscience at Caltech, neurosurgeon Charles Y. Liu, professor of neurological surgery and neurology at the Keck School of Medicine of USC and biomedical engineering at USC, and neurologist Mindy Aisen, chief medical officer at Rancho Los Amigos.
Aisen, also a clinical professor of neurology at the Keck School of Medicine of USC, says that advancements in prosthetics like these hold promise for the future of patient rehabilitation.
“This research is relevant to the role of robotics and brain-machine interfaces as assistive devices, but also speaks to the ability of the brain to learn to function in new ways,” Aisen said. “We have created a unique environment that can seamlessly bring together rehabilitation, medicine, and science as exemplified in this study.”
Sorto has signed on to continue working on the project for a third year. He says the study has inspired him to continue his education and pursue a master’s degree in social work.
The results of the clinical trial appear in the May 22, 2015, edition of the journal Science. The implanted device and signal processors used in the clinical trial were the NeuroPort Array and NeuroPort Bio-potential Signal Processors developed by Blackrock Microsystems in Salt Lake City, Utah. The robotic arm used in the trial was the Modular Prosthetic Limb, developed at the Applied Physics Laboratory at Johns Hopkins.
This trial was funded by the National Institutes of Health, the Boswell Foundation, the Department of Defense, and the USC Neurorestoration Center.
Caltech | Next Generation of Neuroprosthetics: Science Explained — R. Andersen May 2015
Keck Medicine of USC | Next Generation of Neuroprosthetics: Erik’s Story
Abstract of Decoding motor imagery from the posterior parietal cortex of a tetraplegic human
Nonhuman primate and human studies have suggested that populations of neurons in the posterior parietal cortex (PPC) may represent high-level aspects of action planning that can be used to control external devices as part of a brain-machine interface. However, there is no direct neuron-recording evidence that human PPC is involved in action planning, and the suitability of these signals for neuroprosthetic control has not been tested. We recorded neural population activity with arrays of microelectrodes implanted in the PPC of a tetraplegic subject. Motor imagery could be decoded from these neural populations, including imagined goals, trajectories, and types of movement. These findings indicate that the PPC of humans represents high-level, cognitive aspects of action and that the PPC can be a rich source for cognitive control signals for neural prosthetics that assist paralyzed patients.
UC Berkeley researchers’ new algorithms enable robots to learn motor tasks by trial and error
May 22, 2015
UC Berkeley researchers have developed new algorithms that enable robots to learn motor tasks by trial and error, using a process that more closely approximates the way humans learn.
They demonstrated their technique, a type of reinforcement learning, by having a robot complete various tasks — putting a clothes hanger on a rack, assembling a toy plane, screwing a cap on a water bottle, and more — without pre-programmed details about its surroundings.
A new AI approach
“What we’re reporting on here is a new approach to empowering a robot to learn,” said Professor Pieter Abbeel of UC Berkeley’s Department of Electrical Engineering and Computer Sciences. “The key is that when a robot is faced with something new, we won’t have to reprogram it. The exact same software, which encodes how the robot can learn, was used to allow the robot to learn all the different tasks we gave it.”
The work is part of a new People and Robots Initiative at UC’s Center for Information Technology Research in the Interest of Society (CITRIS). The new multi-campus, multidisciplinary research initiative seeks to keep the advances in artificial intelligence, robotics and automation aligned to human needs.
“Most robotic applications are in controlled environments where objects are in predictable positions,” said UC Berkeley faculty member Trevor Darrell, director of the Berkeley Vision and Learning Center. “The challenge of putting robots into real-life settings, like homes or offices, is that those environments are constantly changing. The robot must be able to perceive and adapt to its surroundings.”
Neural-inspired learning
Conventional, but impractical, approaches to helping a robot make its way through a 3D world include pre-programming it to handle the vast range of possible scenarios or creating simulated environments within which the robot operates.
Instead, the UC Berkeley researchers turned to a new branch of artificial intelligence known as deep learning, which is loosely inspired by the neural circuitry of the human brain when it perceives and interacts with the world.
“For all our versatility, humans are not born with a repertoire of behaviors that can be deployed like a Swiss army knife, and we do not need to be programmed,” said postdoctoral researcher Sergey Levine. “Instead, we learn new skills over the course of our life from experience and from other humans. This learning process is so deeply rooted in our nervous system, that we cannot even communicate to another person precisely how the resulting skill should be executed. We can at best hope to offer pointers and guidance as they learn it on their own.”
In the world of artificial intelligence, deep learning programs create “neural nets” in which layers of artificial neurons process overlapping raw sensory data, whether it be sound waves or image pixels. This helps the robot recognize patterns and categories among the data it is receiving. People who use Siri on their iPhones, Google’s speech-to-text program or Google Street View might already have benefited from the significant advances deep learning has provided in speech and vision recognition.
Applying deep reinforcement learning to motor tasks in unstructured 3D environments has been far more challenging, however, since the task goes beyond the passive recognition of images and sounds.
BRETT masters human tasks on its own
In the experiments, the UC Berkeley researchers worked with a Willow Garage Personal Robot 2 (PR2), which they nicknamed BRETT, or Berkeley Robot for the Elimination of Tedious Tasks.
They presented BRETT with a series of motor tasks, such as placing blocks into matching openings or stacking Lego blocks. The algorithm controlling BRETT’s learning included a reward function that provided a score based upon how well the robot was doing with the task.
BRETT takes in the scene, including the position of its own arms and hands, as viewed by the camera. The algorithm provides real-time feedback via the score based upon the robot’s movements. Movements that bring the robot closer to completing the task will score higher than those that do not. The score feeds back through the neural net, so the robot can learn which movements are better for the task at hand.
This end-to-end training process underlies the robot’s ability to learn on its own. As the PR2 moves its joints and manipulates objects, the algorithm calculates good values for the 92,000 parameters of the neural net it needs to learn.
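The article describes the learning loop only at a high level: movements are scored by a reward function, and the score feeds back to adjust the network’s parameters. A minimal REINFORCE-style policy-gradient loop captures that feedback idea; the sketch below is a generic illustration, not the Berkeley group’s actual algorithm, reward function, or network.

```python
import numpy as np

# Minimal REINFORCE-style sketch of "the score feeds back through the neural net."
# Generic illustration of reward-driven parameter updates; not BRETT's algorithm.

rng = np.random.default_rng(0)
obs_dim, act_dim = 16, 4                              # toy observation / command sizes
W = 0.01 * rng.standard_normal((obs_dim, act_dim))    # linear "policy" parameters
learning_rate, noise_std = 1e-3, 0.1
baseline = 0.0                                        # running average of rewards

def reward(obs, act):
    # Hypothetical reward: less negative when the action tracks a target derived
    # from the observation; stands in for "closer to completing the task."
    target = obs[:act_dim]
    return -np.sum((act - target) ** 2)

for episode in range(500):
    obs = rng.standard_normal(obs_dim)
    mean_act = obs @ W                                # policy's mean action
    act = mean_act + noise_std * rng.standard_normal(act_dim)   # exploration noise
    r = reward(obs, act)

    # Policy-gradient update: nudge parameters toward actions that scored
    # better than the running baseline, away from those that scored worse.
    advantage = r - baseline
    baseline += 0.05 * (r - baseline)
    grad_log_prob = np.outer(obs, act - mean_act) / noise_std ** 2
    W += learning_rate * advantage * grad_log_prob
```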
With this approach, when given the relevant coordinates for the beginning and end of the task, the PR2 could master a typical assignment in about 10 minutes. When the robot is not given the location for the objects in the scene and needs to learn vision and control together, the learning process takes about three hours.
Abbeel says the field will likely see significant improvements as the ability to process vast amounts of data improves.
“With more data, you can start learning more complex things,” he said. “We still have a long way to go before our robots can learn to clean a house or sort laundry, but our initial results indicate that these kinds of deep learning techniques can have a transformative effect in terms of enabling robots to learn complex tasks entirely from scratch. In the next five to 10 years, we may see significant advances in robot learning capabilities through this line of work.”
The latest developments will be presented on Thursday, May 28, in Seattle at the International Conference on Robotics and Automation (ICRA). The Defense Advanced Research Projects Agency, Office of Naval Research, U.S. Army Research Laboratory and National Science Foundation helped support this research.
UC Berkeley Campus Life | BRETT the robot learns to put things together on his own
Stem cell scientists led by Mick Bhatia at McMaster University have successfully converted adult human blood cells into neural cells. The team directly converted adult human blood cells to both central nervous system (brain and spinal cord) neurons and neurons in the peripheral nervous system (the rest of the body) that are responsible for pain, temperature and itch perception. This means that how a person’s nervous system cells react and respond to stimuli can be determined from a blood sample.
“Now we can take blood samples and make the main cell types of neurological systems – the central nervous system and the peripheral nervous system – in a dish that is specialised for each patient. Nobody has ever done this with adult blood. Ever,” explained Bhatia, director of the McMaster Stem Cell and Cancer Research Institute.
Bhatia’s team successfully tested their process using fresh blood as well as frozen blood. With this new approach, scientists can take a patient’s blood sample and, in short order, produce one million of the sensory neurons that make up the peripheral nerves. “We can also make central nervous system cells, as the blood to neural conversion technology we developed creates neural stem cells during the process of conversion,” Bhatia noted.
The revolutionary, patented direct conversion technology has “broad and immediate applications”. It allows researchers to start asking questions about understanding disease and improving treatments, such as: Why is it that certain people feel pain versus numbness? Is this something genetic? Can the neuropathy that diabetic patients experience be mimicked in a dish? It also paves the way for the discovery of new pain drugs that do not just numb the perception of pain.
In the future, the process may have prognostic potential, explained Bhatia, in that one might be able to look at a patient with Type 2 diabetes and predict whether they will experience neuropathy by running tests in the lab using their own neural cells derived from their blood sample.
“This bench to bedside research is very exciting and will have a major impact on the management of neurological diseases, particularly neuropathic pain,” added Akbar Panju, medical director of the Michael G. DeGroote Institute for Pain Research and Care. This research will help scientists provide personalized medical therapy for patients suffering with neuropathic pain, the authors concluded.
The breakthrough was detailed in the journal Cell Reports. — IANS
A new research centre focusing on brain aging is set to launch in the fall, with funding from the federal and Ontario governments, as well as private donors.
The Canadian Centre for Aging and Brain Health Innovation at Baycrest Health Sciences in Toronto will combine brain research, clinical and educational programs in one national hub.
The federal government is committing up to $42 million over five years, Ontario is earmarking $23.5 million over the same time, while $25 million will come from the Baycrest Foundation and another $33 million from other donors.
Baycrest, which specializes in geriatric residences and healthcare, says the new centre will bring together science, healthcare and industry partners dedicated to brain health and senior care.
Federal Finance Minister Joe Oliver says age-related cognitive impairment is a growing issue across the country and he expects the centre’s “cutting-edge research and world-leading innovation” will make life better for seniors.
Ontario Premier Kathleen Wynne says brain disorders “take an enormous toll on a personal level” as well as on the economy.