Disabled people remotely pilot robot in another country with their thoughts

June 24, 2015

(Left) Patient at a clinic operating the BCI. (Right) Telepresence robot equipped with infrared sensors for obstacle detection and Skype video connection on top. (credit: Robert Leeb et al./Proceedings of the IEEE)

Using a telepresence system developed at the École Polytechnique Fédérale de Lausanne (EPFL), 19 people — including nine quadriplegics — were able to remotely control a robot located in an EPFL university lab in Switzerland.

A team of researchers at the Defitech Foundation Chair in Brain-Machine Interface (CNBI), headed by professor José del R. Millán, developed a brain-computer interface (BCI) system, using electroencephalography (EEG) signals.

This multi-year research project was intended to give a measure of independence to paralyzed people. The research involved 19 subjects (nine disabled and ten healthy) located in Italy, Germany and Switzerland. For several weeks, each of the subjects put on a BCI helmet and instructed the robot to move, transmitting their instructions in real time via Internet from their home country.
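
The article does not describe the decoder, but EEG motor-imagery BCIs of this kind commonly translate band-power changes (such as suppression of the 8–12 Hz mu rhythm over motor cortex) into movement commands. A minimal sketch of that idea, in which the signals, channel roles, and decision rule are all hypothetical:

```python
import numpy as np

def band_power(signal, fs, low, high):
    """Average power of `signal` within [low, high] Hz via the FFT."""
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    spectrum = np.abs(np.fft.rfft(signal)) ** 2
    mask = (freqs >= low) & (freqs <= high)
    return spectrum[mask].mean()

def decode_command(left_channel, right_channel, fs=256):
    """Toy decoder: imagined movement suppresses mu power contralaterally."""
    mu_left = band_power(left_channel, fs, 8, 12)
    mu_right = band_power(right_channel, fs, 8, 12)
    # Lower mu power over one hemisphere -> imagined movement of opposite hand.
    return "turn_right" if mu_left < mu_right else "turn_left"

# Synthetic one-second epochs: strong mu rhythm on the right electrode only.
fs = 256
t = np.arange(fs) / fs
rng = np.random.default_rng(0)
left = 0.1 * rng.standard_normal(fs)                 # mu suppressed
right = np.sin(2 * np.pi * 10 * t) + 0.1 * rng.standard_normal(fs)
print(decode_command(left, right, fs))               # -> turn_right
```

Real systems add spatial filtering, artifact rejection, and per-user calibration, but the core signal of interest is the same kind of band-limited power change.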

The robot, similar to the commercially available Beam system, was equipped with a video camera, screen, and wheels. It transmitted the image from its camera and displayed the face of the remote pilot, both via Skype.

Shared control between human and machine

“Each of the 9 subjects with disabilities managed to remotely control the robot with ease after less than 10 days of training,” said Millán. The robot was able to avoid obstacles by itself, even when not told to.

The tests revealed no difference in piloting ability between healthy and disabled subjects. In the second part of the tests, the disabled people with residual mobility were asked to pilot the robot with the movements they were still capable of doing, for example, by simply pressing the side of their head on buttons placed nearby. They piloted the robot just as if they were using their thoughts.
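
The shared-control behavior described above, in which the user issues coarse steering commands while the robot handles obstacle avoidance, can be sketched as a simple command-blending rule. This is an illustrative toy, not the EPFL controller; the safety margin and blending weights are invented:

```python
def shared_control(user_turn, obstacle_distances, safe_distance=0.5):
    """
    Blend a user's turn command with reactive obstacle avoidance.

    user_turn: desired turn rate in [-1, 1] (negative = left, positive = right)
    obstacle_distances: dict with 'left' and 'right' range readings in meters
    Returns the turn rate actually sent to the wheels.
    """
    left, right = obstacle_distances["left"], obstacle_distances["right"]
    # Repulsion grows as either side gets closer than the safety margin.
    repulsion = 0.0
    if left < safe_distance:
        repulsion += (safe_distance - left) / safe_distance   # push right
    if right < safe_distance:
        repulsion -= (safe_distance - right) / safe_distance  # push left
    # The closer the nearest obstacle, the less weight the user's command gets.
    min_dist = min(left, right)
    user_weight = min(1.0, min_dist / safe_distance)
    return user_weight * user_turn + (1 - user_weight) * repulsion

# User steers straight (0.0) but a wall looms on the left: robot veers right.
print(shared_control(0.0, {"left": 0.2, "right": 2.0}))  # positive turn rate
```

The same blending logic applies whether `user_turn` comes from a BCI classifier or from head-operated buttons, which is why the two input modes in the study could be compared directly.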

The positive results of this research bring to a close the European project called TOBI (Tools for Brain-Computer Interaction), which began in 2008. The research is discussed in the June special edition of Proceedings of the IEEE, dedicated to brain-machine interfaces.

École polytechnique fédérale de Lausanne (EPFL) | Telepresence robots can give people with disabilities the feeling of being home

Abstract of Towards Independence: A BCI Telepresence Robot for People With Severe Motor Disabilities

This paper presents an important step forward towards increasing the independence of people with severe motor disabilities, by using brain-computer interfaces to harness the power of the Internet of Things. We analyze the stability of brain signals as end-users with motor disabilities progress from performing simple standard on-screen training tasks to interacting with real devices in the real world. Furthermore, we demonstrate how the concept of shared control-which interprets the user’s commands in context-empowers users to perform rather complex tasks without a high workload. We present the results of nine end-users with motor disabilities who were able to complete navigation tasks with a telepresence robot successfully in a remote environment (in some cases in a different country) that they had never previously visited. Moreover, these end-users achieved similar levels of performance to a control group of 10 healthy users who were already familiar with the environment.

references:
Robert Leeb, Luca Tonin, Martin Rohm, Lorenzo Desideri, Tom Carlson, and José del R. Millán. Towards Independence: A BCI Telepresence Robot for People With Severe Motor Disabilities. Proceedings of the IEEE, Vol. 103, No. 6, 2015. DOI: 10.1109/JPROC.2015.2419736
related:
Disabled people pilot a robot remotely with their thoughts

http://www.kurzweilai.net/disabled-people-pilot-a-robot-remotely-with-their-thoughts

Is your thermostat spying on you? Cyberthreats and the Internet of Things

The Internet of Things (IoT) is beginning to have a huge impact on our daily lives, and it will grow by orders of magnitude. However, the multitude of IoT devices with zero, limited or outdated security could produce disastrous results. It will be a formidable task to secure every small IoT device or toy. Security solutions that watch device behavior and identify anomalies might be our only hope.

The IoT is on the rise…

The genesis of IoT goes back to the early ’90s when PARC chief scientist Mark Weiser came up with the vision of Ubiquitous Computing and Calm Technology. In this vision, computing becomes “your quiet, invisible servant” and disappears from conscious actions and the environment of the user.

In recent years, smartphones, smartwatches, smart home appliances and more have brought us closer to that vision. The Internet of Things stresses the technology-focused aspect of this vision — the idea of autonomous intercommunication of small Internet-enabled devices with the aim of learning and anticipating observable user behavior. While the IoT acronym has come into vogue only recently, the idea is nothing new inside enterprise networks: IoT includes printers, phones, alarm systems, thermostats, CCTV cameras and so on, and thus has been around for a while. As it hits the consumer market, however, the National Intelligence Council predicts that IoT will be a disruptive technology by 2025. And by 2020, there will be tens of billions of Internet-enabled devices generating global revenue of more than $8 trillion.

A good example of an IoT device that has hit the mass market is the Nest thermostat, which transforms a traditional user-operated thermostat into an intelligent sensing device that adjusts the temperature based on observed user behavior.

While IoT shows amazing promise, serious security implications accompany wide-scale IoT deployment. According to the National Security Telecommunications Advisory Committee (NSTAC), “There is a small — and rapidly closing — window to ensure that IoT is adopted in a way that maximizes security and minimizes risk. If the country fails to do so, it will be coping with the consequences for generations.”

For instance, the Nest can easily be hacked, though physical access is required. In fact, the first IoT botnets were discovered in 2013.

Is it safe?

Can IoT ever be safe? And what threats are lurking in IoT? Is it even possible to create an IoT infrastructure that is protected from intrusions and internal misuse? The multitude of IoT devices, all with unique software stacks, will expand traditional attack surfaces used by hackers in two directions.

First, there will be many more devices in the network and these devices are likely to be more vulnerable because they have limited, outdated, or no security software running on them. And besides being an easy entry point into the internal network, the limited software stack of miniature devices represents a perfect hiding place for malicious code and thus a permanent backdoor into the network.

Second, increased device density also generates more key assets that are susceptible to theft. More behavior data is tracked and visible through miniature devices, such as presence and absence patterns. With IoT, it’s not so much about credit card numbers but rather stealing rich data records and behavior information.

This is aligned with major privacy issues that arise when dealing with IoT. An infinite number of small devices can observe the user from every possible angle, and almost any information can be derived from this.

For example, as mentioned above, a Nest hack would not only enable manipulation of room-temperature settings; more importantly, Nest derives patterns of the inhabitants’ future absences, which could easily be exploited for criminal activity such as burglary.

In addition to external threats, as we have seen with BYOD policies, increased device density creates opportunities for data theft and even sabotage from the inside. Disgruntled employees who turn against their employers can destroy or steal data they have direct access to, download confidential data about user behavior, and manipulate the IoT infrastructure to do harm.

Considerations for practicing safe IoT

Practicing safe IoT is not a panacea and there are no guarantees it will protect everything. It might be unrealistic to use classic on-device security (think antivirus software on your laptop) to protect a multitude of miniature devices. There are only a few other options.

One could run the whole IoT network in a controlled and secured environment, with no or very limited access to the Internet — this is effectively the best practice for networked Industrial Control Systems (ICS) such as nuclear power plants and factory floors. But this approach won’t work with most IoT devices because they require access to data from the Internet to deliver their designed value. It also runs the risk of hostile intrusions that can cause serious damage.

Consequently, most security specialists opt for a behavior monitoring solution. Instead of securing the network and any IoT device on it by restrictive policies, the communication paths are kept open, but all behavior is closely monitored. “The system watches all devices, learns what’s the norm, and flags abnormal behavior,” noted Symantec’s Jeffrey Green in a PC Magazine article.

Recent advances in data science will enable the construction of narratives from behavior anomalies and indicators and thus keep false positives under control. Let’s hope that this will help create the secure and privacy-preserving network infrastructures while still allowing the promise of IoT to flourish.
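
The learn-the-norm approach Green describes can be sketched as per-device baseline statistics with a sigma threshold. This is a deliberately simplified illustration; real products build far richer behavioral models than a single traffic metric:

```python
import statistics

class DeviceMonitor:
    """Learn each device's normal traffic volume, then flag outliers."""

    def __init__(self, threshold_sigmas=3.0):
        self.history = {}              # device id -> observed byte counts
        self.threshold = threshold_sigmas

    def observe(self, device, byte_count):
        self.history.setdefault(device, []).append(byte_count)

    def is_anomalous(self, device, byte_count):
        baseline = self.history.get(device, [])
        if len(baseline) < 10:         # not enough data to judge yet
            return False
        mean = statistics.fmean(baseline)
        stdev = statistics.stdev(baseline) or 1e-9
        return abs(byte_count - mean) > self.threshold * stdev

monitor = DeviceMonitor()
for _ in range(30):                    # thermostat normally sends ~1 KB updates
    monitor.observe("thermostat", 1000)
monitor.observe("thermostat", 1020)    # a little jitter so stdev is nonzero
print(monitor.is_anomalous("thermostat", 500_000))  # sudden bulk upload -> True
```

The appeal of this style of defense is exactly what the article argues: it needs no software on the device itself, only visibility into the device's network behavior.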

http://www.computerworld.com/article/2937976/internet-of-things/is-your-thermostat-spying-on-you-cyber-threats-and-iot.html

Regina Police Service asks the city’s iPhone users to stop asking Siri about 9/11

As far as lifeless robots go, Siri has a great sense of humour. Ask her when the world will end, or whether she follows the three laws of robotics, and you’re bound to get an amusing response. Hell, she even takes people saying “Okay, Google” and “Okay, Glass” to her face in stride.

That said, it’s definitely not a good idea to ask her about 9/11, as several hundred people from Regina learned this weekend.

In the span of two hours on Sunday morning, Regina’s 911 call centre received 114 hangup calls. According to the city’s police department, it’s all because a rumour started to circulate on websites like Twitter and Tumblr, suggesting that Siri had an amusing response to queries about the September 11th terrorist attacks in 2001. In reality, Apple’s personal assistant simply calls 911. This came as a shock to many people, who hung up their phones when Siri started dialling 911 for them.

The problem is that 911 call operators are required by law to call back anyone who hangs up before speaking. Indeed, they’re required to call multiple times.

This led to the city’s emergency services being tied up for several hours as they tried to follow up on every errant call. It was enough to force the Regina Police Service to issue a press release, requesting that people “be good citizens and not ask Siri about 9/11.”

Not a shining moment for iPhone users, but all’s well that ends well, I suppose.

Regina Police Service asks the city’s iPhone users to stop asking Siri about 9/11

Google’s ‘undo’ feature may be godsend for Gmail users who regret hitting ‘send’

SHERYL UBELACKER, THE CANADIAN PRESS 06.23.2015

The undo send button is shown on a Gmail account as employees work on their computers in Toronto on Wednesday, June 24, 2015. Gmail has implemented a new feature that allows people to undo a sent email. THE CANADIAN PRESS/Nathan Denette
TORONTO – It might be called “send regret,” that panic that sets in after firing off an email or text that you suddenly realize was inappropriate, addressed to the wrong person, or just plain wrong.

Google is trying to save Gmail users from their own misguided missives by granting them a window of sober second thought.

Gmail users can now add the “undo send” feature to their accounts and give themselves up to 30 seconds to recall an ill-conceived email.

“‘Undo’ has saved my bacon professionally and personally more than once,” Google Canada spokesman Aaron Brindle enthused Wednesday.

“I’ve been known to press send a little too quickly. Sometimes, it’s as simple as fixing a typo or making sure you’re not replying-all, which can be awkward.”

The feature has been available to beta-testers of Gmail since 2009 and proved so popular that Google decided to roll it out this week for all its 900 million-plus users worldwide. It has also been an option for users of “Inbox,” an enhanced Gmail application.

“I think everybody is kind of vulnerable to this,” Brindle said of fretting after a fast-fingered click on the send button.

“Anything we can do to alleviate those sudden moments of panic when you’re using one of our products is a good thing.”

Aimee Morrison, an English professor who specializes in new media studies at the University of Waterloo, said other email providers will likely have to follow Google’s lead and add the undo feature to their programs. Users will demand it, she predicted.

“I think we’ve all had that experience of clicking send, then repenting the decision,” said Morrison, recalling how an acquaintance once sent a business email intending to sign off with “Best regards.”

The sender accidentally typed a “t” instead of a “g” in regards, but realized the mistake too late.

“There’s nothing you can do, it’s gone. It’s like dropping it into the mailbox on the corner and hearing the lid slam.”

Brindle said activating Gmail’s undo-send feature is simple.

— In the web-based version of Gmail, tap on the “settings” icon that looks like a gear.

— Select “settings” from the drop-down menu.

— Under the “general” settings tab, click on “enable undo send.” Users can choose a delay of five, 10, 20 or 30 seconds before a message is sent out and cannot be retracted.

— Click the “save changes” button at the bottom of the page.
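
Behind a feature like this, the mail client simply holds each message for the chosen delay instead of recalling it after delivery. A minimal hold-and-cancel sketch (purely illustrative; this is not Gmail's implementation):

```python
import threading

class OutboxWithUndo:
    """Hold each message for `delay` seconds; allow cancellation meanwhile."""

    def __init__(self, delay=30.0, transport=print):
        self.delay = delay
        self.transport = transport     # what actually delivers the message
        self.pending = {}              # message id -> pending timer

    def send(self, msg_id, message):
        timer = threading.Timer(self.delay, self._deliver, args=(msg_id, message))
        self.pending[msg_id] = timer
        timer.start()

    def undo(self, msg_id):
        timer = self.pending.pop(msg_id, None)
        if timer:
            timer.cancel()
            return True                # caught in time; never delivered
        return False                   # too late, already sent

    def _deliver(self, msg_id, message):
        self.pending.pop(msg_id, None)
        self.transport(message)

sent = []
outbox = OutboxWithUndo(delay=0.2, transport=sent.append)
outbox.send(1, "oops, wrong recipient")
print(outbox.undo(1))   # True: recalled within the window
```

This design also explains Morrison's point later in the article: once the delay elapses and delivery happens, nothing can bring the message back.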

“For me personally, I’m a 30-seconds guy,” said Brindle. “I need to reflect on it sometimes. Five seconds can be a little bit tight.”

Once “send” is pressed on an email, a yellow bar appears at the top of the screen, saying the message has been sent and offering the option to undo that transmission.

“Undo send” is a welcome Gmail addition for Murray Rowe of Toronto, who enabled the feature Wednesday after hearing about it on TV.

When Rowe wished he could have recaptured delivered emails in the past, it wasn’t because of the content of the communication, but how he’d phrased it.

“I’d just press send and then reconsider the tone,” said Rowe, 40, who left the community services sector to pursue social work studies at George Brown College.

“That’s probably 99 per cent of the time when I’ll use it, when I’m like: ‘Oh, that tone was a bit too assertive, too direct.’

“On a professional level, I would say there were times where I’d think: ‘This is going into a file. Could that tone be misinterpreted?’”

And like Brindle, Rowe also opted for a half-minute window to linger over second thoughts about the email being sent.

“Five seconds is not a lot of time.”

In some cases, even half a minute won’t be enough, suggested Morrison.

“Most of these emails that we regret sending, we don’t regret within the first 30 seconds,” she said.

“What this won’t help with is waking up the next morning having sent an ill-advised email telling your parents what you really think about their religious beliefs.

“This won’t save us from emails composed in haste…. The only cure is to compose a little more at leisure and not make use of that instantaneous nature of the medium.”

Her advice: when formulating an important email — such as one applying for a job, communicating with one’s boss or dealing with an emotionally charged situation — write a draft, walk away and do something else, then come back and reread it before hitting send.

Follow @SherylUbelacker on Twitter.
http://www.calgaryherald.com/technology/Googles+undo+feature+godsend+Gmail+users+regret+hitting+send/11163334/story.html

Research sheds light on how neurons control muscle movement

New research involving people diagnosed with Lou Gehrig’s disease sheds light on how individual neurons control muscle movement in humans — and could help in the development of better brain-controlled prosthetic devices.

Studying the brain activity of two patients with Lou Gehrig’s disease has given researchers insight into how neurons control muscle movement. (credit: Oliver Burston)
Stanford University researchers studying how the brain controls movement in people with paralysis, related to their diagnosis of Lou Gehrig’s disease, have found that groups of neurons work together, firing in complex rhythms to signal muscles about when and where to move.
“We hope to apply these findings to create prosthetic devices, such as robotic arms, that better understand and respond to a person’s thoughts,” said Jaimie Henderson, MD, professor of neurosurgery.
A paper describing the study was published online June 23 in eLife. Henderson, who holds the John and Jene Blume-Robert and Ruth Halperin Professorship, and Krishna Shenoy, PhD, professor of electrical engineering and a Howard Hughes Medical Institute investigator, share senior authorship of the paper. The lead author is postdoctoral scholar Chethan Pandarinath, PhD.
The study builds on groundbreaking Stanford animal research that has fundamentally changed how scientists think about how motor cortical neurons control movements. “The earlier research with animals showed that many of the firing patterns that seem so confusing when we look at individual neurons become clear when we look at large groups of neurons together as a dynamical system,” Pandarinath said.
Previously, researchers had two theories about how neurons in the motor cortex might control movement: One was that these neurons fired in patterns that represent more abstract commands, such as “move your arm to the right,” and then neurons in different brain areas would translate those instructions to guide the muscle contractions that make the arm move; the other was that the motor cortex neurons would actually send directions to the arm muscles, telling them how to contract.

But in a 2012 Nature paper, Shenoy and his colleagues reported finding that much more is going on: Motor cortical neurons work as part of an interconnected circuit — a so-called dynamical system — to create rhythmic patterns of neural activity. As these rhythmic patterns are sent to the arm, they drive muscle contractions, causing the arm to move.
“What we discovered in our preclinical work is evidence of how groups of neurons coordinate and cooperate with each other in a very particular way that gives us deeper insight into how the brain is controlling the arm,” Shenoy said.
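
The dynamical-systems picture can be illustrated with a toy two-unit linear system whose rotational dynamics produce the kind of rhythmic population activity described. This is a didactic sketch, not the model used in the paper:

```python
import numpy as np

# Rotational (skew-symmetric) dynamics: dx/dt = A @ x produces oscillation.
A = np.array([[0.0, -1.0],
              [1.0,  0.0]])       # generator of rotation in the rate plane

dt = 0.01
x = np.array([1.0, 0.0])          # initial firing rates of the two units
trajectory = [x.copy()]
for _ in range(int(2 * np.pi / dt)):    # integrate one full cycle (Euler)
    x = x + dt * (A @ x)
    trajectory.append(x.copy())

trajectory = np.array(trajectory)
# Each unit's rate, viewed alone, just oscillates confusingly; together the
# two trace a circle, a structured rhythm that can drive downstream commands.
print(trajectory[:, 0].min(), trajectory[:, 0].max())
```

The point of the toy: each individual unit's trace looks like an arbitrary oscillation, but the population state evolves lawfully under one shared rule, which is the sense in which the circuit is a "dynamical system."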
He and his colleagues wanted to know whether neurons fired similarly in humans.
Recording human brain activity
To conduct the study, the researchers recorded motor cortical brain activity of two research participants with the degenerative neurological condition called amyotrophic lateral sclerosis, or ALS. The condition, which also is known as Lou Gehrig’s disease, damages neurons and causes patients to lose control over their muscles.
The participants, a 51-year-old woman who retained some movement in her fingers and wrists, and a 54-year-old man who could still move one of his index fingers slightly, are participants in the BrainGate2 trial, which is testing a neural interface system allowing thoughts to control computer cursors, robotic arms and other assistive devices.
These participants had electrode arrays implanted in their brains’ motor cortex for the trial. That allowed researchers to record electrical brain activity from individual neurons while the participants moved or tried to move their fingers and wrists, which were equipped with sensors to record physical movement. Typically, such mapping in humans can only occur during brain surgery.
The participants’ implants provided an “opportunity to ask important scientific questions,” Shenoy said. The researchers found that the ALS patients’ neurons worked very similarly to the preclinical research findings.
Researchers now plan to use their data to improve the algorithms that translate neural activity in the form of electrical impulses into control signals that can guide a robotic arm or a computer cursor.
Other Stanford co-authors of the paper are former research associate Vikash Gilja, PhD; research assistant Christine Blabe; and postdoctoral scholar Paul Nuyujukian, MD, PhD.
The study was funded by the Stanford Institute for Neuro-Innovation and Translational Neuroscience, Stanford BioX/NeuroVentures, the Stanford Office of Postdoctoral Affairs, the Garlick Foundation, the Reeve Foundation, the Craig H. Neilsen Foundation, the National Institutes of Health (grants R01DC009899, N01HD53403 and N01HD10018), the Department of Veterans Affairs and the MGH-Deane Institute for Integrated Research on Atrial Fibrillation and Stroke.
The Department of Neurosurgery, Department of Neurology and Neurological Sciences and Department of Electrical Engineering also supported the work. Information about these departments is available at http://neurosurgery.stanford.edu, http://neurology.stanford.edu and http://ee.stanford.edu, respectively.

http://www.healthcanal.com/brain-nerves/64750-research-sheds-light-on-how-neurons-control-muscle-movement.html

Scientists create synthetic membranes that grow like living cells

June 23, 2015

Growing cell membranes are seen in this time lapse sequence (numbers correspond to minutes of duration) (credit: Michael Hardy, UC San Diego)

Chemists and biologists at UC San Diego have succeeded in designing and synthesizing an artificial cell membrane capable of sustaining continual growth, just like a living cell.

Their achievement will allow scientists to more accurately replicate the behavior of living cell membranes, which until now have been modeled only by synthetic cell membranes without the ability to add new phospholipids.

Structure of a phospholipid, a major component of all cell membranes (credit: Wikimedia Commons)

“The membranes we created, though completely synthetic, mimic several features of more complex living organisms, such as the ability to adapt their composition in response to environmental cues,” said Neal Devaraj, an assistant professor of chemistry and biochemistry at UC San Diego who headed the research team, which included scientists from the campus’ BioCircuits Institute.

“Many other scientists have exploited the ability of lipids to self-assemble into bilayer vesicles with properties reminiscent of cellular membranes, but until now no one has been able to mimic nature’s ability to support persistent phospholipid membrane formation,” he explained. “We developed an artificial cell membrane that continually synthesizes all of the components needed to form additional catalytic membranes.”

Michael Hardy | Autocatalyst Drives Vesicle Growth

A time-lapse video shows increase in vesicle volume and membrane surface area at 60 second intervals over a period of 12 hours (credit: Michael Hardy, UC San Diego)

The scientists said in their paper, published in the current issue of Proceedings of the National Academy of Sciences, that to develop the growing membrane they substituted a “complex network of biochemical pathways used in nature with a single autocatalyst that simultaneously drives membrane growth.” In this way, they added, “our system continually transforms simpler, higher-energy building blocks into new artificial membranes.”

“Our results demonstrate that complex lipid membranes capable of indefinite self-synthesis can emerge when supplied with simpler chemical building blocks,” said Devaraj. “Synthetic cell membranes that can grow like real membranes will be an important new tool for synthetic biology and origin-of-life studies.”
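
The autocatalytic scheme, in which the catalyst converts precursor into both new lipid and more catalyst, can be captured in a toy rate model. The rate constant and branching fraction below are arbitrary, chosen only to illustrate the accelerating-then-stalling growth one expects until fresh precursor is supplied:

```python
# Toy autocatalysis: precursor P is consumed to make lipid L and catalyst C,
# and the rate is proportional to C itself, so growth accelerates and then
# stalls when the precursor runs out.
def simulate(p0=10.0, c0=0.01, k=0.5, dt=0.01, steps=2000):
    p, c, lipid = p0, c0, 0.0
    for _ in range(steps):
        rate = k * c * p            # autocatalytic: catalyst begets catalyst
        p -= rate * dt
        c += 0.1 * rate * dt        # a fraction of product is new catalyst
        lipid += 0.9 * rate * dt    # the rest is new membrane lipid
    return p, c, lipid

p, c, lipid = simulate()
print(f"precursor left: {p:.2f}, lipid made: {lipid:.2f}")
```

Note that total mass (precursor plus catalyst plus lipid) is conserved at every step; sustained growth of the kind reported requires continually feeding in the higher-energy building blocks, as the authors describe.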

Support for the research was provided by UC San Diego, US Army Research Laboratory, US Army Research Office, and the National Science Foundation.

Abstract of Self-reproducing catalyst drives repeated phospholipid synthesis and membrane growth

Cell membranes are dynamic structures found in all living organisms. There have been numerous constructs that model phospholipid membranes. However, unlike natural membranes, these biomimetic systems cannot sustain growth owing to an inability to replenish phospholipid-synthesizing catalysts. Here we report on the design and synthesis of artificial membranes embedded with synthetic, self-reproducing catalysts capable of perpetuating phospholipid bilayer formation. Replacing the complex biochemical pathways used in nature with an autocatalyst that also drives lipid synthesis leads to the continual formation of triazole phospholipids and membrane-bound oligotriazole catalysts from simpler starting materials. In addition to continual phospholipid synthesis and vesicle growth, the synthetic membranes are capable of remodeling their physical composition in response to changes in the environment by preferentially incorporating specific precursors. These results demonstrate that complex membranes capable of indefinite self-synthesis can emerge when supplied with simpler chemical building blocks.

references:
Michael D. Hardy, Jun Yang, Jangir Selimkhanov, Christian M. Cole, Lev S. Tsimring, and Neal K. Devaraj. Self-reproducing catalyst drives repeated phospholipid synthesis and membrane growth. PNAS, June 22, 2015 doi: 10.1073/pnas.1506704112
related:
Scientists Create Synthetic Membranes That Grow Like Living Cells
http://www.kurzweilai.net/cocktail-of-chemicals-trigger-cancer-global-taskforce-calls-for-research-into-how-everyday-chemicals-in-our-environment-cause-cancer

Water splitter produces clean-burning hydrogen fuel 24/7

An inexpensive renewable source of clean-burning hydrogen fuel for transportation and industry
June 23, 2015

Unlike conventional water splitters, the Stanford device uses a single low-cost catalyst to generate hydrogen on one electrode and oxygen on the other (credit: L.A. Cicero/Stanford University)

In an engineering first, Stanford University scientists have invented a low-cost water splitter that uses a single catalyst to produce both hydrogen and oxygen gas 24 hours a day, seven days a week.

The researchers believe that the device, described in an open-access study published today (June 23) in Nature Communications, could provide a renewable source of clean-burning hydrogen fuel for transportation and industry.

“We have developed a low-voltage, single-catalyst water splitter that continuously generates hydrogen and oxygen for more than 200 hours, an exciting world-record performance,” said study co-author Yi Cui, an associate professor of materials science and engineering at Stanford and of photon science at the SLAC National Accelerator Laboratory.

The search for clean hydrogen

Hydrogen has long been promoted as an emissions-free alternative to gasoline. But most commercial-grade hydrogen is made from natural gas — a fossil fuel that contributes to global warming. So scientists have been trying to develop a cheap and efficient way to extract pure hydrogen from water.

A conventional water-splitting device consists of two electrodes submerged in a water-based electrolyte. A low-voltage current applied to the electrodes drives a catalytic reaction that separates molecules of H2O, releasing bubbles of hydrogen on one electrode and oxygen on the other.
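
In an alkaline electrolyte (the new device runs in 1 M KOH, per the abstract below), the standard half-reactions at the two electrodes are:

```latex
\begin{aligned}
\text{Cathode (hydrogen evolution):}\quad & 2\,\mathrm{H_2O} + 2e^- \rightarrow \mathrm{H_2} + 2\,\mathrm{OH^-} \\
\text{Anode (oxygen evolution):}\quad & 4\,\mathrm{OH^-} \rightarrow \mathrm{O_2} + 2\,\mathrm{H_2O} + 4e^- \\
\text{Overall:}\quad & 2\,\mathrm{H_2O} \rightarrow 2\,\mathrm{H_2} + \mathrm{O_2}, \qquad E^\circ = 1.23\ \mathrm{V}
\end{aligned}
```

The 1.23 V figure is the thermodynamic minimum; real devices must apply more voltage to overcome the sluggish kinetics of the two reactions, which is what the catalysts exist to reduce.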

In these devices, each electrode is embedded with a different catalyst, typically platinum and iridium, two rare and costly metals. But in 2014, Stanford chemist Hongjie Dai developed a water splitter made of inexpensive nickel and iron that runs on an ordinary 1.5-volt battery.

In the new study, Cui and his colleagues advanced that technology further.

Stanford University | Stanford water splitter produces clean hydrogen 24/7

A single catalyst

“Our water splitter is unique because we only use one catalyst, nickel-iron oxide, for both electrodes,” said graduate student Haotian Wang, lead author of the study. “This bi-functional catalyst can split water continuously for more than a week with a steady input of just 1.5 volts of electricity. That’s an unprecedented water-splitting efficiency of 82 percent at room temperature.”

In conventional water splitters, the hydrogen and oxygen catalysts often require different electrolytes with different pH — one acidic, one alkaline — to remain stable and active. “For practical water splitting, an expensive barrier is needed to separate the two electrolytes, adding to the cost of the device,” Wang explained.
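
Wang's 82 percent figure is consistent with defining efficiency as the ratio of water's thermodynamic splitting potential (1.23 V) to the applied voltage reported in the abstract (1.51 V):

```latex
\eta \;\approx\; \frac{E^\circ}{E_{\mathrm{applied}}} \;=\; \frac{1.23\ \mathrm{V}}{1.51\ \mathrm{V}} \;\approx\; 0.81\text{--}0.82
```

(This voltage-ratio definition is an assumption on our part; the paper itself should be consulted for the exact efficiency metric used.)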

Wang and his colleagues discovered that nickel-iron oxide, which is cheap and easy to produce, is actually more stable than some commercial catalysts made of expensive precious metals.

The key to making a single catalyst possible was to use lithium ions to chemically break the metal oxide catalyst into smaller and smaller pieces. That “increases its surface area and exposes lots of ultra-small, interconnected grain boundaries that become active sites for the water-splitting catalytic reaction,” Cui said. “This process creates tiny particles that are strongly connected, so the catalyst has very good electrical conductivity and stability.”
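
The surface-area gain from shrinking the particles is roughly geometric: for idealized spherical particles, surface area per unit mass scales inversely with diameter, so going from the ~20 nm starting particles to the 2–5 nm products (sizes from the abstract below) multiplies the available area several-fold:

```latex
\frac{A_{\text{small}}}{A_{\text{large}}} \;\approx\; \frac{d_{\text{large}}}{d_{\text{small}}} \;=\; \frac{20\ \mathrm{nm}}{2\text{--}5\ \mathrm{nm}} \;\approx\; 4\text{--}10\times
```

This back-of-the-envelope figure ignores the grain boundaries Cui highlights, which add catalytically active sites beyond the bare area increase.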

Lower cost

Using one catalyst made of nickel and iron also has significant implications in terms of cost.

“Not only are the materials cheaper, but having a single catalyst also reduces two sets of capital investment to one,” Cui said. “We believe that electrochemical tuning can be used to find new catalysts for other chemical fuels beyond hydrogen. The technique has been used in battery research for many years, but it’s a new approach for catalysis. The marriage of these two fields is very powerful.”

“Our group has pioneered the idea of using lithium-ion batteries to search for catalysts,” Cui said. “Our hope is that this technique will lead to the discovery of new catalysts for other reactions beyond water splitting.”

Support was provided by the Global Climate and Energy Project at Stanford and the Stanford Interdisciplinary Graduate Fellowship program.

Abstract of Bifunctional non-noble metal oxide nanoparticle electrocatalysts through lithium-induced conversion for overall water splitting

Developing earth-abundant, active and stable electrocatalysts which operate in the same electrolyte for water splitting, including oxygen evolution reaction and hydrogen evolution reaction, is important for many renewable energy conversion processes. Here we demonstrate the improvement of catalytic activity when transition metal oxide (iron, cobalt, nickel oxides and their mixed oxides) nanoparticles (~20 nm) are electrochemically transformed into ultra-small diameter (2–5 nm) nanoparticles through lithium-induced conversion reactions. Different from most traditional chemical syntheses, this method maintains excellent electrical interconnection among nanoparticles and results in large surface areas and many catalytically active sites. We demonstrate that lithium-induced ultra-small NiFeOx nanoparticles are active bifunctional catalysts exhibiting high activity and stability for overall water splitting in base. We achieve 10 mA cm−2 water-splitting current at only 1.51 V for over 200 h without degradation in a two-electrode configuration and 1 M KOH, better than the combination of iridium and platinum as benchmark catalysts.

references:
Haotian Wang, Hyun-Wook Lee, Yong Deng, Zhiyi Lu, Po-Chun Hsu, Yayuan Liu, Dingchang Lin & Yi Cui. Bifunctional non-noble metal oxide nanoparticle electrocatalysts through lithium-induced conversion for overall water splitting. Nature Communications, 2015; 6: 7261 DOI: 10.1038/ncomms8261 (open access)
related:
Single-catalyst water splitter from Stanford produces clean-burning hydrogen 24/7

Cocktail of chemicals may trigger cancer

Fifty chemicals the public is exposed to on a daily basis may trigger cancer when combined, according to new research by a global task force of 174 scientists
June 23, 2015

Disruptive potential of environmental exposures to mixtures of chemicals (credit: William H.Goodson III et al./Carcinogenesis)

A global task force of 174 scientists from leading research centers in 28 countries has studied the link between mixtures of commonly encountered chemicals and the development of cancer. The open-access study selected 85 chemicals not considered carcinogenic to humans and found 50 of them actually supported key cancer-related mechanisms at exposures found in the environment today.

According to co-author and cancer biologist Hemad Yasaei of Brunel University London, “This research backs up the idea that chemicals not considered harmful by themselves are combining and accumulating in our bodies to trigger cancer and might lie behind the global cancer epidemic we are witnessing. We urgently need to focus more resources to research the effect of low dose exposure to mixtures of chemicals in the food we eat, air we breathe, and water we drink.”

Professor Andrew Ward from the Department of Biology and Biochemistry at the University of Bath, who contributed in the area of cancer epigenetics and the environment, said: “A review on this scale, looking at environmental chemicals from the perspective of all the major hallmarks of cancer, is unprecedented”.

Professor Francis Martin from Lancaster University who contributed to an examination of how such typical environmental exposures influence dysfunctional metabolism, pointed out that despite a rising incidence of many cancers, “far too little research has been invested into examining the pivotal role of environmental causative agents. This worldwide team of researchers refocuses our attention on this under-researched area.”

In light of the compelling evidence, the taskforce is calling for an increased emphasis on, and support for, research into low-dose exposures to mixtures of environmental chemicals. Current research estimates that chemicals could be responsible for as many as one in five cancers. With the human population routinely exposed to thousands of chemicals, the effects need to be better understood to reduce the incidence of cancer globally, the scientists say.

The research was published today (June 23) in the Oxford University Press journal Carcinogenesis.

William Goodson III, a senior scientist at the California Pacific Medical Center in San Francisco and lead author of the synthesis said: “Since so many chemicals that are unavoidable in the environment can produce low dose effects that are directly related to carcinogenesis, the way we’ve been testing chemicals (one at a time) is really quite out of date. Every day we are exposed to an environmental ‘chemical soup’, so we need testing that evaluates the effects of our ongoing exposure to these chemical mixtures.”

Abstract of Assessing the carcinogenic potential of low-dose exposures to chemical mixtures in the environment: the challenge ahead

Lifestyle factors are responsible for a considerable portion of cancer incidence worldwide, but credible estimates from the World Health Organization and the International Agency for Research on Cancer (IARC) suggest that the fraction of cancers attributable to toxic environmental exposures is between 7% and 19%. To explore the hypothesis that low-dose exposures to mixtures of chemicals in the environment may be combining to contribute to environmental carcinogenesis, we reviewed 11 hallmark phenotypes of cancer, multiple priority target sites for disruption in each area and prototypical chemical disruptors for all targets; this included dose-response characterizations, evidence of low-dose effects and cross-hallmark effects for all targets and chemicals. In total, 85 examples of chemicals were reviewed for actions on key pathways/mechanisms related to carcinogenesis. Only 15% (13/85) were found to have evidence of a dose-response threshold, whereas 59% (50/85) exerted low-dose effects. No dose-response information was found for the remaining 26% (22/85). Our analysis suggests that the cumulative effects of individual (non-carcinogenic) chemicals acting on different pathways, and a variety of related systems, organs, tissues and cells could plausibly conspire to produce carcinogenic synergies. Additional basic research on carcinogenesis and research focused on low-dose effects of chemical mixtures needs to be rigorously pursued before the merits of this hypothesis can be further advanced. However, the structure of the World Health Organization International Programme on Chemical Safety ‘Mode of Action’ framework should be revisited as it has inherent weaknesses that are not fully aligned with our current understanding of cancer biology.

references:
W. H. Goodson, L. Lowe, et al. Assessing the carcinogenic potential of low-dose exposures to chemical mixtures in the environment: the challenge ahead. Carcinogenesis, 2015; 36 (Suppl 1): S254 DOI: 10.1093/carcin/bgv039 (open access)
related:
Global taskforce calls for research into everyday chemicals that may cause cancer
http://www.kurzweilai.net/cocktail-of-chemicals-trigger-cancer-global-taskforce-calls-for-research-into-how-everyday-chemicals-in-our-environment-cause-cancer

FUTURE OF ACCESSIBILITY AND VIDEO CAPTIONS ACCORDING TO GOOGLE AND YOUTUBE

Streaming Media West is a conference where digital media experts and professionals exchange ideas, strategies, and success stories. The session “Best Practices for Implementing Accessible Video Captioning” brought together speakers from Google, Dell, and T-Mobile to exchange approaches to video captioning, mobile video, and video translation.
Google’s video platform, YouTube, is in the unique position of captioning a very broad range of content, much of which it doesn’t own. Brad Ellis, a YouTube Product Manager at Google, offered insights on the future of accessible video, why universal accessibility is important to Google, and how captions enhance content discoverability.
About YouTube
YouTube is the largest online video platform and video sharing community in the world. Created in 2005 and acquired by Google in 2006, the company is based in San Bruno, California. YouTube uses Adobe Flash Video and HTML5 to stream a wide variety of user-generated video content, TV clips, music videos, video blogs, original video shorts, and educational videos. While most YouTube content is uploaded by individuals, media corporations like CBS, BBC, Vevo, and Hulu offer their materials through the YouTube partnership program.
Why Google Captions Video
Google’s mission is to make all information universally accessible. Ellis and his team have embraced this directive by attempting to remove barriers to video captioning. Whether using captions to assist those with hearing disabilities, or because translated captions help international audiences, video accessibility is a priority to YouTube from the top down.
Building Accessibility into YouTube
YouTube has more than 1 billion unique visitors each month. So how does YouTube approach captioning on such a mind-numbingly large scale? “The captions team at YouTube doesn’t actually go in and type in captions for anything. But we build a platform that allows anybody to upload captions in 20-plus different formats and then display those captions on all YouTube players,” says Ellis. “We also build tools for people who are creating captions for their content on their own to easily and quickly create captions for their videos. Our goal is simply to make every video understandable to every user. A very long-term goal, but that’s what we’re aspiring to.”
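The “20-plus different formats” Ellis mentions are mostly small plain-text formats that differ in framing details. As a hypothetical sketch (the function name and sample cue are mine, and real converters handle many more edge cases), converting the common SubRip (.srt) format to WebVTT mainly means adding a header and swapping the millisecond separator:

```python
import re

def srt_to_vtt(srt_text: str) -> str:
    """Naive SubRip -> WebVTT conversion.

    SRT timestamps put a comma before the milliseconds
    (00:00:01,000); WebVTT uses a period instead and
    requires a WEBVTT header line at the top of the file.
    """
    # Rewrite every HH:MM:SS,mmm timestamp as HH:MM:SS.mmm
    body = re.sub(r"(\d{2}:\d{2}:\d{2}),(\d{3})", r"\1.\2", srt_text)
    return "WEBVTT\n\n" + body

sample = "1\n00:00:01,000 --> 00:00:04,200\nHello from the captions team.\n"
print(srt_to_vtt(sample))
```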
YouTube’s Automatic Captions
YouTube’s automatic captions have received quite a bit of flak, not because they are error-prone, but because some video creators mistakenly think they’re good enough to accommodate deaf users. We commend Google for taking up the mantle of web accessibility and being a model for other global media companies. After all, Google knows YouTube auto captions are not perfect: “We know there are issues. But going back to our long-term goal of making every video understandable to every user, technology is the only way that we can scale,” says Ellis. “With over 80 hours of video uploaded to YouTube every minute, technology is a necessary aid to help as many creators as possible add video captions.”
Ellis echoes the FCC’s interpretation of the 21st Century Communications and Video Accessibility Act (CVAA), which puts the legal responsibility to caption video on the video owner. “It’s ultimately up to the content creator. If the video uploader did not add captions themselves, we do the best we can to help somebody who needs captions. It’s not perfect, and we have a lot of errors…we’re doing more work to make this better and better.”
Why YouTube Video Creators Should Caption Video, According to Google
Ellis says that YouTube creators receive a lot of requests from fans and friends to add captions. Yet, even with free tools, many creators feel like it is too much effort. Ellis questions the legitimacy of this argument: “I really think that it’s not as hard as people make it out to be. It can be really hard…but perfection is the enemy of good in this case. Something is better than nothing.”
“Making video accessible to people who need captions is really important. And I want to encourage everybody who has power over this to make their videos accessible, to add captions, and to focus not on excuses or reasons why not to do it or what’s required, but how you can have the biggest impact and reach the most people,” says Ellis, further promoting Google’s mission.
YouTube Videos and a Worldwide Audience: The Case for Translation and Subtitles
Streaming in 61 countries and across 61 languages, YouTube has an international presence that global brands should take advantage of. A huge part of the motivation behind Google and YouTube’s push for captioned video content is that captions and transcripts are the starting point for translation. “We actually have 80% of views on YouTube coming from outside of the United States. That’s huge, and a lot of that is non-English. Translating captions is very important. It’s a huge opportunity for growth. We see huge demand from non-English uploaders as well to get their content translated,” says Ellis.
How Professional Quality Captions Improve Google Search Rankings and Video SEO
As we discussed in the blog article “How to Maximize YouTube Viewership with Transcripts & Captions,” captions are an important tool for video SEO and YouTube visibility. Because of errors, automatic captions are not indexed by Google. The good news is that if you take the extra effort to add professional quality captions to YouTube, the text does get indexed, allowing more potential viewers to find your video content. Ellis explains: “We don’t use the automatic captions today. I hope that we will down the road, but it’s a trade-off with the quality. If you upload captions yourself to any YouTube video, we do index that. That is searched.”
Video captions also increase channel engagement, not just in video views but in total time watched. As Ellis explains, “We did an experiment with one partner a year ago and saw, just by captioning videos (they were English videos with English captions), in a scientific A/B test, a 4% increase in views and watch time on YouTube. Imagine what that could be if you’re making it accessible in more languages.”
Mobile Video, Captions “Pass-Through” and Google Technology
Ellis recently moved back to the States after living in Tokyo for the past six years. When he first arrived in Japan, Ellis saw that mobile technology was further ahead than in the U.S.: people were watching video on tiny flip-phone screens. Because of the demand for video anytime, anywhere, it’s not surprising that more than half of the traffic in Japan and Korea comes from mobile.
Ellis recognizes the need for mobile flexibility and accessibility stateside. The Google team is hard at work to allow captions to “pass through” on any device. A big challenge with small screens is where to position captions without obstructing the video. “Positioning captions, to show speaker identification, should work on all devices, in any color, or anything else specified.”
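The positioning behavior Ellis describes already exists in the WebVTT caption format used by HTML5 players: cue settings control where a caption renders, and a voice tag marks the speaker. An illustrative fragment (the timings and speaker line are invented):

```
WEBVTT

00:00:01.000 --> 00:00:04.000 line:10% position:50% align:center
<v Brad Ellis>Captions can be placed where they won't cover on-screen text.
```

Here `line`, `position`, and `align` are standard WebVTT cue settings, and the `<v>` tag carries speaker identification.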
Ellis also addresses the challenge of maintaining compatibility across different hardware and software: “I think that one of the difficulties is the fragmentation that we have in the markets. We have so many different phones and so many different versions of operating systems, applications, mobile web browsers. But we’re seeing everybody catch up. In the long term, I’m looking forward to the day where we say 100% of all places support captions, and everybody will be able to watch a video with captions no matter where they watch it.”