http://www.valuewalk.com/2016/05/check-friends-iphone-battery-status-battery-share/

How To Check Your Friends’ iPhone Battery Status With Battery Share

Like every app store, Apple's is full of apps trying to outdo one another while missing the point: users want something different, something that is not only unique but also downright useful. One such app that ticks all of those boxes is Battery Share.

Battery Share, the iOS battery monitoring app

No, it’s not just another battery app that calibrates your battery or tells you which apps to shut down; this one does something completely different.

First off, this is a universal iPhone and iPad app that notifies you when a loved one’s or friend’s iPhone or iPad battery is about to die. The app is the brainchild of Terry Demco, and I would personally like to thank him for providing a genuinely useful battery app for iOS!

How does Battery Share work?

Being able to keep track of a loved one’s battery status will be seen as a hugely important feature by many parents worried about keeping in touch with their children. Thanks to Battery Share, what once seemed impossible is now easy to do.

Within the app there is a list of people who share the status of their iOS device’s battery with you. This, along with a little animated battery icon, keeps you updated in real time about their battery status.


Getting more detail about an individual’s battery status is simple: tapping on a specific connected user brings that user’s information to the top of the interface, showing the battery level and how long it is likely to last. If an emergency such as a critically low battery is detected, Battery Share lets you send a notification directly to that person’s iPad or iPhone warning them of the low battery level.

Voip Calls in a Battery App

Remember the developer I praised earlier in this article? Terry Demco not only created Battery Share, with its monitoring and warning notification features; he has also included the ability to make VoIP (voice over internet protocol) calls directly within the app!

So, not only can you message and monitor a specific connected user’s battery life, but if the need ever arises, you can also contact them via VoIP. Personally, I wish this app had existed 5-10 years ago!

User Interface

With all of those features, you’d probably expect this app to be expensive. It isn’t: at a lowly $0.99, it’s hard to imagine how it could fail to be a big hit with parents and mistrustful partners all over the world. If I’m honest, the app’s user interface does look like what I’d expect from a cheap app, but I think its functionality outweighs the need for a beautiful design.

Final Thoughts

To get Battery Share to work, you’re going to need an iPhone or iPad running iOS 9.1 or later. How accurate that requirement is I’m not sure, but if you want an app that serves a great purpose, then the chance of losing $0.99 is well worth the risk.

http://www.labmanager.com/news/2016/05/scientists-watch-bacterial-sensor-respond-to-light-in-real-time#.Vy2GFRXysmI

Scientists Watch Bacterial Sensor Respond to Light in Real Time

LCLS X-ray laser takes snapshots of important reaction 1,000 times faster than ever before

This illustration depicts an experiment at SLAC that revealed how a protein from photosynthetic bacteria changes shape in response to light in less than a trillionth of a second. Samples of the crystallized protein (right), called photoactive yellow protein, or PYP, were struck by an optical laser beam (blue light coming from left) that triggers shape changes in the protein. These were then probed with a powerful X-ray beam (fiery beam from bottom left) from SLAC’s LCLS. Image credit: SLAC National Accelerator Laboratory

MENLO PARK, CALIF. A number of important biological processes, such as photosynthesis and vision, depend on light. But it’s hard to capture responses of biomolecules to light because they happen almost instantaneously.

Now, researchers have made a giant leap forward in taking snapshots of these ultrafast reactions in a bacterial light sensor. Using the world’s most powerful X-ray laser at the Department of Energy’s SLAC National Accelerator Laboratory, they were able to see atomic motions as fast as 100 quadrillionths of a second—1,000 times faster than ever before.

Further, “We’re the first to succeed in taking real-time snapshots of an ultrafast structure transition in a protein, in which a molecule excited by light relaxes by rearranging its structure in what is known as trans-to-cis isomerization,” says the study’s principal investigator, Marius Schmidt from the University of Wisconsin-Milwaukee.

The technique could widely benefit studies of light-driven, ultrafast atomic motions. For example, it could reveal:

  • How visual pigments in the human eye respond to light, and how absorbing too much of it damages them.
  • How photosynthetic organisms turn light into chemical energy—a process that could serve as a model for the development of new energy technologies.
  • How atomic structures respond to light pulses of different shape and duration—an important first step toward controlling chemical reactions with light.

“The new data show for the first time how the bacterial sensor reacts immediately after it absorbs light,” says Andy Aquila, a researcher at SLAC’s Linac Coherent Light Source (LCLS), a DOE Office of Science User Facility. “The initial response, which is almost instantaneous, is absolutely crucial because it creates a ripple effect in the protein, setting the stage for its biological function. Only LCLS’s X-ray pulses are bright enough and short enough to capture biological processes on this ultrafast timescale.” The results were published May 5 in Science.

High-speed X-ray Camera Reveals Extremely Fast Biology

The team looked at the light-sensitive part of a protein called “photoactive yellow protein,” or PYP. It functions as an “eye” in purple bacteria, helping them sense blue light and stay away from light that is too energetic and potentially harmful.

The researchers had already studied light-induced structural changes in PYP at LCLS, revealing atomic motions as fast as 10 billionths of a second. By tweaking their experiment, they were now able to improve their speed limit 100,000 times and capture reactions in the protein that are 1,000 times faster than any seen in an X-ray experiment before.

Both studies followed a very similar approach: At LCLS, the team sent a stream of tiny PYP crystals into a sample chamber. There, each crystal was struck by a flash of optical laser light and then an X-ray pulse, which took an image of the protein’s structural response to the light. By varying the time between the two pulses, scientists were able to see how the protein morphed over time.
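
The pump-probe logic described here is straightforward to express in code. Below is a toy Python sketch of that logic under stated assumptions: a made-up exponential relaxation stands in for the real structural signal, and none of this is the actual LCLS control or analysis software.

```python
# Toy illustration of a pump-probe delay scan: an optical "pump" pulse starts
# the reaction, a delayed X-ray "probe" pulse records a snapshot, and scanning
# the delay builds up a molecular movie. The exponential "structural signal"
# is a placeholder, not real data.
import math

def structural_signal(delay_fs, time_constant_fs=600.0):
    """Toy model: fraction of molecules still in the light-excited state."""
    return math.exp(-delay_fs / time_constant_fs)

def pump_probe_scan(delays_fs):
    """Record one snapshot per pump-probe delay (delays in femtoseconds)."""
    return {delay: structural_signal(delay) for delay in delays_fs}

# Delays from 100 fs ("100 quadrillionths of a second") up to a few picoseconds.
movie = pump_probe_scan([100, 300, 1000, 3000])
for delay, excited_fraction in movie.items():
    print(f"{delay:5d} fs after the pump: excited fraction {excited_fraction:.2f}")
```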

Since LCLS’s X-ray pulses are extremely short, lasting only a few quadrillionths of a second, they can in principle probe processes on that very timescale—but only if the optical laser also matches the tremendous speed. For the new experiment, the team replaced the old optical laser with a new one whose pulses were 100 quadrillionths of a second long—100,000 times shorter than before and much closer to the X-ray pulse length.

The researchers also applied better timing tools to measure the relative arrival time between the optical and X-ray laser pulses, enhancing the ability to precisely track ultrafast events.

“These improvements allowed us to see what no one has ever directly seen before,” Schmidt says.

Other institutions involved in the study were: Center for Free-Electron Laser Science/Deutsches Elektronen-Synchrotron, Germany; Imperial College, UK; University of Jyväskylä, Finland; Arizona State University; Max Planck Institute for Structure and Dynamics of Matter, Germany; State University of New York at Buffalo; University of Chicago; Lawrence Livermore National Laboratory; and University of Hamburg, Germany. Funding sources included: National Science Foundation; National Institutes of Health; Helmholtz Association; German Federal Ministry of Education and Research; Engineering and Physical Sciences Research Council; Academy of Finland; and the European Union.

http://www.slashgear.com/nasa-makes-56-patents-public-domain-launches-searchable-database-06439105/

NASA makes 56 patents public domain, launches searchable database

NASA has released a bunch of patents for its technologies so that anyone can use them. A total of 56 “formerly-patented” technologies developed by the government are now available in the public domain, meaning they can be used for commercial purposes in an unrestricted manner. To make it easier to find these technologies and others like them, NASA has also created a new searchable database that links the public to thousands of the agency’s now-expired patents.

According to NASA, the patents it has released may have non-aerospace applications that could help companies with commercial projects underway. By using these already established technologies, companies can save a lot of time and money by sidestepping the need to create their own alternatives or pay out hefty sums in licensing deals.

Among the 56 formerly patented technologies, users will find things like propulsion methods, thrusters, rocket nozzles, advanced manufacturing processes, and more. The space agency also says that commercial space companies may be able to better “familiarize” themselves with NASA’s capabilities, and perhaps that will lead to more collaborations between the space agency and private companies.

The 56 public-domain patents are only a small piece of NASA’s easily accessible data, though — thousands of other NASA patents have already expired, putting the technology they describe in the public domain for unrestricted use. You can now more easily find those patents using NASA’s searchable database (here).

Said NASA executive Daniel Lockney:

By making these technologies available in the public domain, we are helping foster a new era of entrepreneurship that will again place America at the forefront of high-tech manufacturing and economic competitiveness. By releasing this collection into the public domain, we are encouraging entrepreneurs to explore new ways to commercialize NASA technologies.

http://phys.org/news/2016-05-human-languages.html

Teaching computers to understand human languages
May 6, 2016
Researchers at the University of Liverpool have developed a set of algorithms that will help teach computers to process and understand human languages.

Whilst mastering natural language is easy for humans, it is something that computers have not yet been able to achieve. Humans understand language in a variety of ways: for example, by looking a word up in a dictionary, or by associating it with other words in the same sentence in a meaningful way.
The algorithms will enable a computer to act in much the same way as a human would when it encounters an unknown word. When the computer comes across a word it doesn’t recognise or understand, the algorithms have it look the word up in a dictionary (such as WordNet) and try to guess what other words should appear alongside this unknown word in the text.
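
To make the two information sources concrete, here is a minimal Python sketch of the general idea using NLTK’s WordNet interface: gather dictionary-related words and co-occurring context words for an unknown term. This is an illustration only, not the Liverpool group’s actual algorithm, and the example sentence is invented.

```python
# Minimal sketch: combine dictionary (WordNet) neighbours with context
# neighbours for an unknown word. Assumes the WordNet data has been
# downloaded via nltk.download('wordnet').
from nltk.corpus import wordnet as wn

def dictionary_neighbours(word):
    """Words that WordNet relates to `word` (synonyms and definition words)."""
    related = set()
    for synset in wn.synsets(word):
        for lemma in synset.lemmas():
            related.add(lemma.name().replace('_', ' '))
        related.update(synset.definition().split())
    return related

def context_neighbours(word, sentence, window=3):
    """Words that co-occur with `word` within a small window in the text."""
    tokens = sentence.lower().split()
    related = set()
    for i, token in enumerate(tokens):
        if token == word:
            related.update(tokens[max(0, i - window):i] + tokens[i + 1:i + 1 + window])
    return related

# A representation consistent with both sources could be built from the union
# (or a weighted combination) of the two sets.
sentence = "the ocelot stalked its prey through the undergrowth"
print(dictionary_neighbours("ocelot") | context_neighbours("ocelot", sentence))
```
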
Semantic representation
This gives the computer a semantic representation for a word that is consistent both with the dictionary and with the context in which the word appears in the text. To check whether the algorithm has given the computer an accurate representation of a word, similarity scores produced using the word representations learnt by the algorithm are compared against human-rated similarities.
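
The comparison with human-rated similarities is usually done by correlating the model’s pairwise similarity scores with the human scores. The sketch below shows that step using SciPy’s Spearman correlation; the word vectors and ratings are made-up placeholders, not data from the study.

```python
# Evaluation sketch: correlate model similarity scores with human ratings.
import numpy as np
from scipy.stats import spearmanr

def cosine(u, v):
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

# Hypothetical learnt word vectors (placeholders only).
vectors = {
    "car":   np.array([0.9, 0.1, 0.3]),
    "auto":  np.array([0.8, 0.2, 0.4]),
    "fruit": np.array([0.1, 0.9, 0.2]),
    "apple": np.array([0.2, 0.8, 0.5]),
}

# Hypothetical human similarity ratings for word pairs (0-10 scale).
human_ratings = {("car", "auto"): 9.5, ("fruit", "apple"): 8.0, ("car", "fruit"): 1.0}

model_scores = [cosine(vectors[a], vectors[b]) for a, b in human_ratings]
correlation, _ = spearmanr(model_scores, list(human_ratings.values()))
print(f"Spearman correlation with human ratings: {correlation:.2f}")
```
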
Liverpool computer scientist, Dr Danushka Bollegala, said: “Learning accurate word representations is the first step towards teaching languages to computers.”
“If we can represent the meaning of a word in a way a computer could understand, then the computer will be able to read texts on behalf of humans and perform potentially useful tasks such as translating a text written in a foreign language, summarising a lengthy article, or finding other similar documents on the Internet.
“We are excitedly waiting to see the immense possibilities that will be brought about when such accurate semantic representations are used in various language processing tasks by the computers.”


http://www.technewstoday.com/29558-bentleys-apple-watch-app-will-change-your-driving-experience/

Bentley’s Apple Watch App Will Change Your Driving Experience

Controlling your car with a smartphone is old school; Bentley wants you to do it with a smartwatch. The automaker recently joined the list of manufacturers offering smartwatch control, with a new Apple Watch app for its latest SUV, the Bentayga, The Verge reported.

Smartwatches are a more convenient way to handle various tasks for which we previously needed smartphones. This is why many automakers have added smartwatch support to their car apps, which were originally made for handsets. These applications let the watch do a myriad of tasks, such as finding the car’s location, starting it, or even checking whether the tire pressure is all right.

However, as the publication points out, Bentley has taken a different approach, focusing on comfort features. Bentayga drivers can control the temperature, music player, ventilation, seat heaters, and even the massage system, and each passenger can personalize these features to their own needs. The app also provides statistics like distance traveled, speed, and trip duration.

The idea is commendable, as it adds convenience not only for the driver but for all the passengers as well. Bentley explained that the Apple Watch connects via Bluetooth and acts as the SUV’s Touch Screen Remote, giving passengers access to all of these features.

Bentley says the application is focused on adding to each passenger’s comfort individually. However, the app still misses many features found in rival automakers’ apps, especially ones that add to the driver’s experience and ease.

Rival apps such as ‘Tesla Remote S’ on the Apple Watch allow drivers to honk the horn and even flash the lights. Volkswagen’s Car-net lets drivers check whether they closed the car windows. Hyundai Blue Link can even help drivers avoid speeding tickets, start the engine, and find the car’s location.

http://www.wsj.com/articles/review-google-keyboard-for-android-makes-one-handed-typing

Review: Google Keyboard for Android Makes One-Handed Typing Easy

A better way to type, for big hands and small hands alike

Finally, you don’t have to use two hands to text on an Android phablet. WSJ’s Nathan Olivarez-Giles reviews the Google Keyboard app. Photo/Video: Emily Prapuolenis/The Wall Street Journal

As the Android world awaits N, the latest version of the mobile operating system, Alphabet Inc.’s Google released some cool new phone software as a sort of consolation: an all-new Android keyboard geared for one-handed typing.

Phablets may be great for watching video, reading email, surfing the Web and choosing the right filter for your group selfie. But these same massive displays are terrible when it comes to that all-important activity, typing.

The new Google Keyboard app, rolling out now in the Google Play app store, addresses that need. Once installed, you give a long press to the keyboard’s comma key, and up pops a green phone-in-hand icon. Slide up to it, and the keyboard squeezes over to the right. You can pin the skinnier keyboard to the left as well, so this will work for both righties and lefties.

The Google Keyboard app also lets you adjust the keyboard’s height. Photo: Emily Prapuolenis/The Wall Street Journal

It’s automatic relief for anyone who knows what it’s like to have to awkwardly twist your hand around a phablet to reach those edge keys. I often worry that I’m going to drop a phablet when I’m pecking out emails with one hand. With Google’s keyboard in use, that stress is gone, and the effect was even more dramatic when smaller-handed friends and co-workers tried the new Google Keyboard.

Even though the one-handed version of the keyboard is, by design, shrunk down, it still works great with gesture typing: Sweep your fingers across keys to spell out words quickly, while keeping the phone stable in your hand.

Another nice ergonomic feature: You can adjust the keyboard’s height in the app’s settings, making it taller or shorter as needed. You can also turn on or off borders around keys—some people may want clear targets for where their fingers should land.

Keys with borders or not, your choice. Photo: Emily Prapuolenis/The Wall Street Journal

Another great new precision feature: When typing a word, hold down your finger on the keyboard’s space bar and drag it left or right. This brings up a cursor that lets you move between letters. Since you can only drag the cursor the width of the space bar, it’s better for moving through a few words than a complete sentence.

Most Android updates only come to certain phones on certain carriers. But the fact that this keyboard is served up as its own individual app means anyone running Android 4.0 Ice Cream or newer, with access to the Google Play app store, can get it. (If you have a Nexus phone, just update the already installed Google Keyboard app to get the new perks.) So, if you’re an Android user who has a tough time reaching all the keys when you type, give Google Keyboard a shot.

http://www.kurzweilai.net/a-robot-for-soft-tissue-surgery-outperforms-surgeons

A robot for ‘soft tissue’ surgery outperforms surgeons

Let’s say you’re having intestinal surgery. Which do you choose: human or robot surgeon?
May 4, 2016

The STAR robot suturing intestinal tissue (credit: Azad Shademan et al./Science Translational Medicine)

Can a robot handle the slippery stuff of soft tissues that can move and change shape in complex ways as stitching goes on, normally requiring a surgeon’s skill to respond to these changes to keep suturing as tightly and evenly as possible?

A Johns Hopkins University and Children’s National Health System research team decided to find out by using their “Smart Tissue Autonomous Robot” (STAR) to perform a procedure called anastomosis* (joining two tubular structures, such as blood vessels, together) on pig intestinal tissue.

The researchers published the results today in an open-access paper in the journal Science Translational Medicine. The robot surgeon took longer (up to 57 minutes vs. 8 minutes for human surgeons) but “the machine does it better,” according to Peter Kim, M.D., Professor of Surgery at the Sheikh Zayed Institute for Pediatric Surgical Innovation, Children’s National Health System in Washington, D.C. Kim said the procedure was about 60 percent fully autonomous and 40 percent supervised (“we made some minor adjustments”), but that it can be made fully autonomous.

“The equivalent of a fancy sewing machine”

Automating soft tissue surgery. Left: The STAR system integrates near-infrared fluorescent (NIRF) imaging of markers (added by the surgeon to allow STAR to track surgical motions through blood and tissue occlusions), 3D plenoptic vision (captures the intensity and direction of the light rays emanating from the markers), force sensing, submillimeter positioning, and actuated surgical tools. Right: surgical site detail during linear suturing task showing a longitudinally cut porcine intestine suspended by five stay sutures. (credit: Azad Shademan et al./Science Translational Medicine)

STAR was developed by Azad Shademan and associates at the Sheikh Zayed Institute. It features a 3D imaging system and a near-infrared sensor to spot fluorescent markers along the edges of the tissue to keep the robotic suture needle on track. Unlike most other robot-assisted surgical systems, such as the Da Vinci Si, it operates without human hands-on guidance (but under the surgeon’s supervision).

In the research, STAR’s robotic sutures were compared with the work of five surgeons completing the same procedure using three methods: open surgery, laparoscopic surgery, and robot-assisted surgery. Researchers compared the consistency of suture spacing, the pressure at which the seam leaked, the mistakes that required removing the needle from the tissue or restarting the robot, and completion time.
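
One of those metrics, the consistency of suture spacing, is easy to illustrate. The Python sketch below summarizes a row of suture positions by its mean spacing and coefficient of variation; the numbers are hypothetical placeholders, not measurements from the study.

```python
# Illustrative suture-spacing consistency metric (hypothetical data).
import statistics

def spacing_consistency(suture_positions_mm):
    """Return (mean spacing, coefficient of variation) for a row of sutures."""
    spacings = [b - a for a, b in zip(suture_positions_mm, suture_positions_mm[1:])]
    mean = statistics.mean(spacings)
    cv = statistics.stdev(spacings) / mean  # lower = more consistent
    return mean, cv

# Hypothetical suture positions (mm along the seam) for two operators.
robot   = [0.0, 3.1, 6.0, 9.1, 12.0, 15.1]
surgeon = [0.0, 2.5, 6.4, 8.8, 12.9, 15.2]

for name, positions in [("autonomous robot", robot), ("human surgeon", surgeon)]:
    mean, cv = spacing_consistency(positions)
    print(f"{name}: mean spacing {mean:.1f} mm, coefficient of variation {cv:.2f}")
```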

The system promises to improve results for patients and make the best surgical techniques more widely available, according to the researchers. Putting a robot to work in this form of surgery “really levels the playing field,” said Simon Leonard, a computer scientist and assistant research professor in the Johns Hopkins Whiting School of Engineering, who worked for four years to program the robotic arm to precisely stitch together pieces of soft tissue.

As Leonard put it, they’re designing an advanced surgical tool, “the equivalent of a fancy sewing machine.”

* Anastomosis is performed more than a million times a year in the U.S.; more than 44.5 million such soft-tissue surgeries are performed in the U.S. each year. According to the researchers, complications such as leakage along the seams occur nearly 20 percent of the time in colorectal surgery and 25 to 30 percent of the time in abdominal surgery.


Carla Schaffer/AAAS | Robotic Surgery Just Got More Autonomous


Abstract of Supervised autonomous robotic soft tissue surgery

The current paradigm of robot-assisted surgeries (RASs) depends entirely on an individual surgeon’s manual capability. Autonomous robotic surgery—removing the surgeon’s hands—promises enhanced efficacy, safety, and improved access to optimized surgical techniques. Surgeries involving soft tissue have not been performed autonomously because of technological limitations, including lack of vision systems that can distinguish and track the target tissues in dynamic surgical environments and lack of intelligent algorithms that can execute complex surgical tasks. We demonstrate in vivo supervised autonomous soft tissue surgery in an open surgical setting, enabled by a plenoptic three-dimensional and near-infrared fluorescent (NIRF) imaging system and an autonomous suturing algorithm. Inspired by the best human surgical practices, a computer program generates a plan to complete complex surgical tasks on deformable soft tissue, such as suturing and intestinal anastomosis. We compared metrics of anastomosis—including the consistency of suturing informed by the average suture spacing, the pressure at which the anastomosis leaked, the number of mistakes that required removing the needle from the tissue, completion time, and lumen reduction in intestinal anastomoses—between our supervised autonomous system, manual laparoscopic surgery, and clinically used RAS approaches. Despite dynamic scene changes and tissue movement during surgery, we demonstrate that the outcome of supervised autonomous procedures is superior to surgery performed by expert surgeons and RAS techniques in ex vivo porcine tissues and in living pigs. These results demonstrate the potential for autonomous robots to improve the efficacy, consistency, functional outcome, and accessibility of surgical techniques.

http://www.kurzweilai.net/bee-model-could-be-breakthrough-for-autonomous-drone-development

Bee model could be breakthrough for autonomous drone development

May 6, 2016

A visualization of the model taken at one time point while running. Each sphere represents a computational unit, with lines representing the connection between units. The colors represent the output of each unit. The left and right of the image are the inputs to the model and the center is the output, which is used to guide the virtual bee down a simulated corridor. (credit: The University of Sheffield)

A computer model of how bees use vision to avoid hitting walls could be a breakthrough in the development of autonomous drones.

Bees control their flight using the speed of motion (optic flow) of the visual world around them. A study by scientists in the University of Sheffield’s Department of Computer Science suggests how motion-direction-detecting circuits could be wired together to also detect motion speed, which is crucial for controlling bees’ flight.

“Honeybees are excellent navigators and explorers, using vision extensively in these tasks, despite having a brain of only one million neurons,” said Alex Cope, PhD, lead researcher on the paper. “Understanding how bees avoid walls, and what information they can use to navigate, moves us closer to the development of efficient algorithms for navigation and routing, which would greatly enhance the performance of autonomous flying robotics,” he added.

“Experimental evidence shows that they use an estimate of the speed that patterns move across their compound eyes (angular velocity) to control their behavior and avoid obstacles; however, the brain circuitry used to extract this information is not understood,” the researchers note. “We have created a model that uses a small number of assumptions to demonstrate a plausible set of circuitry. Since bees only extract an estimate of angular velocity, they show differences from the expected behavior for perfect angular velocity detection, and our model reproduces these differences.”
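
The control idea behind corridor-centering can be sketched in a few lines: steer so that the angular-velocity (optic flow) estimates from the two eyes stay balanced. The Python sketch below implements that classic rule with highly simplified geometry; it is an illustration of the behaviour, not the Sheffield group’s neural model.

```python
# Conceptual corridor-centering rule: drift away from the wall whose texture
# appears to move faster. Geometry and gain values are simplified placeholders.
def angular_velocity(forward_speed, distance_to_wall):
    """Apparent angular speed of wall texture grows as the wall gets closer."""
    return forward_speed / distance_to_wall

def centering_step(y, corridor_width, forward_speed, gain=0.05):
    """Nudge the lateral position y to reduce the left/right flow imbalance."""
    left_flow = angular_velocity(forward_speed, y)
    right_flow = angular_velocity(forward_speed, corridor_width - y)
    return y + gain * (left_flow - right_flow)  # move away from the faster side

# Simulate a virtual bee starting off-centre in a 1 m wide corridor.
y, width, speed = 0.2, 1.0, 0.5
for _ in range(200):
    y = centering_step(y, width, speed)
print(f"lateral position after 200 steps: {y:.3f} m (corridor centre at {width / 2} m)")
```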

Their open-access paper is published in PLOS Computational Biology.


Abstract of A Model for an Angular Velocity-Tuned Motion Detector Accounting for Deviations in the Corridor-Centering Response of the Bee

We present a novel neurally based model for estimating angular velocity (AV) in the bee brain, capable of quantitatively reproducing experimental observations of visual odometry and corridor-centering in free-flying honeybees, including previously unaccounted for manipulations of behaviour. The model is fitted using electrophysiological data, and tested using behavioural data. Based on our model we suggest that the AV response can be considered as an evolutionary extension to the optomotor response. The detector is tested behaviourally in silico with the corridor-centering paradigm, where bees navigate down a corridor with gratings (square wave or sinusoidal) on the walls. When combined with an existing flight control algorithm the detector reproduces the invariance of the average flight path to the spatial frequency and contrast of the gratings, including deviations from perfect centering behaviour as found in the real bee’s behaviour. In addition, the summed response of the detector to a unit distance movement along the corridor is constant for a large range of grating spatial frequencies, demonstrating that the detector can be used as a visual odometer.

http://news.ubc.ca/2016/05/06/epigenetics/

Epigenetics

Michael Kobor, associate professor in UBC’s department of medical genetics and a scientist at the Centre for Molecular Medicine and Therapeutics, explains how the environment affects our genes on a segment about epigenetics on Roundhouse Radio.

“Our research program is focused on this idea that’s gained a lot of traction in the last few years that early life environments and even back to pregnancy might affect our health and our behaviours later in life,” said Kobor. “Diverse social environments like poverty and family environments can literally get under your skin and chemically modify your DNA and that leads to changes in its expression.”