http://appleinsider.com/articles/17/10/31/apple-ios-111-now-available-for-iphone-ipad-ipod-touch-with-70-new-emoji-other-improvements

Apple iOS 11.1 now available for iPhone, iPad, iPod touch with 70 new emoji, other improvements

After a month-long testing process, Apple has released iOS 11.1, which includes a fix for the KRACK Wi-Fi exploit and a large number of accessibility fixes.

In addition to patching the KRACK Wi-Fi attack vector, Apple’s iOS 11.1 also brings back the 3D Touch multitasking app switcher.

Apple’s notes on the update proclaim that it “introduces over 70 new emoji.” Recipients must also be running iOS 11.1, or the new emoji won’t display properly.

Other fixes include the resolution of several issues in Photos, improved accessibility with many VoiceOver enhancements and better braille device support, repairs to Apple Watch data handoff, and a patch for inaccurate location data from external GPS accessories.

Still absent are Apple Pay Cash and AirPlay 2.

Three “sub-point” updates preceded the iOS 11.1 release, fixing some glaring bugs, all of which have been rolled into iOS 11.1.

The update is a 311.3 MB download on an iPhone 7 Plus.

http://linuxgizmos.com/rpi-zero-w-clone-offers-quad-core-power-for-15/

Raspberry Pi Zero W clone offers quad-core power for $15

SinoVoip’s Linux-friendly, 60 x 30mm Banana Pi M2 Zero (BPI-M2 Zero) SBC closely mimics the Raspberry Pi Zero W, but has a faster Allwinner H2+.

Just as we were trumpeting the $23 BPI-M2 Magic as being the “smallest, cheapest Banana Pi yet,” SinoVoip has launched an even tinier and more affordable Linux/Android hacker board on AliExpress. The WiFi-enabled Banana Pi M2 Zero (BPI-M2 Zero), which was revealed back in July, is now selling for only $15 with the standard 512MB RAM, or $21.53 including shipping to the U.S.

 
BPI-M2 Zero, front and back
The BPI-M2 Zero SBC openly mimics the $10 Raspberry Pi Zero W in both dimensions and features, but offers a 1.2GHz, quad-core, Cortex-A7 Allwinner H2+ compared to the RPi Zero W’s single-core, 1GHz ARM11-based Broadcom BCM2835. Like other Banana Pi boards, it is an open source hardware and software design with community support.

The Allwinner H2+ is much like the widely adopted Allwinner H3 SoC except that it tops out at HD resolution instead of 4K. This is the same SoC found on two other RPi Zero W clones: FriendlyElec’s $8 (256MB) to $12 (512MB) NanoPi Duo and Shenzhen Xunlong’s $7 to $12.30 (512MB) Orange Pi Zero.


BPI-M2 Zero (bottom) with Raspberry Pi Zero W (top)
The BPI-M2 Zero measures 60 x 30mm (1,800 sq. mm) compared to 65 x 30mm (1,950 sq. mm) for the RPi Zero W. Since the layout is almost identical, you can use a Zero W case.

By comparison, the NanoPi Duo measures 50 x 25.4mm (1,270 sq. mm) and the Orange Pi Zero is 48 x 46mm (2,208 sq. mm). The recently released BPI-M2 Magic has a footprint of 51 x 51mm (2,601 sq. mm). The BPI-M2 Zero’s 35-gram weight, however, is considerably greater than its rivals’.

The BPI-M2 Zero is a much closer clone of the Raspberry Pi Zero W than the other imitators. It has a standard allotment of 512MB RAM rather than a 256MB entry point like the NanoPi Duo and Orange Pi Zero, and unlike those two boards, it offers Bluetooth in addition to WiFi.


BPI-M2 Zero detail view
The SBC is further equipped with a mini-HDMI port, microSD slot, micro-USB OTG port, and 5V power-only micro-USB port. The BPI-M2 Zero lacks the composite video header of the Zero W, but similarly provides a CSI camera connector. Also like the Zero W, it has a 40-pin GPIO header, compared to the 32- and 24-pin headers of the NanoPi Duo and Orange Pi Zero, respectively.

Specifications listed for the BPI-M2 Zero include:

  • Processor — Allwinner H2+ (4x Cortex-A7 @ 1.2GHz); ARM Mali-400 MP2 GPU @ 600MHz
  • Memory — 512MB DDR3 SDRAM
  • Storage — MicroSD slot for up to 64GB
  • Wireless:
    • 802.11b/g/n plus Bluetooth 4.0 dual mode (Ampak AP6212)
    • Optional dual mode Broadcom AP6335 with 802.11ac and BT 4.0 or Gigafu Tech AP6181 (2.4GHz WiFi and no BT, but low power consumption)
    • RF connector
  • Display — Mini-HDMI port with audio for up to 1080p60
  • Other I/O:
    • Micro-USB 2.0 OTG port (with power support)
    • MIPI-CSI for 5MP camera or 1080p @ 30fps video input
    • Debug UART/ground header with 3x GPIO
    • 40-pin RPi 3-compatible expansion connector (see the sketch below)
  • Other features — 2x LEDs; power and reset buttons
  • Power — 5V/2A via micro-USB
  • Dimensions — 60 x 30mm
  • Weight — 35 g
  • Operating system — Android, Debian, Ubuntu, and Raspbian images
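
Because that 40-pin header is RPi-compatible, a board image that ships an RPi.GPIO-style Python module can drive it with familiar code. Below is a minimal LED-blink sketch under that assumption; the module name, the BCM numbering, and the choice of pin 18 are illustrative assumptions rather than anything from SinoVoip’s documentation.

    # Blink an LED wired to the 40-pin header. Assumes the board image provides
    # an RPi.GPIO-compatible module (the actual module name varies by distribution).
    import time
    import RPi.GPIO as GPIO

    LED_PIN = 18                    # BCM pin 18 on the RPi-style header (assumed free)

    GPIO.setmode(GPIO.BCM)          # use Broadcom-style pin numbering
    GPIO.setup(LED_PIN, GPIO.OUT)   # drive the pin as an output

    try:
        for _ in range(10):         # blink ten times
            GPIO.output(LED_PIN, GPIO.HIGH)
            time.sleep(0.5)
            GPIO.output(LED_PIN, GPIO.LOW)
            time.sleep(0.5)
    finally:
        GPIO.cleanup()              # always release the pins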

Further information

The Banana Pi M2 Zero (BPI-M2 Zero) is available for $15, or $21.53 including shipping to the U.S. More information may be found on the BPI-M2 Zero AliExpress shopping page and Banana Pi wiki.

http://www.imeche.org/news/news-article/spider-silk-could-improve-microphones-and-hearing-aids1

Spider silk could improve microphones and hearing aids

Amit Katwala

(Credit: iStock)

Fine fibres such as spider silk could lead to new and better microphones that sense airflow fluctuations rather than pressure changes.

Some insects and arachnids, including mosquitoes, flies and spiders, sense sound using fine hairs on their bodies that move with the sound waves travelling through the air. “We use our eardrums which pick up the direction of sound based on pressure, but most insects actually hear with their hairs,” explained Ron Miles, a professor at Binghamton University in New York.

Working alongside graduate student Jian Zhou, Miles recreated a similar system inside a microphone, which had better directional sensing across a wider range of frequencies than traditional models. It could give hearing aid or smartphone users the ability to cancel out background noise more effectively when having a conversation in a crowded area.

To create their microphone, Miles and Zhou used spider silk, which is thin enough that it moves with the air when hit by soundwaves. “This can even happen with infrasound at frequencies as low as 3 Hertz,” said Miles – that’s the equivalent of hearing the normally inaudible rumble of tectonic plates moving in an earthquake.

To translate the movement of the spider silk into an electronic signal, the researchers coated it with gold and placed it in a magnetic field. “It’s actually a fairly simple way to make an extremely effective microphone that has better directional capabilities across a wide range of frequencies,” said Miles.
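
As a rough worked example of that transduction step (the numbers below are illustrative assumptions, not figures from the study), a conducting fiber of length L moving at velocity v(t) perpendicular to a magnetic field B produces a voltage by electromagnetic induction:

    V(t) = B · L · v(t)

With, say, B = 0.35 T, L = 3 cm and a peak fiber velocity of 1 mm/s, the output is V = 0.35 × 0.03 × 0.001 = 10.5 µV: tiny but measurable, and proportional to air velocity rather than pressure.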

Rob Malkin, an expert in bio-inspired acoustic devices from the University of Bristol, told Professional Engineering that the research demonstrated yet again “how a beautiful design from the insect world can lead to advancements in microphone engineering”.

He called the work a “step change” in how microphones could function in the future, as it expanded the narrow range of frequencies that insects can hear at into a spectrum broad enough for humans. “The work is very encouraging as it shows that the physical process being exploited by many insects – that is hearing with hairs – is relatively simple, and should make the manufacture of broadband devices finally possible,” Malkin added.

http://www.kurzweilai.net/a-tool-to-debug-black-box-deep-learning-neural-networks

A tool to debug ‘black box’ deep-learning neural networks

Brings transparency to self-driving cars and other self-taught systems
October 30, 2017

Oops! A new debugging tool called DeepXplore generates real-world test images meant to expose logic errors in deep neural networks. The darkened photo at right tricked one set of neurons into telling the car to turn into the guardrail. After catching the mistake, the tool retrains the network to fix the bug. (credit: Columbia Engineering)

Researchers at Columbia and Lehigh universities have developed a method for error-checking the reasoning of the thousands or millions of neurons in unsupervised (self-taught) deep-learning neural networks, such as those used in self-driving cars.

Their tool, DeepXplore, feeds confusing, real-world inputs into the network to expose rare instances of flawed reasoning, such as the incident last year when a Tesla operating autonomously collided with a truck it mistook for a cloud, killing the driver. Deep learning systems don’t explain how they make their decisions, which makes them hard to trust.

Modeled after the human brain, deep learning uses layers of artificial neurons that process and consolidate information. This results in a set of rules to solve complex problems, from recognizing friends’ faces online to translating email written in Chinese. The technology has achieved impressive feats of intelligence, but as more tasks become automated this way, concerns about safety, security, and ethics are growing.

Finding bugs by generating test images

Debugging the neural networks in self-driving cars is an especially slow and tedious process, with no way to measure how thoroughly logic within the network has been checked for errors. Current limited approaches include randomly feeding manually generated test images into the network until one triggers a wrong decision (telling the car to veer into the guardrail, for example); and “adversarial testing,” which automatically generates test images that it alters incrementally until one image tricks the system.
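
To make the second approach concrete, here is a toy sketch of incremental adversarial perturbation against a stand-in linear “network”; everything in it (the model, the step size, the data) is an illustrative assumption, not one of the evaluated systems.

    # Toy adversarial-testing loop: nudge an input until the model's answer flips.
    import numpy as np

    rng = np.random.default_rng(1)
    W = rng.normal(size=(16, 3))        # stand-in "network": a single linear layer
    x = rng.normal(size=16)             # stand-in "image"
    original = int(np.argmax(x @ W))    # the decision we are trying to overturn

    for step in range(100):
        if int(np.argmax(x @ W)) != original:
            print(f"decision flipped after {step} steps")
            break
        # For a linear model, the gradient of the original class's score with
        # respect to the input is just that class's weight column; step against it.
        x -= 0.05 * W[:, original]
    else:
        print("no flip within 100 steps")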

The new DeepXplore solution — presented Oct. 29, 2017 in an open-access paper at ACM’s Symposium on Operating Systems Principles in Shanghai — can find a wider variety of bugs than random or adversarial testing by using the network itself to generate test images likely to cause neuron clusters to make conflicting decisions, according to the researchers.

To simulate real-world conditions, photos are lightened and darkened, and made to mimic the effect of dust on a camera lens, or a person or object blocking the camera’s view. A photo of the road may be darkened just enough, for example, to cause one set of neurons to tell the car to turn left, and two other sets of neurons to tell it to go right.

After inferring that the first set misclassified the photo, DeepXplore automatically retrains the network to recognize the darker image and fix the bug. Using optimization techniques, researchers have designed DeepXplore to trigger as many conflicting decisions with its test images as it can while maximizing the number of neurons activated.
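
That joint objective can be sketched in a few dozen lines. The toy below (random two-layer networks, finite-difference gradients instead of backpropagation) is an assumed simplification of the idea, not the authors’ DeepXplore code: perturb an input so two similar models disagree while a previously quiet neuron fires.

    # Differential testing with a neuron-coverage bonus, on two tiny random MLPs.
    import numpy as np

    rng = np.random.default_rng(0)

    def make_net():
        # One hidden ReLU layer of 8 neurons, 3 output classes.
        return {"W1": rng.normal(size=(4, 8)), "b1": np.zeros(8),
                "W2": rng.normal(size=(8, 3)), "b2": np.zeros(3)}

    def forward(net, x):
        h = np.maximum(0, x @ net["W1"] + net["b1"])   # hidden activations
        return h, h @ net["W2"] + net["b2"]            # logits

    net_a, net_b = make_net(), make_net()              # two "similar" systems
    x = rng.normal(size=4)

    def objective(x):
        h_a, out_a = forward(net_a, x)
        _, out_b = forward(net_b, x)
        c = int(np.argmax(out_a))
        # Push the nets apart on net A's top class, and reward firing neuron 0
        # of net A (a crude stand-in for DeepXplore's neuron-coverage term).
        return (out_a[c] - out_b[c]) + 0.5 * h_a[0]

    # Gradient ascent on the input via central finite differences.
    eps, lr = 1e-4, 0.05
    for _ in range(200):
        g = np.zeros_like(x)
        for i in range(len(x)):
            d = np.zeros_like(x); d[i] = eps
            g[i] = (objective(x + d) - objective(x - d)) / (2 * eps)
        x += lr * g

    _, out_a = forward(net_a, x)
    _, out_b = forward(net_b, x)
    print("net A predicts", int(np.argmax(out_a)), "| net B predicts", int(np.argmax(out_b)))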

“You can think of our testing process as reverse-engineering the learning process to understand its logic,” said co-developer Suman Jana, a computer scientist at Columbia Engineering and a member of the Data Science Institute. “This gives you some visibility into what the system is doing and where it’s going wrong.”

Testing their software on 15 state-of-the-art neural networks, including Nvidia’s Dave 2 network for self-driving cars, the researchers uncovered thousands of bugs missed by previous techniques. They report activating up to 100 percent of network neurons — 30 percent more on average than either random or adversarial testing — and bringing overall accuracy up to 99 percent in some networks, a 3 percent improvement on average.*

The ultimate goal: certifying a neural network is bug-free

Still, a high level of assurance is needed before regulators and the public are ready to embrace robot cars and other safety-critical technology like autonomous air-traffic control systems. One limitation of DeepXplore is that it can’t certify that a neural network is bug-free. That requires isolating and testing the exact rules the network has learned.

A new tool developed at Stanford University, called ReluPlex, uses the power of mathematical proofs to do this for small networks. Costly in computing time, but offering strong guarantees, this small-scale verification technique complements DeepXplore’s full-scale testing approach, said ReluPlex co-developer Clark Barrett, a computer scientist at Stanford.

“Testing techniques use efficient and clever heuristics to find problems in a system, and it seems that the techniques in this paper are particularly good,” he said. “However, a testing technique can never guarantee that all the bugs have been found, or similarly, if it can’t find any bugs, that there are, in fact, no bugs.”

DeepXplore has applications beyond self-driving cars. It can find malware disguised as benign code in anti-virus software, and uncover discriminatory assumptions baked into predictive policing and criminal sentencing software, for example.

The team has made their open-source software public for other researchers to use, and launched a website to let people upload their own data to see how the testing process works.

* The team evaluated DeepXplore on real-world datasets including Udacity self-driving car challenge data, image data from ImageNet and MNIST, Android malware data from Drebin, PDF malware data from Contagio/VirusTotal, and production-quality deep neural networks trained on these datasets, such as those ranked at the top of the Udacity self-driving car challenge. Their results show that DeepXplore found thousands of incorrect corner-case behaviors (e.g., self-driving cars crashing into guard rails) in 15 state-of-the-art deep learning models with a total of 132,057 neurons trained on five popular datasets containing around 162 GB of data.


Abstract of DeepXplore: Automated Whitebox Testing of Deep Learning Systems

Deep learning (DL) systems are increasingly deployed in safety- and security-critical domains including self-driving cars and malware detection, where the correctness and predictability of a system’s behavior for corner case inputs are of great importance. Existing DL testing depends heavily on manually labeled data and therefore often fails to expose erroneous behaviors for rare inputs.

We design, implement, and evaluate DeepXplore, the first whitebox framework for systematically testing real-world DL systems. First, we introduce neuron coverage for systematically measuring the parts of a DL system exercised by test inputs. Next, we leverage multiple DL systems with similar functionality as cross-referencing oracles to avoid manual checking. Finally, we demonstrate how finding inputs for DL systems that both trigger many differential behaviors and achieve high neuron coverage can be represented as a joint optimization problem and solved efficiently using gradient-based search techniques.

DeepXplore efficiently finds thousands of incorrect corner case behaviors (e.g., self-driving cars crashing into guard rails and malware masquerading as benign software) in state-of-the-art DL models with thousands of neurons trained on five popular datasets including ImageNet and Udacity self-driving challenge data. For all tested DL models, on average, DeepXplore generated one test input demonstrating incorrect behavior within one second while running only on a commodity laptop. We further show that the test inputs generated by DeepXplore can also be used to retrain the corresponding DL model to improve the model’s accuracy by up to 3%.

http://uk.businessinsider.com/apple-homepod-spotify-2017-10

Spotify won’t work with Siri on Apple’s HomePod

Apple HomePod in white and black (AP)

  • Apple’s smart speaker, HomePod, will go on sale in December for $349. People will use it by talking to it. 
  • Apple wants developers to make Siri apps for the HomePod, but only in a few categories — messaging, lists, and notes.
  • This means that users won’t be able to tell the HomePod to play music from Spotify, although Apple’s streaming service, Apple Music, is supported. 

 

Apple encouraged developers on Monday to make apps for HomePod, Apple’s new smart speaker that will go on sale before the end of the year.

Because the HomePod does not have a screen, users are expected to interact with it primarily by talking to it and to Apple’s voice assistant, Siri.

But the third-party apps that Apple is opening the doors for on the HomePod are fairly limited: Apple says that only apps built around messaging, lists, and notes can integrate with Siri on the HomePod, and they will use a nearby iPhone or iPad to process commands.

The lack of music app support means that users won’t be able to play Spotify on the HomePod the way it was intended — by using your voice to tell it to play a song. (Users also won’t be able to order an Uber or Lyft by speaking to the HomePod, or make a call on Skype, based on the limited app categories.)

Apple Music users, of course, can call up their favorite tracks or albums by speaking to the smart speaker.

“We are always working to have Spotify available across all platforms, but we don’t have any further information to share at this time,” a Spotify representative told Business Insider, pointing out that users will be able to play Spotify on the HomePod speaker by using the AirPlay feature from an iPhone or iPad. Users will control the music by tapping around the Spotify smartphone app, rather than by using verbal commands.

“Third party apps like Spotify would just play music on HomePod using AirPlay 2, as we said back in June,” an Apple representative told Business Insider. AirPlay enables HomePod to be used like other wireless speakers.

When Apple first opened up Siri to third-party developers it took a similar approach. At first, Apple limited Siri access to apps only in a few categories, such as ride-hailing and fitness, although it expanded the number of categories earlier this summer.

However, Siri still does not support music apps on the iPhone and Spotify has no Siri features.

http://wccftech.com/best-premium-iphone-ipad-apps-gone-free-today-download-them-all/

Best Premium iPhone & iPad Apps Gone FREE Today – Download them All

These are the best premium iPhone and iPad apps that have gone free for a limited period of time. Grab them from here before they return to their original prices.

Paid iPhone & iPad Apps Have Gone FREE. Download them All Today.

Today is a great day, if you ask us. Though it’s a Monday, some great premium iPhone and iPad apps have gone absolutely free for everyone to download. So the deal is simple: pick up your phone or tablet and start downloading these wonderful offers before they return to their original prices.

http://www.itpro.co.uk/desktop-hardware/27763/raspberry-pi-4-google-announces-partnership-with-raspberry-pi-foundation-2

Raspberry Pi 4: Google announces partnership with Raspberry Pi Foundation

Raspberry Pi fans can now take advantage of Google’s AI technology

The last five years have seen a lot of advancements in technology, but few products have been as successful as the humble Raspberry Pi.

Created by a team of ex-Cambridge University staff headed by Eben Upton, the credit card-sized $35 microcomputer was originally intended to allow kids to get into computing cheaply and easily, but has inadvertently inspired a groundswell movement of makers, hackers and hobby coders, who have built a fanatical following around the little device.

The Raspberry Pi is at the centre of hacked-together DIY smarthome projects, enterprise IoT deployments and even space missions. The range now includes devices powerful enough to act as basic desktop machines, as well as the even more bite-sized Raspberry Pi Zero. With every successive iteration getting more powerful and more versatile, the Pi community is eagerly awaiting the next generation of the little computer. Here’s everything we know so far.

Latest Raspberry Pi 4 News

04/05/2017: The Raspberry Pi can now harness the power of Google’s AI, thanks to a new add-on board.

Produced as part of a new collaboration between Google and the Raspberry Pi Foundation, the Voice HAT (Hardware Accessory on Top) board allows users to add voice control to their projects.

Budding makers will also have access to some of Google’s most powerful tools, including the Google Assistant SDK, which provides the brains behind Google’s AI helper, and the Google Cloud Speech API, which the company uses internally for speech recognition tasks.
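
For a flavour of what the Cloud Speech side looks like from code, here is a minimal transcription sketch using Google’s google-cloud-speech Python client. The client version, the audio parameters and the file name are assumptions, it requires Google Cloud credentials, and it is not the MagPi kit’s own code.

    # Transcribe a short WAV clip with the Google Cloud Speech API.
    # Assumes `pip install google-cloud-speech` and that the
    # GOOGLE_APPLICATION_CREDENTIALS environment variable points at a key file.
    from google.cloud import speech

    client = speech.SpeechClient()

    with open("command.wav", "rb") as f:        # hypothetical 16kHz mono recording
        audio = speech.RecognitionAudio(content=f.read())

    config = speech.RecognitionConfig(
        encoding=speech.RecognitionConfig.AudioEncoding.LINEAR16,
        sample_rate_hertz=16000,
        language_code="en-US",
    )

    response = client.recognize(config=config, audio=audio)
    for result in response.results:
        print(result.alternatives[0].transcript)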

The new Voice HAT boards are being given away with every copy of issue 57 of the MagPi, along with everything you need to get started with your first voice-controlled projects. Fans will have to be quick, though – this issue is expected to be very popular indeed.

Raspberry Pi 4 release date and availability

Fans will be disappointed to hear that the next iteration of the Raspberry Pi won’t be arriving until at least 2019. Eben Upton, co-creator and co-founder of the Raspberry Pi Foundation, has previously stated the current Raspberry Pi 3 would have a minimum life span of three years.

There is a possibility that the new Raspberry Pi 4 could be delayed beyond this, as the Foundation has effectively hit the limit of what can be achieved using the 40nm manufacturing process. Upton hasn’t given up, however, stating: “we’ll get there eventually”.

Whenever it does arrive, fans will need to act quickly: every new Raspberry Pi launch has faced massive demand and quickly sold out its initial production run, and the Raspberry Pi 4 is almost guaranteed to do the same. Prospective buyers should also expect a cap on the number of devices they can order at once, a measure to thwart potential scalpers.

Raspberry Pi 4 specs and features

Given the problems facing development, there’s still no word on the technical specifications likely to feature in the Pi 4. Since the Foundation is struggling to get more out of the current 40nm process, it is likely we’ll see a switch to an alternative manufacturing process, offering more efficient silicon.

As for features, the Raspberry Pi 3 already includes Bluetooth and Wi-Fi support, so we’re unlikely to see any substantial networking upgrades. You can also expect the form factor to stay the same, given the team’s focus on interoperability between Pi generations.

Ports are an area where we may see some real change. For example, Thunderbolt 3-compatible USB Type-C ports can handle power, data, and video transfer – meaning that one USB-C port could do the job of every existing input found on the Pi 3. They’re also substantially smaller than full-size USB Type-A ports, which would allow the Pi 4 to have a much slimmer profile.

Raspberry Pi 4 pricing

One of the main reasons why the original Raspberry Pi was so cheap was that it was intended to be affordable for parents whose children wanted to get into computing. This mission statement is still core to everything that the Raspberry Pi Foundation does, so we wouldn’t expect any substantial price increases with the Raspberry Pi 4. Creator Eben Upton has been very firm about wanting to keep the price as close to $35 as possible.

Having said that, you’ll likely want to shell out a little extra when the Pi 4 eventually does arrive, as it will come with little more than the machine itself. Any accessories such as keyboards, displays, cases or cables will have to be purchased separately.

Luckily, resellers like Pimoroni, RS Components and The Pi Hut will almost certainly be selling ready-made starter kits with everything you’ll need to get going. Be warned, however; it’s incredibly likely, based on demand for the last few generations of Pi, that demand will far outstrip supply. This will probably lead to shortages at launch, and may mean that orders are limited to one per customer in order to deter scalpers.

Previous news

10/03/2017: Raspberry Pi creator Eben Upton has told IT Pro that the development of future devices like the Raspberry Pi 4 is likely to be made more difficult simply by the fact that the hardware is reaching its technical limitations.

“We’re kind of at the end of the road for 40 nanometer,” he said in an interview. “There’s not much more you can do in that process, because ultimately you’re limited by thermals. In the end, you can add as much silicon area as you want, because if you can’t afford to toggle the transistors in the silicon because the thing will cook, then you can’t get any faster.”

However, Upton made it clear that this does not mean that the Raspberry Pi Foundation will be giving up on making hardware, saying: “I’d love to do more tinkering.”

26/01/2017: The Raspberry Pi Foundation has announced that Google will be helping to integrate AI tools into the Raspberry Pi, presumably arriving with the launch of the Raspberry Pi 4.

Google is inviting developers to work with it to introduce such features to the Raspberry Pi ecosystem, and has asked the Raspberry Pi community to give feedback via a survey.

“Hi, makers! Thank you for taking the time to take our survey,” Google wrote in its announcement. “We at Google are interested in creating smart tools for makers, and want to hear from you about what would be most helpful.  As a thank you, we will share our findings with the community so that you can learn more about makers around the world.”

Whether developers would like to see facial or emotional recognition, speech-to-text translation or natural language processing, Google looks eager to help them integrate such features into their Raspberry Pi innovations.

“The survey will help [Google] get a feel for the Raspberry Pi community, but it’ll also help us get the kinds of services we need,” the Raspberry Pi Foundation said.

It is encouraging everyone active in the community to fill out Google’s survey and help users get the tools they need.

https://www.theguardian.com/lifeandstyle/2017/oct/30/sad-winter-depression-seasonal-affective-disorder

The science of Sad: understanding the causes of ‘winter depression’

The darker days and colder weather can bring with them a feeling of low spirits. So, what makes people susceptible to seasonal affective disorder, and what are the best ways to treat it?

Around 80% of Sad sufferers are women, particularly those in early adulthood. Photograph: Alamy/Guardian Design

For many of us in the UK, the annual ritual of putting the clocks back for daylight saving time can be accompanied by a distinct feeling of winter blues as autumn well and truly beds in. This might be felt as a lack of energy, reduced enjoyment in activities and a need for more sleep than normal. But for around 6% of the UK population, and between 2% and 8% of people in other higher-latitude countries such as Canada, Denmark and Sweden, these symptoms are so severe that sufferers are unable to work or function normally. They suffer from a particular form of major depression, triggered by changes in the seasons, called seasonal affective disorder or Sad.

In addition to depressive episodes, Sad is characterised by various symptoms including chronic oversleeping and extreme carbohydrate cravings that lead to weight gain. As this is the opposite to major depressive disorder where patients suffer from disrupted sleep and loss of appetite, Sad has sometimes been mistakenly thought of as a “lighter” version of depression, but in reality it is simply a different version of the same illness. “People who truly have Sad are just as ill as people with major depressive disorder,” says Brenda McMahon, a psychiatry researcher at the University of Copenhagen. “They will have non-seasonal depressive episodes, but the seasonal trigger is the most common. However it’s important to remember that this condition is a spectrum and there are a lot more people who have what we call sub-syndromal Sad.”

Around 10-15% of the population has sub-syndromal Sad. These individuals struggle through autumn and winter and suffer from many of the same symptoms but they do not have clinical depression. And in the northern hemisphere, as many as one in three of us may suffer from “winter blues” where we feel flat or disinterested in things and regularly fatigued.


One theory for why this condition exists is related to evolution. Around 80% of Sad sufferers are women, particularly those in early adulthood. In older women, the prevalence of Sad goes down and some researchers believe that this pattern is linked to the behavioural cycles of our ancient ancestors. “Because it affects such a large proportion of the population in a mild to moderate form, a lot of people in the field do feel that Sad is a remnant from our past, relating to energy conservation,” says Robert Levitan, a professor at the University of Toronto. “Ten thousand years ago, during the ice age, this biological tendency to slow down during the wintertime was useful, especially for women of reproductive age because pregnancy is very energy-intensive. But now we have a 24-hour society, we’re expected to be active all the time and it’s a nuisance. However, as to why a small proportion of people experience it so severely that it’s completely disabling, we don’t know.”

There are a variety of biological systems thought to be involved, including some of the major neurotransmitter systems in the brain that are associated with motivation, energy and the organisation of our 24-hour circadian rhythms. “We know that dopamine and norepinephrine play critical roles in terms of how we wake up in the morning and how we energise the brain,” Levitan says. One particular hormone, melatonin, which controls our sleep and wake cycles, is thought to be “phase delayed” in people with severe Sad, meaning it is secreted at the wrong times of the day.

Another system of particular interest relates to serotonin, a neurotransmitter that regulates anxiety, happiness and mood. Increasing evidence from various imaging and rodent studies suggests that the serotonin system may be directly modulated by light. Natural sunlight comes in a variety of wavelengths, and it is particularly rich in light at the blue end of the spectrum. When cells in the retina, at the back of our eye, are hit by this blue light, they transmit a signal to a little hub in the brain called the suprachiasmatic nucleus that integrates different sensory inputs, controls our circadian rhythms, and is connected to another hub called the raphe nuclei in the brain stem, which is the origin of all serotonin neurons throughout the brain. When there is less light in the wintertime, this network is not activated enough. In especially susceptible individuals, levels of serotonin in the brain are reduced to such an extent that it increases the likelihood of a depressive episode.

The most popular treatment for Sad is bright-light therapy. Photograph: PR Company Handout

Serotonin may also explain why women are so much more vulnerable to Sad than men. “There’s a close connection between estradiol, the main female sex hormone, and the serotonin transporter,” McMahon says. “We have a good idea that the fluctuating levels of estradiol during certain phases such as puberty or postpartum, can affect the way serotonin is produced.”

Some populations appear to be particularly resilient to Sad, most notably in Iceland. “Icelanders seem to be genetically protected from Sad,” Levitan says. “When they move to locations which have high rates of Sad such as Canada, their rates are much lower than their peers.” One possible reason for this could again be linked to serotonin. There are different variants of a gene that controls how the serotonin transporter behaves, and one particular variant found in resilient individuals actually causes our bodies to increase the production of serotonin during the wintertime.

For those on the Sad spectrum, there are a variety of treatments available, the most popular being bright-light therapy, an artificial means of stimulating the brain’s neurotransmitters. “It’s very important to use a Sad-specific ultraviolet filtered light otherwise it can be dangerous,” says Levitan. “But it can really enable people with Sad to get their day started earlier and avoid oversleeping, which can be very depressogenic. It’s probably effective in providing symptom relief in around 80% of patients, particularly those with the carbohydrate craving, oversleeping symptoms. For the most severe patients, though, it may have to be combined with antidepressant therapy.”

However, psychiatrists urge patients to steer clear of some of the many alternative therapies on the market. “There’s a range of new technologies people are developing, such as an earplug that is supposed to radiate light into your brain, but there’s no science to prove that this actually works,” McMahon says. “However, there are some good additions to conventional light therapy and antidepressants, such as tryptophan, an amino acid that gets converted to serotonin in our bodies, which can be given as an add-on treatment.”

http://www.independent.co.uk/life-style/gadgets-and-tech/news/google-activity-recognition-android-permissions-shazam-soundhound-apps-privacy-a8028136.html

Your handset can tell if you’re standing up, or if you’ve just lifted your phone off a desk

Your phone can reveal all of your physical activities to Google and the apps you use.

The sensors inside it can monitor, understand and disclose your real-world movements, based on what’s happening to the phone itself.

It can tell, for instance, if you’re standing up, or if you’ve just lifted your phone off a desk, or if you’ve started walking.

An Android permission called “Activity Recognition”, which was discussed on Reddit and highlighted by DuckDuckGo last week, makes it much easier for developers to work out what you’re doing at any one time.

Shazam and SoundHound request the permission, but it isn’t completely clear why.

Though Activity Recognition isn’t new, the reaction to the Reddit and DuckDuckGo posts suggests a lot of users are unaware of it.

“The Activity Recognition API is built on top of the sensors available in a device,” says Google.

“Device sensors provide insights into what users are currently doing. However, with dozens of signals from multiple sensors and slight variations in how people do things, detecting what users are doing is not easy.

“The Activity Recognition API automatically detects activities by periodically reading short bursts of sensor data and processing them using machine learning models.”

Activity Recognition can tell developers when your phone is: in a vehicle, such as a car; on a bicycle; not moving; being tilted, due to its angle “relative to gravity” changing; on a user who’s walking; or on a user who’s running.

It can even tell when you’re doing more than one thing at once, such as walking while being on a bus.

The API automatically gives its findings a likelihood rating out of 100. The higher the number, the more confident it is that you’re actually doing what it believes you’re doing.

This information is fed to the apps you’ve granted the Activity Recognition permission to.

“A common use case is that an application wants to monitor activities in the background and perform an action when a specific activity is detected,” says Google.

For instance, an app can automatically start monitoring your heartbeat when you start running, or switch to car mode when you start driving.

Though it can prove useful, it also sounds somewhat creepy.

The fact that Google buries it in the “Other” category of permissions and doesn’t let you deny or disable it doesn’t help matters.

What’s more, the company has made it difficult to find out which apps ask for the permission.

Right now, the only way to find out is by checking out each of your apps’ permissions one-by-one, by going to Settings, Apps, tapping an app, hitting the menu button and selecting All Permissions. It’s a slow and laborious process.
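
If you have a computer with adb to hand, that audit can be scripted instead. A hedged sketch, assuming adb is installed, USB debugging is enabled on the phone, and that the Google Play services permission string below is the right one to look for:

    # Print installed packages whose manifest mentions the Activity Recognition
    # permission, by walking `adb shell pm list packages` and `dumpsys package`.
    import subprocess

    PERM = "com.google.android.gms.permission.ACTIVITY_RECOGNITION"

    packages = subprocess.run(
        ["adb", "shell", "pm", "list", "packages"],
        capture_output=True, text=True, check=True,
    ).stdout.splitlines()

    for line in packages:
        pkg = line.split(":", 1)[-1].strip()    # lines look like "package:com.example.app"
        dump = subprocess.run(
            ["adb", "shell", "dumpsys", "package", pkg],
            capture_output=True, text=True,
        ).stdout
        if PERM in dump:
            print(pkg)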

If you’re particularly concerned about Activity Recognition, it’s worth going through the effort and uninstalling any of your apps that request the permission, for peace of mind.