http://gizmodo.com/darpa-wants-to-hack-your-brain-to-make-you-learn-faster-1794667766


If the brain is just a bunch of wires and circuits, it stands to reason that those components can simply be re-wired in order to create a better, smarter us. At least, that’s the theory behind a new project from the military’s secretive DARPA research branch announced on Wednesday, which aims to enhance human cognitive ability by activating what’s known as “synaptic plasticity.”

Recent research has suggested that stimulating certain peripheral nerves—those that relay signals between the brain, the spinal cord and the rest of the body—can enhance a person’s ability to learn, by triggering the release of neurochemicals that reorganize connections in the brain. Through its new Targeted Neuroplasticity Training program, DARPA is funding eight different research efforts that seek to enhance learning by targeting those nerves with electrical stimulation. The end goal is to translate those findings into real-world applications that boost military training regimens—allowing a soldier to, say, soak up a new language in months instead of years. Should DARPA figure out a way to do that, its efforts will likely go on to impact all of us.

“TNT aims to deliver new knowledge of the neural processes that regulate cognitive functions associated with learning,” Doug Weber, the program’s manager, told Gizmodo. In other words, DARPA wants to study the basic biology at work here, and eventually, design neurostimulation devices that exploit our biological wiring to enhance learning.

One DARPA-funded team, at Johns Hopkins University, will focus on speech and hearing. These researchers will be experimenting with vagal nerve stimulation, exploring whether this can accelerate learning a new language. Another team at the University of Florida will study how vagal nerve stimulation impacts perception, executive function, decision-making, and spatial navigation in rodents. Yet another at Arizona State University will stimulate the trigeminal nerve, and study how that impacts visual, sensory and motor functions of military volunteers studying intelligence, surveillance, reconnaissance, marksmanship and decision-making.

Already, there are plenty of products on the market that claim to offer cognitive, psychological, and physical performance enhancement. (Basketball’s Golden State Warriors, for one, are known to rely on brain-zapping for a purported edge in their game.) But there is little understanding of how these devices work—and many scientists suspect they don’t. The aim of the DARPA program is to settle this debate, testing the efficacy of both implanted and non-invasive devices to understand not only whether they actually work, but if so, how.

“We are starting with a bit of knowledge about how the peripheral nerves are wired, but relatively little knowledge about the effects of neurostimulation on their function,” Weber said.

If, it turns out, there is a sufficient link between neurostimulation and improvements in learning, the second phase of the program will work to design devices that enhance training in foreign language learning, image analysis, and spatial navigation tasks.

“Most computer analogies for the brain idea are bad,” said Michael Kilgard, the lead researcher on the University of Texas’ project. “But there really are wires from point A to point B. When you cut those wires you lose function. But after they’re cut they can make new connections. We have technologies now that allow us to see those connections.”

Kilgard’s work has, until recently, focused on repairing damaged circuits. Areas of research like deep brain stimulation (which involves implanting a chip deep in the brain) and transcranial direct current stimulation (which changes brain chemistry using non-invasive electrical stimulation) have seen some success in using electricity to correct faulty wiring to, say, help treat mental health conditions. Kilgard has had success using targeted plasticity therapy to treat PTSD.

“Our idea was, after brain injury how do you get better? What you really need is to rewire circuits,” he said. “This is the next logical step. If you can help recover function you’ve lost, can you increase the rate at which you learn new things?”

Eventually, he envisions a device that, for a few hundred bucks, will non-invasively allow anyone to pick up a language at an accelerated pace. Under the current grant, he hopes to have a (likely much more expensive) version of that device ready for FDA approval within five years.

But there are plenty of hurdles. For one, the premise that any of this will work at all is still little more than an educated guess.

“We are leveraging state-of-the-art tools for probing the molecular and cellular processes underlying these functions, but even the most advanced instrumentation is limited,” said Weber.

What’s more, the very premise of the research is likely to stoke fears that DARPA is creating a race of cognitively enhanced super soldiers. The agency has several other brain projects in the works, which seek to use implanted chips to treat mental illness as well as to restore memories and movement to battle-wounded soldiers. For now, the aim of the program is just to give our brains a little boost—allowing us to learn a new skill, say, 30% faster than we would naturally. But even the use of brain stimulants like Ritalin or Modafinil, readily available on today’s college campuses, is controversial.

Critics argue that such enhancement defies human nature. Supporters say that seeking out enhancement is human nature. The new research makes the debate over how far we as a society are willing to take human cognitive enhancement all the more urgent.

“Issues related to safety, equal access to the technology, and freedom of choice are often the earliest topics considered when new, breakthrough technologies are created,” Weber said. “It’s important that we carefully consider the broader impact of this work.”

http://mobilesyrup.com/2017/04/26/samsung-weemogee/

Samsung’s new Wemogee app aims to help people living with aphasia

http://www.tomshardware.com/news/ibm-nvidia-gpu-supercomputing-record,34239.html

IBM, Stone Ridge Technology, Nvidia Break Supercomputing Record

It’s no secret that GPUs are inherently better suited than CPUs to complex parallel workloads. IBM’s latest collaborative effort with Stone Ridge Technology and Nvidia shone a light on the efficiency and performance gains for the reservoir simulations used in oil and gas exploration. The oil and gas exploration industry operates on the cutting edge of computing due to the massive data sets and complex nature of its simulations, so it is fairly common for companies to conduct technology demonstrations using these taxing workloads.

The effort began with 30 IBM Power S822LC for HPC (Minsky) servers outfitted with 60 IBM POWER8 processors (two per server) and 120 Nvidia Tesla P100 GPUs (four per server). The servers employed Nvidia’s NVLink technology for both CPU-to-GPU and peer-to-peer GPU communication and utilized InfiniBand EDR networking.

The companies conducted a 100-billion-cell engineering simulation on GPUs using Stone Ridge Technology’s ultra-scalable ECHELON petroleum reservoir simulator. The simulation modeled 45 years of oil production in a mere 92 minutes, easily breaking the previous record of 20 hours. The time savings are impressive, but they pale in comparison to the hardware savings.

ExxonMobil set the previous 100-billion-cell record in January 2017. ExxonMobil’s effort was quite impressive–the company employed 716,800 processor cores spread among 22,400 nodes on NCSA’s Blue Waters supercomputer (Cray XE6). That setup requires half a football field of floor space, whereas the IBM-powered systems fit within two racks and occupy roughly half of a ping-pong table. The GPU-powered servers also required only one-tenth the power of the Cray machine.

The entire IBM cluster weighs in at roughly $1.25 million to $2 million depending upon the memory, networking and storage configuration, whereas the Exxon system would cost in the hundreds of millions of dollars. As such, IBM claims it offers faster performance in this particular simulation at 1/100th the cost.

Such massive simulations are rarely run in the field, but the demonstration does highlight the performance advantages of using GPUs instead of CPUs for this class of simulation. Memory bandwidth is a limiting factor in many simulations, so the P100’s memory throughput is a key advantage over the Xeon processors used in the ExxonMobil tests. From Stone Ridge Technology’s blog post outlining its achievement:

“On a chip to chip comparison between the state of the art NVIDIA P100 and the state of the art Intel Xeon, the P100 deliver 9 times more memory bandwidth. Not only that, but each IBM Minsky node includes 4 P100’s to deliver a whopping 2.88 TB/s of bandwidth that can address models up to 32 million cells. By comparison two Xeon’s in a standard server node offer about 160GB/s (See Figure 3). To just match the memory bandwidth of a single IBM Minsky GPU node one would need 18 standard Intel CPU nodes. The two Xeon chips in each node would likely have at least 10 cores each and thus the system would have about 360 cores.”
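
To make the arithmetic in that quote concrete, here is a small back-of-the-envelope Python sketch (not from IBM or Stone Ridge) that reproduces the comparison; the per-P100 figure is implied by the 2.88 TB/s node total, and the core count follows the quote’s assumption of two 10-core Xeons per standard node:

# Back-of-the-envelope check of the bandwidth figures quoted above.
# Per-chip numbers are taken or inferred from the Stone Ridge quote,
# not from vendor spec sheets, so treat them as illustrative.

MINSKY_NODE_BW_GBS = 2880              # 2.88 TB/s per 4-GPU Minsky node (from the quote)
P100_BW_GBS = MINSKY_NODE_BW_GBS / 4   # implied ~720 GB/s per P100
XEON_NODE_BW_GBS = 160                 # two Xeons in a standard server node (from the quote)
CORES_PER_XEON = 10                    # "at least 10 cores each" per the quote

xeon_nodes_to_match = MINSKY_NODE_BW_GBS / XEON_NODE_BW_GBS       # = 18 nodes
xeon_cores_to_match = xeon_nodes_to_match * 2 * CORES_PER_XEON    # = 360 cores

print(f"P100 bandwidth (implied): {P100_BW_GBS:.0f} GB/s")
print(f"Xeon nodes needed to match one Minsky node: {xeon_nodes_to_match:.0f}")
print(f"Approximate Xeon cores needed: {xeon_cores_to_match:.0f}")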

Xeons are definitely at a memory throughput disadvantage, but it would be interesting to see how a Knights Landing-equipped (KNL) cluster would stack up. With up to 500 GB/s of throughput from its on-package MCDRAM (developed with Micron), KNL easily beats the Xeons’ memory throughput. Intel also claims KNL offers up to 8x the performance per watt of Nvidia’s GPUs, and though that comparison is against previous-generation Nvidia products, a gap that large likely wasn’t closed in a single generation.

IBM feels that its Power Systems paired with Nvidia GPUs can help other fields, such as computational fluid dynamics, structural mechanics, and climate modeling, among others, to reduce the amount of hardware and cost required for complex simulations. The massive nature of this simulation is hardly realistic for most oil and gas companies, but Stone Ridge Technology has also conducted 32-million-cell simulations on a single Minsky node, which might bring an impressive mix of cost and performance to bear for smaller operators.

https://techcrunch.com/2017/04/26/heres-hoping-this-google-pixel-low-light-photography-experiment-gets-real/

Here’s hoping this Google Pixel low-light photography experiment gets real

Google’s Pixel already boasts a formidable camera, among the best in smartphone photography. Google Daydream engineer Florian Kainz revealed how much more incredible it could be, specifically in the area of night photography, when he set himself the task of reproducing DSLR-style nighttime picture quality using both a Pixel and a Nexus 6P.

The experiment, detailed on the Google Research blog, involved pushing the Android smartphone cameras to their limits, making use of their maximum exposure time, 64-frame burst shooting, mixing exposed shots of the target scene with pure black frames shot with the camera’s lens taped up completely, and a whole lot of desktop post-processing work.

The relatively easy part, at least in terms of repeat use, was coding a simple manual photography app for Android that let him set the exposure length, focal distance and ISO required to get the raw images he needed to process with desktop imaging software. The most difficult part might’ve been aligning the various stacked captures to compensate for stellar motion, the apparent movement of stars across the night sky that produces the star trails you see in unedited long-exposure nighttime photography.
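
Kainz hasn’t published his pipeline as code, but the core of the approach described above, averaging a burst of long exposures and subtracting an equally long stack of “black” frames to cancel fixed-pattern sensor noise, can be sketched in a few lines of Python. The rawpy decoder, the file names, and the simple mean stacking are illustrative assumptions, not Google’s implementation:

import glob

import numpy as np
import rawpy  # third-party raw decoder; an assumption, not part of Kainz's published workflow


def load_raw(path):
    """Decode a raw frame to a linear float image (simplified for illustration)."""
    with rawpy.imread(path) as raw:
        return raw.postprocess(gamma=(1, 1), no_auto_bright=True,
                               output_bps=16).astype(np.float64)


# Average the burst of long exposures of the scene (file names are hypothetical)...
scene = np.mean([load_raw(p) for p in sorted(glob.glob("scene_*.dng"))], axis=0)

# ...and the matching "black frame" burst shot with the lens covered, which
# captures only sensor noise such as hot pixels and fixed-pattern read noise.
dark = np.mean([load_raw(p) for p in sorted(glob.glob("dark_*.dng"))], axis=0)

# Subtracting the dark stack cancels the fixed-pattern noise. In the real
# workflow the scene frames would also be aligned (shifted/warped) before
# averaging to compensate for stellar motion.
stacked = np.clip(scene - dark, 0, None)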

The resulting images Kainz produced, from both a Nexus 6P and the newer Pixel, have next to no noise, incredible night-sky tone, clear and sharp star points, detailed foregrounds, and more. Even using only starlight, the process can produce print-worthy results.

So why isn’t this ready to go for general consumer use? Well, because if you’re not already deeply in love with painstaking desktop image processing, it’s going to be well beyond the average user’s patience or skill level. But Kainz expresses hope at the end of the Research blog post that software “should be able to process the images internally” on a phone, once developed, and that the only ingredient required thereafter from a user’s perspective would be setting up their smartphone on a tripod – a relatively low bar these days, even for casual mobile photographers.

Seems like a lock for something Android engineers could focus on to make the next Pixel or Android release more enticing to users, if you ask me.

https://9to5mac.com/2017/04/26/apple-watch-nikelab-hands-on-release/

Apple Watch NikeLab featured in hands-on ahead of tomorrow’s release

Last week Nike unveiled its latest collaboration with Apple: Apple Watch NikeLab. Now the limited edition Apple Watch Series 2 has been featured in a Highsnobiety hands-on shoot ahead of its upcoming release.

Just like Apple Watch Nike+, Apple Watch NikeLab is a co-branded Apple Watch Series 2 that includes the perforated Nike Sport band and special Nike watch faces, and has the Nike+ Run Club app pre-installed.

Apple Watch NikeLab stands out from the Apple Watch Nike+ collection because of its neutral-tone colored Nike Sport band paired with the space gray Apple Watch hardware. The limited edition NikeLab Sport band includes a custom engraving on the inner pin as well.

Apple Watch NikeLab officially launches on April 27 through NikeLab stores, Nike.com, and the Isetan Shinjuku Apple Watch Store in Japan (which will open in about 8 hours as of this writing). Highsnobiety appears to have received a model ahead of time to create these high quality hands-on photos:

Previously, the only images we’ve seen have been Apple/Nike stock photos showing the unique Light Bone/Black Nike Sport band paired with the space gray Apple Watch Series 2. The Light Bone/Black Nike Sport band matches the new Nike watch face color introduced in the recent watchOS 3.2 update.

Pricing details have not been shared yet, but Apple Watch Nike+ goes for the same price as Apple Watch Series 2: $369 for the 38mm model and $399 for the 42mm. And even though Apple and Nike recently launched standalone versions of the previously exclusive Nike Sport bands, it’s not likely that we’ll see this color option appear anytime soon.

Exactly how long Apple Watch NikeLab will be available is also unknown at this point, although the limited edition nature of the color combination suggests it may be temporary.

Check out the full gallery on Highsnobiety’s website.

http://www.androidpolice.com/2017/04/26/google-app-7-1-beta-is-testing-a-search-bar-in-the-notification-panel-and-may-begin-showing-pollen-counts-apk-teardown/

Google App 7.1 beta is testing a search bar in the notification panel and may begin showing pollen counts [APK Teardown]

The Google app is responsible for a pretty staggering number of features, ranging from the standard search interface and a voice assistant to a fully functional launcher. The latest update coming through the beta channel doesn’t seem to have any new features going live quite yet, but there are a lot of changes to text and other resources, including the removal of all references to Bisto from the last teardown. Most of them are minor tweaks to features we’ve covered in the past, but a couple point to things we haven’t explored yet. Users can look forward to pollen counts with their local weather and possibly a search bar located right in the notification shade.

Teardown

Disclaimer: Teardowns are based on evidence found inside of apks (Android application packages) and are necessarily speculative and usually based on incomplete information. It’s possible that the guesses made here are totally and completely wrong. Even when predictions are correct, there is always a chance that plans could change or may be canceled entirely. Much like rumors, nothing is certain until it’s officially announced and released. The features discussed below are probably not live yet, or may only be live for a small percentage of users. Unless stated otherwise, don’t expect to see these features if you install the apk.

Search bar in the notification panel

It looks like Google is testing a new location for the quick search bar. A new setting in the apk describes placing a “search bar in the notification panel.” The setting doesn’t appear to be live yet, even on the Android O developer preview, which isn’t surprising since it’s marked as a beta.

Before you assume this is going to be just another quick access tile like the toggles for Bluetooth and Wi-Fi, let’s be clear that the phrasing specifically says the search bar is in the notification panel and the word “tile” is never used. It could be worded incorrectly, which sometimes happens with unofficial features, but this sounds too specific to be a mistake.

CODE

<string name="search_notification_setting_title">Search bar (beta)</string>
<string name="search_notification_setting_summary">Show quick search bar in the notification panel</string>

<string name="accessibility_notification_google_search_button">Google Search</string>
<string name="accessibility_notification_settings_button">Notification Settings</string>
<string name="accessibility_notification_voice_search_button">Voice Search</string>

res/xml/search_notification_preferences.xml
<PreferenceScreen
    xmlns:android="http://schemas.android.com/apk/res/android">
    <SwitchPreference android:persistent="true" android:title="@string/search_notification_setting_title" android:key="persistent_search_notification_enabled" android:summary="@string/search_notification_setting_summary" android:defaultValue="false" />
</PreferenceScreen>

Since the Android OS only allows very limited access to the UI of a notification, this probably won’t be implemented as a regular notification. Contrary to the point I made above, it obviously still could be a simple quick tile that displays a search bar floating on top of whatever screen you’re currently looking at.

I’m going to venture another theory or two, although I admit this is crossing deeper into guesswork. This clue might suggest Google is planning to modify the notification panel itself to include a space for the search bar, possibly inserting it between the quick settings tiles and the actual notifications. I’m not sure many people would want a search bar in that spot, but it’s not implausible.


We can take that theory one step further, probably crossing into the realm of the unlikely, to raise the possibility Google could be planning to open up the notification panel to app developers even further and allow full-width widgets to be placed there. There are certain things that can’t be done with either quick settings tiles or regular notifications, so there is at least some justification for adding this capability, but it still feels unlikely. We might find out more about this as Android O gets closer to launch.

Regardless of the specific implementation method, we can return to the original question of whether or not most users would want a search bar in the notification panel. Would it be redundant when there’s already a search bar on the launcher and a long-press on the Home button opens Assistant, or would this just give us another option when those don’t fit our workflow? Since it’s a beta, we may not hear much more about this for a while.

Pollen counts

In my neck of the woods, as in the rest of the northern hemisphere, we’re in the middle of spring. That means we’re enjoying warmer temperatures, sunnier skies, and longer days. For some, it’s also time for unending fits of sneezing, itchy eyes, and all of the other symptoms of hay fever.

I don’t personally suffer from this, but plenty of people do, and keeping tabs on the pollen count can be helpful for making plans to keep the allergies in check. Judging by a new image in the Google app, this information is about to get a little more visible.

ic_pollen_count

Titled ic_pollen_count.png, it’s pretty clear what this flower image (that looks like it’s sweating) will be used for. I expect it to show up with the standard weather predictions, much like it does on some other trackers and news outlets.

It’s a simple feature, but one that will surely be invaluable to millions of users around the world. Seeing as we’re already in the middle of spring, there’s no better time for this information to go live, and it probably could at any time. Hopefully it’s sooner rather than later.

Download

The APK is signed by Google and upgrades your existing app. The cryptographic signature guarantees that the file is safe to install and was not tampered with in any way. Rather than wait for Google to push this download to your devices, which can take days, download and install it just like any other APK.

Version: 7.1.21 beta


http://www.kurzweilai.net/quadriplegia-patient-uses-brain-computer-interface-to-move-his-arm-by-just-thinking

Quadriplegia patient uses brain-computer interface to move his arm by just thinking

New BrainGate design replaces robot arm with muscle-stimulating system
April 26, 2017

Bill Kochevar, who was paralyzed below his shoulders in a bicycling accident eight years ago, is the first person with quadriplegia to have arm and hand movements restored without robot help (credit: Case Western Reserve University/Cleveland FES Center)

A research team led by Case Western Reserve University has developed the first implanted brain-recording and muscle-stimulating system to restore arm and hand movements for quadriplegic patients.*

In a proof-of-concept experiment, the system combined a brain-computer interface (recording electrodes implanted under Kochevar’s skull) with a functional electrical stimulation (FES) system that activated his arm and hand, reconnecting his brain to his paralyzed muscles.

The research was part of the ongoing BrainGate2 pilot clinical trial being conducted by a consortium of academic and other institutions to assess the safety and feasibility of the implanted brain-computer interface (BCI) system in people with paralysis. Previous BrainGate designs required a robot arm.

In 2012 research, Jan Scheuermann, who has quadriplegia, was able to feed herself using a brain-machine interface and a computer-driven robot arm (credit: UPMC)

Kochevar’s eight years of muscle atrophy first required rehabilitation. The researchers exercised Kochevar’s arm and hand with cyclical electrical stimulation patterns. Over 45 weeks, his strength, range of motion, and endurance improved. As he practiced movements, the researchers adjusted stimulation patterns to further his abilities.

To prepare him to use his arm again, Kochevar learned how to use his own brain signals to move a virtual-reality arm on a computer screen. The team then implanted the FES system’s 36 electrodes that animate muscles in the upper and lower arm, allowing him to move his actual arm.

Kochevar can now move each joint in his right arm individually. Or, when he simply thinks about a task such as feeding himself or getting a drink, the muscles activate in a coordinated fashion.

Neural activity (generated when Kochevar imagines movement of his arm and hand) is recorded from two 96-channel microelectrode arrays implanted in the motor cortex, on the surface of the brain. The implanted brain-computer interface translates the recorded brain signals into specific command signals that determine the amount of stimulation to be applied to each functional electrical stimulation (FES) electrode in the hand, wrist, arm, elbow and shoulder, and to a mobile arm support. (credit: A Bolu Ajiboye et al./The Lancet)
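
As a loose illustration of the data flow in that caption (per-channel firing rates in, per-electrode stimulation levels out), here is a minimal Python sketch. The linear mapping, the random weights, and the output scaling are placeholders for illustration only, not the decoder the BrainGate team actually used:

import numpy as np

N_CHANNELS = 192        # two 96-channel microelectrode arrays in motor cortex
N_FES_ELECTRODES = 36   # implanted stimulation electrodes (hand, wrist, arm, elbow, shoulder)

# Placeholder linear mapping from neural features to stimulation commands.
# The real BrainGate decoder is calibrated per session and also drives the
# mobile arm support; this only illustrates the shape of the data flow.
rng = np.random.default_rng(0)
W = rng.normal(size=(N_FES_ELECTRODES, N_CHANNELS)) * 0.01


def decode(firing_rates):
    """Map per-channel firing rates (spikes/s) to stimulation levels in [0, 1]."""
    return np.clip(W @ firing_rates, 0.0, 1.0)


# One "frame" of imagined-movement activity -> one stimulation pattern.
rates = rng.poisson(lam=20.0, size=N_CHANNELS).astype(float)
stim_levels = decode(rates)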

“Our research is at an early stage, but we believe that this neuro-prosthesis could offer individuals with paralysis the possibility of regaining arm and hand functions to perform day-to-day activities, offering them greater independence,” said lead author Dr Bolu Ajiboye, Case Western Reserve University. “So far, it has helped a man with tetraplegia to reach and grasp, meaning he could feed himself and drink. With further development, we believe the technology could give more accurate control, allowing a wider range of actions, which could begin to transform the lives of people living with paralysis.”

Work is underway to make the brain implant wireless, and the investigators are improving decoding and stimulation patterns needed to make movements more precise. Fully implantable FES systems have already been developed and are also being tested in separate clinical research.

A study of the work was published in The Lancet on March 28, 2017.

Writing in a linked Comment to The Lancet, Steve Perlmutter, M.D., University of Washington, said: “The goal is futuristic: a paralysed individual thinks about moving her arm as if her brain and muscles were not disconnected, and implanted technology seamlessly executes the desired movement… This study is groundbreaking as the first report of a person executing functional, multi-joint movements of a paralysed limb with a motor neuro-prosthesis. However, this treatment is not nearly ready for use outside the lab. The movements were rough and slow and required continuous visual feedback, as is the case for most available brain-machine interfaces, and had restricted range due to the use of a motorised device to assist shoulder movements… Thus, the study is a proof-of-principle demonstration of what is possible, rather than a fundamental advance in neuro-prosthetic concepts or technology. But it is an exciting demonstration nonetheless, and the future of motor neuro-prosthetics to overcome paralysis is brighter.”

* The study was funded by the US National Institutes of Health and the US Department of Veterans Affairs. It was conducted by scientists from Case Western Reserve University, Department of Veterans Affairs Medical Center, University Hospitals Cleveland Medical Center, MetroHealth Medical Center, Brown University, Massachusetts General Hospital, Harvard Medical School, Wyss Center for Bio and Neuroengineering. The investigational BrainGate technology was initially developed in the Brown University laboratory of John Donoghue, now the founding director of the Wyss Center for Bio and Neuroengineering in Geneva, Switzerland. The implanted recording electrodes are known as the Utah array, originally designed by Richard Normann, Emeritus Distinguished Professor of Bioengineering at the University of Utah. The report in Lancet is the result of a long-running collaboration between Kirsch, Ajiboye and the multi-institutional BrainGate consortium. Leigh Hochberg, a neurologist and neuroengineer at Massachusetts General Hospital, Brown University and the VA RR&D Center for Neurorestoration and Neurotechnology in Providence, Rhode Island, directs the pilot clinical trial of the BrainGate system and is a study co-author.


Case | Man with quadriplegia employs injury bridging technologies to move again – just by thinking


Abstract of Restoration of reaching and grasping movements through brain-controlled muscle stimulation in a person with tetraplegia: a proof-of-concept demonstration

Background: People with chronic tetraplegia, due to high-cervical spinal cord injury, can regain limb movements through coordinated electrical stimulation of peripheral muscles and nerves, known as functional electrical stimulation (FES). Users typically command FES systems through other preserved, but unrelated and limited in number, volitional movements (eg, facial muscle activity, head movements, shoulder shrugs). We report the findings of an individual with traumatic high-cervical spinal cord injury who coordinated reaching and grasping movements using his own paralysed arm and hand, reanimated through implanted FES, and commanded using his own cortical signals through an intracortical brain–computer interface (iBCI).

Methods: We recruited a participant into the BrainGate2 clinical trial, an ongoing study that obtains safety information regarding an intracortical neural interface device, and investigates the feasibility of people with tetraplegia controlling assistive devices using their cortical signals. Surgical procedures were performed at University Hospitals Cleveland Medical Center (Cleveland, OH, USA). Study procedures and data analyses were performed at Case Western Reserve University (Cleveland, OH, USA) and the US Department of Veterans Affairs, Louis Stokes Cleveland Veterans Affairs Medical Center (Cleveland, OH, USA). The study participant was a 53-year-old man with a spinal cord injury (cervical level 4, American Spinal Injury Association Impairment Scale category A). He received two intracortical microelectrode arrays in the hand area of his motor cortex, and 4 months and 9 months later received a total of 36 implanted percutaneous electrodes in his right upper and lower arm to electrically stimulate his hand, elbow, and shoulder muscles. The participant used a motorised mobile arm support for gravitational assistance and to provide humeral abduction and adduction under cortical control. We assessed the participant’s ability to cortically command his paralysed arm to perform simple single-joint arm and hand movements and functionally meaningful multi-joint movements. We compared iBCI control of his paralysed arm with that of a virtual three-dimensional arm. This study is registered with ClinicalTrials.gov, number NCT00912041.

Findings: The intracortical implant occurred on Dec 1, 2014, and we are continuing to study the participant. The last session included in this report was Nov 7, 2016. The point-to-point target acquisition sessions began on Oct 8, 2015 (311 days after implant). The participant successfully cortically commanded single-joint and coordinated multi-joint arm movements for point-to-point target acquisitions (80–100% accuracy), using first a virtual arm and second his own arm animated by FES. Using his paralysed arm, the participant volitionally performed self-paced reaches to drink a mug of coffee (successfully completing 11 of 12 attempts within a single session 463 days after implant) and feed himself (717 days after implant).

Interpretation: To our knowledge, this is the first report of a combined implanted FES+iBCI neuroprosthesis for restoring both reaching and grasping movements to people with chronic tetraplegia due to spinal cord injury, and represents a major advance, with a clear translational path, for clinically viable neuroprostheses for restoration of reaching and grasping after paralysis.

Funding: National Institutes of Health, Department of Veterans Affairs.

https://www.raspberrypi.org/blog/support-raspberry-jam-community/

SUPPORTING AND GROWING THE RASPBERRY JAM COMMUNITY

For almost five years, Raspberry Jams have created opportunities to welcome new people to the Raspberry Pi community, as well as providing a support network for people of all ages in digital making. All around the world, like-minded people meet up to discuss and share their latest projects, give workshops, and chat about all things Pi. Today, we are making it easier than ever to set up your own Raspberry Jam, thanks to a new Jam Guidebook, branding pack, and starter kit.

Raspberry Jam logo over world map

We think Jams provide lots of great learning opportunities and we’d like to see one in every community. We’re aware of Jams in 43 countries: most recently, we’ve seen new Jams start in Thailand, Trinidad and Tobago, and Honduras! The community team has been working on a plan to support and grow the amazing community of Jam makers around the world. Now it’s time to share the fantastic resources we have produced with you.

THE RASPBERRY JAM GUIDEBOOK

One of the things we’ve been working on is a comprehensive Raspberry Jam Guidebook to help people set up their Jam. It’s packed full of advice gathered from the Raspberry Pi community, showing the many different types of Jam and how you can organise your own. It covers everything from promoting and structuring your Jam to managing finances: we’re sure you’ll find it useful. Download it now!

Image of Raspberry Jam Guidebook

BRANDING PACK

One of the things many Jam organisers told us they needed was a set of assets to help with advertising. With that in mind, we’ve created a new branding pack for Jam organisers to use in their promotional materials. There’s a new Raspberry Jam logo, a set of poster templates, a set of graphical assets, and more. Download it now!

STARTER KITS

Finally, we’ve put together a Raspberry Jam starter kit containing stickers, flyers, printed worksheets, and lots more goodies to help people run their first Jam. Once you’ve submitted your first event to our Jam map, you can apply for your starter kit. Existing Jams won’t miss out either: they can apply for a kit when they submit their next event.

Image of Raspberry Jam starter kit contents

FIND A JAM NEAR YOU!

Take a look at the Jam map and see if there’s an event coming up near you. If you have kids, Jams can be a brilliant way to get them started with coding and making.

CAN’T FIND A LOCAL JAM? START ONE!

If you can’t find a Jam near you, you can start your own. You don’t have to organise it by yourself. Try to find some other people who would also like a Jam to go to, and get together with them. Work out where you could host your Jam and what form you’d like it to take. It’s OK to start small: just get some people together and see what happens. It’s worth looking at the Jam map to see if any Jams have happened nearby: just check the ‘Past Events’ box.

We have a Raspberry Jam Slack team where you can get help from other Jam organisers. Feel free to get in touch if you would like to join: just email jam@raspberrypi.org and we’ll get back to you. You can also contact us if you need further support in general, or if you have feedback on the resources.

THANKS

Many thanks to everyone who contributed to the guidebook and provided insights in the Jam survey. Thanks, too, to all Jam makers and volunteers around the world who do great work providing opportunities for people everywhere!

http://www.ctvnews.ca/health/aerobic-exercise-resistance-training-tai-chi-can-benefit-brain-power-in-the-over-50s-1.3383925

Aerobic exercise, resistance training, tai chi can benefit brain power in the over 50s


A new review has found that a combination of both aerobic and resistance exercise can significantly boost brain power in the over 50s.

Carried out by researchers from the University of Canberra, Australia, the new research is the most comprehensive review of the available evidence to date, with the team looking at 39 studies published up to the end of 2016.

The team analyzed the effect of various types, intensities, and durations of exercise on the brain health of the over 50s, including aerobic exercise, resistance training such as weights, multi-component exercise that combines both aerobic and resistance training, tai chi and yoga.

Cognitive abilities assessed in the review included overall brain capacity, attention, executive function (the mental processes that help achieve goals), memory, and working memory (the short-term holding and processing of information).

The results showed that aerobic exercise significantly improved cognitive abilities, with resistance training having a significant effect on executive function, memory, and working memory.

For those wondering how much they need to do, the team found that a session lasting between 45 and 60 minutes, of moderate to vigorous intensity, and of any frequency, had a positive effect on cognitive function.

In addition, these positive effects were seen no matter what the current state of the participant’s brain health.

The team also found that tai chi helped improve cognitive abilities, supporting previous findings. The team did point out, however, that this analysis was based on just a few studies and the findings would need to be confirmed in a larger study, although the results do suggest that exercises such as tai chi could provide benefits for those who are unable to do more intense forms of physical activity.

The team now believe that the evidence from their review is strong enough to recommend prescribing both aerobic and resistance exercises to improve brain health in the over 50s.

https://www.nature.com/news/dog-family-tree-reveals-hidden-history-of-canine-diversity-1.21885

Dog family tree reveals hidden history of canine diversity

Genetic map showing how dog breeds are related provides a wealth of information about their origins.

25 April 2017


What a tangled web humans have woven for domestic dog breeds.

A new family tree of dogs containing more than 160 breeds reveals the hidden history of man’s best friend, and even shows how studying canine genomes might help with research into human disease.

In a study published on 25 April in Cell Reports, scientists examined the genomes of 1,346 dogs to create one of the most diverse maps produced so far tracing the relationship between breeds. The map shows the types of dog that people crossed to create modern breeds and reveals that canines bred to perform similar functions, such as working and herding dogs, don’t necessarily share the same origins. The analysis even hints at an ancient type of dog that could have come over to the Americas with people thousands of years before Christopher Columbus arrived in the New World.

The new work could come as a surprise to owners and breeders who are familiar with how dogs are grouped into categories. “You would think that all working dogs or all herding dogs are related, but that isn’t the case,” says Heidi Parker, a biologist at the US National Institutes of Health (NIH) in Bethesda, Maryland, and a study author.

When geneticists tried to map out herding-dog lineages in the past, they couldn’t do so accurately. Parker and Elaine Ostrander, also a biologist at the NIH and a study author, say that this was because herding dogs emerged through selective breeding at multiple times and in many different places.

“In retrospect, that makes sense,” says Ostrander. “What qualities you’d want in a dog that herds bison are different from mountain goats, which are different from sheep, and so on.”

Coming to America

Most of the breeds in the study arose from dog groups that originated in Europe and Asia. But domestic dogs came to the Americas thousands of years ago, when people crossed the Bering land bridge linking Alaska and Siberia. These New World dogs later disappeared when European and Asian dogs arrived in the Americas. Researchers have looked for the genetic legacy of these ancient canines in the DNA of modern American breeds, but have found little evidence until now.

The way that two South American breeds, the Peruvian hairless dog and the xoloitzcuintli, clustered together on the family tree suggested to Ostrander and Parker that those animals could share genes not found in any of the other breeds in their analysis. Parker thinks that those genes could have come from dogs that were present in the Americas before Columbus’s arrival.

“I think our view of the formation of modern dog breeds has historically been one-dimensional,” says Bob Wayne, an evolutionary biologist at the University of California, Los Angeles. “We didn’t consider that the process has a deep historical legacy.”

That extends to what was probably the first period of domestication for canines in hunter-gatherer times. Ostrander and Parker think that dog breeds underwent two major periods of diversification. Thousands of years ago, dogs were selected for their skills, whereas a few hundred years ago, the animals were bred for physical traits.

“You would never be able to find something like this with cows or cats,” says Wayne. “We haven’t done this kind of intense deliberate breeding with anything but dogs.”

Although the latest study can help researchers to better understand the history of the domestic dog, there are several practical reasons for creating a database such as that produced by Ostrander, Parker and their colleagues. One reason is that it can help in diagnosing illnesses in domestic dogs. Another is that it can aid the study of human diseases.

Dogs and people can suffer from similar conditions, such as epilepsy. In humans, there might be hundreds of genes that can influence that illness. However, because dog breeds are relatively genetically isolated, each breed might carry only one or two of the genes involved in epilepsy, says Ostrander. “By studying dogs, we can look at each [gene] individually. It’s much more efficient.”

Nature
doi:10.1038/nature.2017.21885