https://www.iflscience.com/technology/new-brain-implant-allows-paralyzed-patients-to-surf-the-internet-using-their-thoughts/

New Brain Implant Allows Paralyzed Patients To Surf The Internet Using Their Thoughts

TWO OF THE PATIENTS IN THE TRIAL CHATTING WITH EACH OTHER. BRAINGATE COLLABORATION

Brand-new research has shown that paralyzed patients can control an off-the-shelf tablet using chip implants connected to their brains. The brain-computer interface (BCI) allowed subjects to move a cursor and click using nothing more than their thoughts.

This is an important breakthrough. The three patients suffered from tetraplegia, which made them unable to use their limbs. Two had amyotrophic lateral sclerosis (ALS) and the other had a spinal cord injury. Thanks to this particular BCI, they were able to use email, chat, music, and video-streaming apps. They were able to navigate the web and perform tasks such as online shopping with ease. They could even play a virtual piano. The findings are reported in the journal PLOS ONE.

“It was great to see our participants make their way through the tasks we asked them to perform, but the most gratifying and fun part of the study was when they just did what they wanted to do – using the apps that they liked for shopping, watching videos or just chatting with friends,” lead author Dr Paul Nuyujukian, a bioengineer at Stanford, said in a statement. “One of the participants told us at the beginning of the trial that one of the things she really wanted to do was play music again. So to see her play on a digital keyboard was fantastic.”

The work was done by the BrainGate collaboration, which has worked to make BCIs a reality for many years. The chip is the size of a small pill and is placed in the brain’s motor cortex. The sensor registers neural activity linked to intended movements. This information is then decoded and sent to external devices. The same approach by BrainGate and other groups has allowed people to move robotic limbs.

“For years, the BrainGate collaboration has been working to develop the neuroscience and neuroengineering know-how to enable people who have lost motor abilities to control external devices just by thinking about the movement of their own arm or hand,” said Dr Jaimie Henderson, a senior author of the paper and a Stanford University neurosurgeon. “In this study, we’ve harnessed that know-how to restore people’s ability to control the exact same everyday technologies they were using before the onset of their illnesses. It was wonderful to see the participants express themselves or just find a song they want to hear.”

The approach will allow paralyzed people to communicate more easily with their family and friends. It will also enable them to help their caregivers make better decisions regarding their ongoing health issues. This technology could dramatically improve the quality of life for many people.

https://www.vox.com/the-goods/2018/11/26/18112631/cyber-monday-amazon-alexa-google-voice-assistant-war

Smart speakers are everywhere this holiday season, but they’re really a gift for big tech companies

Which voice assistant will get the warmest welcome this year?

Everyone knows that there is, each holiday season, a gift that says, “I know nothing about you, but I love you, I mean, you get it.” For a very long time, this present was an iTunes gift card; Apple is the richest company in the world, and I am pretty sure this is exclusively thanks to the fortune it amassed from iTunes gift cards purchased for nephews and hairdressers in the first decade of the millennium. Prior to iTunes gift cards, the gift was maybe a sweater.

Now, I’m sorry to say, the comparable gift is a smart speaker. We keep purchasing them for each other, buying into the fantasy that Siri or Alexa or Google can make someone’s life easier by scheduling their appointments and managing their time and telling them how to put on makeup or make a butternut squash lasagna. Though, at the moment, reports say that people mostly just use them to listen to music, check the weather, and ask “fun questions.”

As nondescript gifts, smart speakers make a lot of sense: Both Amazon and Google have options that are around $50, there is at least some novelty factor that pokes at adults’ memories of receiving toys, and they are far less rude to give than a Fitbit. Plus, for Amazon and Google in particular, with 64.5 percent and 19.6 percent shares in the category, respectively, the point isn’t really to make money off selling hardware. The point is to beat the others at integrating their services into the lives of the population.

In other words: You’re not gifting an Amazon Echo; you’re gifting a relationship with Alexa. Amazon can later sell that relationship to brands that hope Alexa users will order their products with their voice. You’re not gifting a Google Home; you’re gifting a closer entwining with Google Search and all the strange personalized add-ons to Calendar and Maps.

This expansion of the voice assistant ecosystem is crucial to almost every major tech company, far more so than getting sticker price for devices that look like high-end air fresheners, and if you don’t believe me, please peruse the ridiculously marked-down Black Friday and Cyber Monday deals they are all offering this year.

According to predictions from the Consumer Technology Association, shoppers are set to spend $96.1 billion on tech presents this year, up 3.4 percent from 2017. In the US, 66 percent of adults will buy some sort of gadget as a gift, and the CTA expects that 22 million of these gifts will be smart speakers. Overall, 12 percent of shoppers plan to buy some kind of voice assistant-enabled smart speaker, and 6 percent plan to buy a speaker that also has a screen — like Amazon’s recently updated Echo Show or Google’s just-released Home Hub.

https://venturebeat.com/2018/11/24/before-you-launch-your-machine-learning-model-start-with-an-mvp/

Before you launch your machine learning model, start with an MVP

I’ve seen a lot of failed machine learning models in the course of my work. I’ve worked with a number of organizations to build both models and the teams and culture to support them. And in my experience, the number one reason models fail is because the team failed to create a minimum viable product (MVP).

In fact, skipping the MVP phase of product development is how one legacy corporation ended up dissolving its entire analytics team. The nascent team followed the lead of its manager and chose to use a NoSQL database, despite the fact that no one on the team had NoSQL expertise. The team built a model, then attempted to scale the application. However, because it tried to scale its product using technology that was inappropriate for the use case, it never delivered a product to its customers. Company leadership never saw a return on its investment and concluded that investing in a data initiative was too risky and unpredictable.

If that data team had started with an MVP, not only could it have diagnosed the problem with its model but it could also have switched to the cheaper, more appropriate technology alternative and saved money.

In traditional software development, MVPs are a common part of the “lean” development cycle; they’re a way to explore a market and learn about the challenges related to the product. Machine learning product development, by contrast, is struggling to become a lean discipline because it’s hard to learn quickly and reliably from complex systems.

Yet, for ML teams, building an MVP remains an absolute must. If the weakness in the model originates from bad data quality, all further investments to improve the model will be doomed to failure, no matter the amount of money thrown at the project. Similarly, if the model underperforms because it was not deployed or monitored properly, then any money spent on improving data quality will be wasted. Teams can avoid these pitfalls by first developing an MVP and by learning from failed attempts.

Return on investment in machine learning

Machine learning initiatives require tremendous overhead work, such as the design of new data pipelines, data management frameworks, and data monitoring systems. That overhead work causes an ‘S’-shaped return-on-investment curve, which most tech leaders are not accustomed to. Company leaders who don’t understand that this S-shaped ROI is inherent to machine learning projects could abandon projects prematurely, judging them to be failures.

Unfortunately, premature termination tends to happen in the "building the foundations" phase of the ROI curve, and many organizations never allow their teams to progress far enough into the next phases.
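To make that S-shape concrete, here is a minimal sketch in Python of a logistic ROI curve, with the flat early region standing in for the "building the foundations" phase. All numbers are purely illustrative assumptions, not figures from the article:

```python
# Hypothetical S-shaped ROI curve for an ML initiative; every number here
# is an illustrative assumption, not a figure from the article.
import numpy as np
import matplotlib.pyplot as plt

months = np.linspace(0, 24, 200)               # months of sustained investment
midpoint, steepness, max_return = 12, 0.6, 100
roi = max_return / (1 + np.exp(-steepness * (months - midpoint)))

plt.plot(months, roi)
plt.axvspan(0, 8, alpha=0.15)                  # flat region: "building the foundations"
plt.xlabel("Months of investment")
plt.ylabel("Cumulative return (arbitrary units)")
plt.title("Hypothetical S-shaped ROI of an ML project")
plt.show()
```

Projects abandoned in that shaded flat region never reach the steep part of the curve where the returns actually arrive.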

Failed models offer good lessons

Identifying the weaknesses of any product sooner rather than later can result in hundreds of thousands of dollars in savings. Spotting potential shortcomings ahead of time is even more important with data products, because the root causes for a subpar recommendation system, for instance, could be anything from technology choices to data quality and/or quantity to model performance to integration, and more. To avoid bleeding resources, early diagnosis is key.

For instance, by forgoing the MVP stage of machine learning development, one company deploying a new search algorithm missed the opportunity to identify the poor quality of its data. In the process, it lost customers to the competition and had to not only fix its data collection process but eventually redo every subsequent step, including model development. This resulted in investments in the wrong technologies and six months' worth of work for a team of 10 engineers and data scientists. It also led to the resignation of several key members of that team, each of whom cost about $70,000 to replace.

In another example, a company leaned too heavily on A/B testing to determine the viability of its model. A/B tests are an incredible instrument for probing the market; they are a particularly relevant tool for machine learning products, as those products are often built using theoretical metrics that do not always closely relate to real-life success. However, many companies use A/B tests to identify the weaknesses in their machine learning algorithms. By using A/B tests as a quality assurance (QA) checkpoint, companies miss the opportunity to stop poorly developed models and systems in their tracks before sending a prototype to production. The typical ML prototype takes 12 to 15 engineer-weeks to turn into a real product. Based on that projection, failing to first create an MVP will typically result in a loss of over $50,000 if the final product isn’t successful.
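As a rough sanity check on that figure: at an assumed fully loaded cost of about $4,000 per engineer-week (our assumption, not the author's), 12 to 15 engineer-weeks lands right around the quoted loss:

```python
# Back-of-the-envelope check of the "over $50,000" claim.
# The weekly loaded cost is our assumption, not a number from the article.
weeks_low, weeks_high = 12, 15       # engineer-weeks to productionize a prototype
cost_per_week = 4_000                # assumed fully loaded cost per engineer-week (USD)

print(f"${weeks_low * cost_per_week:,} to ${weeks_high * cost_per_week:,}")
# -> $48,000 to $60,000 sunk if the final product isn't successful
```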

The investment you’re protecting

Personnel costs are just one consideration. Let’s step back and discuss the wider investment in AI that you need to protect by first building an MVP.

Data collection. Data acquisition costs will vary based on the type of product you're building and how frequently you're gathering and updating data. If you are developing an application for an IoT device, you will have to identify which data to keep on the edge vs. which data to store remotely on the cloud, where your team can do a lot of R&D work on it. If you are in the e-commerce business, gathering data will mean adding new front-end instrumentation to your website, which will unquestionably slow down the response time and degrade the overall user experience, potentially costing you customers.

Data pipeline building. The creation of pipelines to transfer data is fortunately a one-time initiative, but it is also a costly and time-consuming one.

Data storage. The consensus for a while now has been that data storage is being progressively commoditized. However, there are more and more indications that Moore’s Law just isn’t enough anymore to make up for the growth rate of the volumes of data we collect. If those trends prove true, storage will become increasingly expensive and will require that we stick to the bare minimum: only the data that is truly informational and actionable.

Data cleaning. With volumes always on the rise, the amount of data that is available to data scientists is becoming both an opportunity and a liability. Separating the wheat from the chaff is often difficult and time-consuming. And since these decisions typically need to be made by the data scientist in charge of developing the model, the process is all the more costly.

Data annotation. Using larger amounts of data requires more labels, and using crowds of human annotators isn't enough anymore. Semi-automated labeling and active learning are becoming increasingly attractive to many companies, especially those with very large volumes of data. However, the licenses for those platforms can add substantially to the total price of your ML initiative, especially when your data shows strong seasonal patterns and needs to be relabeled regularly.

Compute power. Just like data storage, compute power is becoming commoditized, and many companies opt for cloud-based solutions such as AWS or GCP. However, with large volumes of data and complex models, the bill can become a considerable part of the entire budget and can sometimes even require a hefty investment in a server solution.

Modeling cost. The model development phase accounts for the most unpredictable cost in your final bill because the amount of time required to build a model depends on many different factors: the skill of your ML team, problem complexity, required accuracy, data quality, time constraints, and even luck. Hyperparameter tuning for deep learning makes things even more hectic, as this phase of development benefits little from experience, and a trial-and-error approach usually prevails. A typical model will take about six weeks of development for a mid-level data scientist, so that's about $15K in salary alone.

Deployment cost. Depending on the organization, this phase can be either fast or slow. If the company is mature from an ML perspective and already has a standardized path to production, deploying a model will likely take about two weeks of an ML engineer's time, so about $5K. However, more often than not, you'll require custom work, and that can make the deployment phase the most time-consuming and expensive part of creating a live ML MVP.

The cost of diagnosis

Recent years have seen an explosion in the number of ML projects powered by deep learning architectures. But along with the fantastic promise of deep learning comes the most frightening challenge in machine learning: lack of explainability. Deep learning models can have tens of thousands, if not hundreds of thousands, of parameters, and this makes it impossible for data scientists to use intuition when trying to diagnose problems with the system. This is likely one of the chief reasons ineffective models are taken offline rather than fixed and improved. If, after weeks of waiting for the ML team to diagnose a mistake, the team still can't find the problem, it's easiest to move on and start over.

And because most data scientists are trained as researchers rather than engineers, their core expertise, as well as their interest, rarely lies in improving existing systems but rather in exploring new ideas. Pushing your data science experts to spend most of their time "fixing" things (which could cost you 70 percent of your R&D budget) could considerably increase churn among them. Ultimately, debugging, or even incrementally improving, an ML MVP can prove much more costly than a similarly sized "traditional" software engineering MVP.

Yet ML MVPs remain an absolute must, because if the weakness in the model originates in the bad quality of the data, all further investments to improve the model will be doomed to failure, no matter how much money you throw at the project. Similarly, if the model underperforms because it was not deployed or monitored properly, then any money spent on improving data quality will be wasted.

How to succeed with an MVP

But there is hope. It is just a matter of time until the lean methodology that has seen huge success within the software development community proves itself useful for machine learning projects as well. For this to happen, though, we’ll have to see a shift in mindset among data scientists, a group known to value perfectionism over short time-to-market. Business leaders will also need to understand the subtle differences between an engineering and a machine learning MVP:

Data scientists need to evaluate the data and the model separately. The fact that the application is not providing the desired results might be caused by one or the other, or both, and diagnosis can never converge unless data scientists keep this fact in mind. Because data scientists now have the option of improving their data collection process, they can do justice to models that would otherwise have been written off as hopeless. (A minimal sketch of this separation follows this list.)

Be patient with ROI. Because the ROI curve of ML is S-shaped, even MVPs require way more work than you would typically anticipate. As we have seen, ML products require many complex steps to reach completion, and this needs to be communicated to stakeholders early and often to limit the risk of frustration and premature abandonment of a project.

Diagnosing is costly but critical. Debugging ML systems is almost always extremely time-consuming, in particular because of the lack of explainability in many modern (deep learning) models. Rebuilding from scratch looks cheaper but is a worse financial bet, because humans have a natural tendency to repeat the same mistakes. Obtaining the right diagnosis will ensure your ML team knows with precision what requires attention (whether it be the data, the model, or the deployment), allowing you to keep the costs of the project from exploding. Diagnosing problems also gives your team the opportunity to learn valuable lessons from their mistakes, potentially shortening future project cycles. Failed models can be a mine of information; redesigning from scratch is thus often a lost opportunity.

Make sure no single person has the keys to your project. Unfortunately, extremely short tenures are the norm among machine learning employees. When key team members leave a project, its problems are even harder to diagnose, so company leaders must ensure that "tribal" knowledge is not owned by any single person on the team. Otherwise, even the most promising MVPs will have to be abandoned. Make sure that once your MVP is ready for the market, you start gathering data as fast as possible and that learnings from the project are shared with your entire team.
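As promised above, here is a minimal sketch of the first point in this list: diagnosing the data and the model separately. The checks, function names, thresholds, and synthetic dataset are illustrative assumptions, not the author's prescriptions:

```python
# Minimal sketch: evaluate the data and the model separately.
# All checks, names, and data here are hypothetical illustrations.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

def data_quality_report(X, y):
    """Diagnose the data on its own before blaming the model."""
    return {
        "n_rows": len(X),
        "missing_fraction": float(np.isnan(X).mean()),
        "label_balance": float(y.mean()),          # far from 0.5 suggests imbalance
        "duplicate_rows": int(len(X) - len(np.unique(X, axis=0))),
    }

def model_report(X, y):
    """Diagnose the model on data assumed to be clean."""
    scores = cross_val_score(LogisticRegression(max_iter=1000), X, y, cv=5)
    return {"cv_accuracy_mean": float(scores.mean()),
            "cv_accuracy_std": float(scores.std())}

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 5))
y = (X[:, 0] + rng.normal(scale=0.5, size=500) > 0).astype(int)

print(data_quality_report(X, y))   # if this looks bad, fix the data first
print(model_report(X, y))          # only then judge the model itself
```

Keeping the two reports separate means a bad cross-validation score can be attributed to the model only after the data report comes back clean.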

No shortcuts

No matter how long you have worked in the field, ML models are daunting, especially when the data is highly dimensional and high volume. For the highest chances of success, you need to test your model early with an MVP and invest the necessary time and money in diagnosing and fixing its weaknesses. There are no shortcuts.

Jennifer Prendki is VP of Machine Learning at Figure Eight.

https://singularityhub.com/2018/11/24/what-happens-to-the-brain-in-zero-gravity/

What Happens to the Brain in Zero Gravity?

NASA has made a commitment to send humans to Mars by the 2030s. This is an ambitious goal when you consider that a typical round trip will take anywhere between three and six months and crews will be expected to stay on the red planet for up to two years before planetary alignment allows for the return journey home. It means that the astronauts will have to live in reduced (micro) gravity for about three years, well beyond the current record of 438 continuous days in space held by the Russian cosmonaut Valery Polyakov.

In the early days of space travel, scientists worked hard to figure out how to overcome the force of gravity so that a rocket could catapult free of Earth’s pull in order to land humans on the Moon. Today, gravity remains at the top of the science agenda, but this time we’re more interested in how reduced gravity affects the astronauts’ health, especially their brains. After all, we’ve evolved to exist within Earth’s gravity (1 g), not in the weightlessness of space (0 g) or the microgravity of Mars (0.3 g).

So exactly how does the human brain cope with microgravity? Poorly, in a nutshell—although information about this is limited. This is surprising, since we’re familiar with astronauts’ faces becoming red and bloated during weightlessness—a phenomenon affectionately known as the “Charlie Brown effect”, or “puffy head bird legs syndrome”. This is due to fluid consisting mostly of blood (cells and plasma) and cerebrospinal fluid shifting towards the head, causing them to have round, puffy faces and thinner legs.

These fluid shifts are also associated with space motion sickness, headaches, and nausea. They have also, more recently, been linked to blurred vision due to a build-up of pressure as blood flow increases and the brain floats upward inside the skull—a condition called visual impairment and intracranial pressure syndrome. Even though NASA considers this syndrome to be the top health risk for any mission to Mars, what causes it—and, an even tougher question, how to prevent it—remains a mystery.

So where does my research fit into this? Well, I think that certain parts of the brain end up receiving way too much blood because nitric oxide—an invisible molecule which is usually floating around in the blood stream—builds up in the bloodstream. This makes the arteries supplying the brain with blood relax, so that they open up too much. As a result of this relentless surge in blood flow, the blood-brain barrier (the brain’s “shock absorber”) may become overwhelmed. This allows water to slowly build up (a condition called oedema), causing brain swelling and an increase in pressure that can also be made worse due to limits in its drainage capacity.

Think of it like a river overflowing its banks. The end result is that not enough oxygen gets to parts of the brain fast enough. This is a big problem that could explain why blurred vision occurs, as well as effects on other skills, including astronauts' cognitive agility (how they think, concentrate, reason and move).

A Trip in the ‘Vomit Comet’

To work out whether my idea was right, we needed to test it. But rather than ask NASA for a trip to the moon, we escaped the bonds of Earth’s gravity by simulating weightlessness in a special aeroplane nicknamed the “vomit comet.”

By climbing and then dipping through the air, this plane performs up to 30 of these "parabolas" in a single flight to simulate the feeling of weightlessness. Each one lasts only 30 seconds and, I must admit, it's very addictive, and you really do get a puffy face!

With all of the equipment securely fastened down, we took measurements from eight volunteers who took a single flight every day for four days. We measured blood flow in different arteries that supply the brain using a portable doppler ultrasound, which works by bouncing high-frequency sound waves off circulating red blood cells. We also measured nitric oxide levels in blood samples taken from the forearm vein, as well as other invisible molecules that included free radicals and brain-specific proteins (which reflect structural damage to the brain) that could tell us if the blood-brain barrier has been forced open.

Our initial findings confirmed what we anticipated. Nitric oxide levels increased following repeated bouts of weightlessness, and this coincided with increased blood flow, particularly through arteries that supply the back of the brain. This forced the blood-brain barrier open, although there was no evidence of structural brain damage.

We’re now planning to follow up these studies with more detailed assessments of blood and fluid shifts in the brain, using imaging techniques such as magnetic resonance to confirm our findings. We’re also going to explore the effects of countermeasures such as rubber suction trousers—which create a negative pressure in the lower half of the body with the idea that they can help “suck” blood away from the astronaut’s brain—as well as drugs to counteract the increase in nitric oxide.

But these findings won’t just improve space travel—they can also provide valuable information as to why the “gravity” of exercise is good medicine for the brain and how it can protect against dementia and stroke in later life.

Damian Bailey, Professor of Physiology and Biochemistry, University of South Wales

This article is republished from The Conversation under a Creative Commons license. Read the original article.

https://physics.aps.org/synopsis-for/10.1103/PhysRevLett.121.213601

Synopsis: Making Mixtures of Magnetic Condensates

A condensate mixing two species of strongly magnetic atoms provides a new experimental window into many-body phenomena.

In recent years, researchers have been able to prepare condensates of ultracold atoms with large magnetic moments. In these condensates, the interactions between the atoms’ magnetic dipoles give rise to exotic phases, some of which are analogous to those found in liquid crystals and superfluids. The condensates demonstrated to date have involved single species of magnetic atoms like dysprosium (Dy) or erbium (Er) (see 21 May 2012 Viewpoint). Now, Francesca Ferlaino and colleagues at the Institute for Quantum Optics and Quantum Information, Austria, have produced condensates that mix Dy and Er. The ability to couple two distinct dipolar atomic species will provide an opportunity to explore new quantum behaviors of ultracold gases.

Starting with an atomic beam of Dy and Er, the team used a combination of lasers and magnetic fields to trap the atoms and to cool them by evaporation. Getting both atomic species to condense at the same time, however, entailed a new trick. The researchers used traps whose shape and depth could be tuned so that the more weakly trapped Er would evaporate more easily than Dy, in turn serving as a coolant for the Dy.

Working with different isotopes of Dy and Er—some fermionic, some bosonic—they produced a variety of Bose-Bose or Bose-Fermi quantum mixtures. The authors observed signatures of strong interspecies interaction: For some isotope combinations, the forces between Er and Dy turned out to be repulsive, displacing the two condensates upwards and downwards, respectively, relative to the trap in which they were created. The magnetic mixture may allow researchers to study hard-to-probe quantum phases, such as fermionic superfluids with direction-dependent properties.

This research is published in Physical Review Letters.

–Nicolas Doiron-Leyraud

Nicolas Doiron-Leyraud is a Corresponding Editor at Physics and a researcher at the University of Sherbrooke.

https://www.sciencealert.com/a-hidden-region-of-the-human-brain-was-revealed-while-making-an-atlas

Neuroscientists Have Discovered a Previously Hidden Region in The Human Brain

TESSA KOUMOUNDOUROS
22 NOV 2018

It turns out we humans may have an extra type of thinky bit that isn’t found in other primates. A previously unknown brain structure was identified while scientists carefully imaged parts of the human brain for an upcoming atlas on brain anatomy.

Neuroscientist George Paxinos and his team at Neuroscience Research Australia (NeuRA) have named their discovery the endorestiform nucleus – because it is located within (endo) the inferior cerebellar peduncle (also called the restiform body). It’s found at the base of the brain, near where the brain meets the spinal cord.

This area is involved in receiving sensory and motor information from our bodies to refine our posture, balance and movements.

“The inferior cerebellar peduncle is like a river carrying information from the spinal cord and brainstem to the cerebellum,” Paxinos told ScienceAlert.

“The endorestiform nucleus is a group of neurons, and it is like an island in this river.”

Neuroscientist Lyndsey Collins-Praino from Adelaide University, who was not involved in the study, told ScienceAlert that Paxinos’ discovery is “intriguing”.

“While one can speculate that the endorestiform nucleus may play a key role in [the functions of the inferior cerebellar peduncle], it is too early to know its true significance,” she added.

Paxinos confirmed the existence of this brain structure while using a relatively new brain staining technique he developed to make images of the brain tissues clearer (and surely also prettier!) for the latest neuroanatomy atlas he has been working on.

These stains target cell products actively being made – chemicals in the brain such as neurotransmitters, providing a map of brain tissues. This helps to differentiate the neuron groups by their function – rather than just the traditional way of separating them by how the cells look – revealing what is known as the chemoarchitecture of the brain.

“The endorestiform nucleus is all too evident by its dense staining for [the enzyme] acetylcholinesterase, all the more evident because the surrounding areas are negative,” Paxinos explained.

“It was nearly the case the nucleus discovered me, rather than the other way around.”

In fact, Paxinos had been receiving clues that the endorestiform nucleus existed for decades. In a procedure called a therapeutic anterolateral cordotomy – a surgery to achieve relief from extreme and incurable pain by cutting spinal pathways – he and his colleagues had noticed that the long fibres from the spine seemed to end around where the endorestiform nucleus was found.

“It has been staring at me from the anterolateral cordotomies and also from the chemical stains I use in my lab,” he told ScienceAlert.

The location of this elusive brain bit leads Paxinos to suspect it may be involved in fine motor control – something also backed up by the fact that this structure has yet to be identified in other animals, including marmosets or rhesus monkeys.

“I cannot imagine a chimpanzee playing the guitar as dexterously as us, even if they liked to make music,” Paxinos pointed out.

Humans have brains at least twice as big as chimpanzees’ (1,300 grams vs 600 grams, or 2.9 lbs vs 1.3 lbs), and a larger percentage of our brain’s neuronal pathways that signal for movement make direct contact with motor neurons – 20 percent, compared to 5 percent in other primates.

So, the endorestiform nucleus may be another unique feature in our nervous system, although it’s too soon to tell just yet. Paxinos is set to do some work in chimpanzees soon.

In order to discover what function the endorestiform nucleus might serve, we may have to wait for higher resolution MRIs capable of studying it in a living person.

Comparing the normal brains studied for the atlas with those from people with known abnormalities might also lead to some insights.

“Neuroanatomy is critical for serving as the foundation that we build a knowledge of both normal and abnormal function upon, but, at this time, it is simply impossible to know what implications this discovery may have for neurological or psychiatric disease,” Collins-Praino told ScienceAlert.

“Investigations into the functionality of this nucleus in the coming years will be key in answering these questions.”

Paxinos, who has 52 brain-mapping books under his belt, plans to keep using this new staining technique to thoroughly search our brains for more bits and compare them across species, to gain a greater understanding of how they work.

This discovery has yet to be examined by peer review, but details of the new brain area can be found in Paxinos’ latest atlas, titled Human Brainstem: Cytoarchitecture, Chemoarchitecture, Myeloarchitecture.


https://medicalxpress.com/news/2018-11-brain.html

How the brain switches between different sets of rules

November 19, 2018, Massachusetts Institute of Technology

Cognitive flexibility—the brain’s ability to switch between different rules or action plans depending on the context—is key to many of our everyday activities. For example, imagine you’re driving on a highway at 65 miles per hour. When you exit onto a local street, you realize that the situation has changed and you need to slow down.

When we move between different contexts like this, our brain holds multiple sets of rules in mind so that it can switch to the appropriate one when necessary. These neural representations of task rules are maintained in the prefrontal cortex, the part of the brain responsible for planning action.

A new study from MIT has found that a region of the thalamus is key to the process of switching between the rules required for different contexts. This region, called the mediodorsal thalamus, suppresses representations that are not currently needed. That suppression also protects the representations as a short-term memory that can be reactivated when needed.

“It seems like a way to toggle between irrelevant and relevant contexts, and one advantage is that it protects the currently irrelevant representations from being overwritten,” says Michael Halassa, an assistant professor of brain and cognitive sciences and a member of MIT’s McGovern Institute for Brain Research.

Halassa is the senior author of the paper, which appears in the Nov. 19 issue of Nature Neuroscience. The paper’s first author is former MIT graduate student Rajeev Rikhye, who is now a postdoc in Halassa’s lab. Aditya Gilra, a postdoc at the University of Bonn, is also an author.

Changing the rules

Previous studies have found that the prefrontal cortex is essential for cognitive flexibility, and that a part of the thalamus called the mediodorsal thalamus also contributes to this ability. In a 2017 study published in Nature, Halassa and his colleagues showed that the mediodorsal thalamus helps the prefrontal cortex to keep a thought in mind by temporarily strengthening the neuronal connections in the prefrontal cortex that encode that particular thought.

In the new study, Halassa wanted to further investigate the relationship between the mediodorsal thalamus and the prefrontal cortex. To do that, he created a task in which mice learn to switch back and forth between two different contexts—one in which they must follow visual instructions and one in which they must follow auditory instructions.

In each trial, the mice are given both a visual target (flash of light to the right or left) and an auditory target (a tone that sweeps from high to low pitch, or vice versa). These targets offer conflicting instructions. One tells the mouse to go to the right to get a reward; the other tells it to go left. Before each trial begins, the mice are given a cue that tells them whether to follow the visual or auditory target.

“The only way for the animal to solve the task is to keep the cue in mind over the entire delay, until the targets are given,” Halassa says.

The researchers found that thalamic input is necessary for the mice to successfully switch from one context to another. When they suppressed the mediodorsal thalamus during the cuing period of a series of trials in which the context did not change, there was no effect on performance. However, if they suppressed the mediodorsal thalamus during the switch to a different context, it took the mice much longer to switch.

By recording from neurons of the prefrontal cortex, the researchers found that when the mediodorsal thalamus was suppressed, the representation of the old context in the prefrontal cortex could not be turned off, making it much harder to switch to the new context.

In addition to helping the brain switch between contexts, this process also appears to help maintain the neural representation of the context that is not currently being used, so that it doesn’t get overwritten, Halassa says. This allows it to be activated again when needed. The mice could maintain these representations over hundreds of trials, but the next day, they had to relearn the rules associated with each context.

Multitasking AI

The findings could help guide the development of better artificial intelligence algorithms, Halassa says. The human brain is very good at learning many different kinds of tasks—singing, walking, talking, etc. However, neural networks (a type of artificial intelligence based on interconnected nodes similar to neurons) are usually good at learning only one thing. These networks are subject to a phenomenon called “catastrophic forgetting”—when they try to learn a new task, previous tasks become overwritten.
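For readers unfamiliar with the term, here is a toy sketch of catastrophic forgetting, using synthetic tasks and a simple linear classifier standing in for the neural networks the article describes. A model trained sequentially on a second, conflicting task loses its accuracy on the first:

```python
# Toy illustration of "catastrophic forgetting": a single classifier trained
# sequentially on two conflicting tasks loses the first one. Synthetic data;
# this sketches the phenomenon, not the MIT study itself.
import numpy as np
from sklearn.linear_model import SGDClassifier
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)

def make_task(w):
    X = rng.normal(size=(1000, 2))
    y = (X @ w > 0).astype(int)
    return X, y

X_a, y_a = make_task(np.array([1.0, 0.0]))   # task A: rule depends on feature 0
X_b, y_b = make_task(np.array([0.0, 1.0]))   # task B: conflicting rule on feature 1

clf = SGDClassifier(random_state=0)
clf.partial_fit(X_a, y_a, classes=[0, 1])
print("task A accuracy after learning A:", accuracy_score(y_a, clf.predict(X_a)))

for _ in range(20):                          # keep training on task B only
    clf.partial_fit(X_b, y_b)
print("task A accuracy after learning B:", accuracy_score(y_a, clf.predict(X_a)))
# The first score is high; the second drops toward chance as task A is overwritten.
```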

Halassa and his colleagues now hope to apply their findings to improve artificial neural networks’ ability to store previously learned tasks while learning to perform new ones.

More information: Thalamic regulation of switching between cortical representations enables cognitive flexibility, Nature Neuroscience (2018). DOI: 10.1038/s41593-018-0269-z, https://www.nature.com/articles/s41593-018-0269-z

https://www.iphoneincanada.ca/shaw/freedom-mobile-5gb-10gb/

This morning Shaw’s Freedom Mobile debuted a ‘Big Binge’ 100GB LTE data bonus, as part of its Black Friday promos.

The 100GB LTE data bonus is available for new and existing customers who subscribe to a $60 per month or higher plan while activating a new smartphone on MyTab. The bonus lets customers keep consuming data for free once they use up their allotted monthly data. While Freedom Mobile doesn’t charge data overages, it does throttle data down to 3G speeds once a data bucket is used up.

While the 100GB LTE data bonus has taken the limelight, Freedom Mobile also launched data bonuses across a variety of prepaid and postpaid plans, ranging from 5GB to 30GB.

Here’s what’s available:

  • 5GB – New customers only: Activate a Pay Before service on a current in-market $40/month or higher rate plan.
  • 10GB – New customers only: Activate a Pay After service on a current in-market $40-$49/month rate plan.
  • 15GB – New customers: Activate a Pay After service on a current in-market $50-$59/month rate plan, or a Pay After service with MyTab on a current in-market $40-$49/month rate plan. Existing customers: Upgrade your phone with MyTab on a $40-49/month rate plan.
  • 30GB – New customers: Activate a Pay After service on a current in-market $60/month or higher rate plan, or a Pay After service with MyTab on a current in-market $50-$59/month rate plan. Existing customers: Upgrade your phone with MyTab on a $50-59/month rate plan.

The company’s internal documents (seen by iPhone in Canada) for store employees give example scenarios of how these data bonuses can be applied:

“Raj activates on the Big Gig + Talk 10GB with a BYOP (Pay After) and gets 30GB of bonus data. He goes over his full-speed data by 2GB per month, which means his bonus data will last 15 months!”

Freedom Mobile says “your bonus data will only be used after you have used up all the fast LTE data in your rate plan each month and, once the total amount of bonus data is depleted, it will not be replenished.”

These Freedom Mobile data bonuses look to target customers who are tired of paying for data overages, which can cost up to $100 per gigabyte from incumbent wireless carriers. Running this limited-time promo over Black Friday will definitely see people signing up, especially in areas with good network coverage.

https://www.forbes.com/sites/lilachbullock/2018/11/12/the-3-ways-that-artificial-intelligence-will-change-content-marketing/#7703429c618f

The 3 Ways That Artificial Intelligence Will Change Content Marketing

In many ways, artificial intelligence (AI) is already influencing digital marketing in general, and content marketing in particular. But the truth is, there is so much more to come – so many more changes and improvements that AI will surely bring to content marketing.

In this blog post, I’m going to explore some of these changes in order to try to understand what the future holds – read on to discover the 3 ways that artificial intelligence will change content marketing.

What exactly is artificial intelligence?

Before I can discuss the effects of artificial intelligence – also known as AI, machine intelligence and in some cases, machine learning – on content marketing, it’s important to first understand what exactly artificial intelligence is.

So, what is AI, exactly?

Techopedia defines it as “an area of computer science that emphasizes the creation of intelligent machines that work and react like humans. Some of the activities computers with artificial intelligence are designed for include:

  • Speech recognition
  • Learning
  • Planning
  • And problem solving”

For example, such a machine would be a self-driving car: a car that doesn’t need any humans to operate it in order to safely drive itself. Or a computer that can play chess with you and make on-the-spot decisions as needed. Or, a simple everyday example that many can relate to: the content Netflix suggests you watch (all based on machine learning).

In other words, AI permits machines to learn from data and use that knowledge to perform human-like tasks.

And unsurprisingly, AI has also already started to make an impact on marketing, from AI content curation to chatbots – but how exactly is it (and will it be) impacting content marketing?

More Personalized Content

One of AI’s main functions is its ability to analyse huge amounts of data – and interpret them. That is an incredible feature and something that can have huge effects on content marketing and even marketing in general.

One of these effects is that it will help content marketers understand exactly who they’re targeting. Not in a creepy way, but rather in a way that many consumers expect: a Salesforce study, for example, found that 76% of consumers expect companies to understand their needs and expectations.

After all, many of today’s most popular products and services offer highly personalized experiences – like, of course, Amazon.

Content is no different than other forms of marketing when it comes to the need for personalization; consumers want a personalized experience, including only seeing content that is directly relevant to them.

So, how exactly will artificial intelligence help us create this type of content?

It’s all about the data and segmentation: AI can absorb huge amounts of data and help you segment it easily.

When it comes to audiences, AI can help you understand who exactly forms your audience, what platforms they use predominantly, what other content they read, what types of content they prefer, and so on.
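As a concrete (and deliberately simplified) sketch of that kind of audience segmentation, here is what clustering reader-behavior data might look like; the features and numbers are invented for illustration, not drawn from any real marketing platform:

```python
# Minimal sketch of AI-driven audience segmentation: cluster readers by
# behavior so content can be personalized per segment. Synthetic data only.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(42)
# hypothetical columns: articles read/week, avg. seconds on page, share rate
audience = rng.normal(loc=[5, 120, 0.02], scale=[2, 40, 0.01], size=(300, 3))

segments = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(
    StandardScaler().fit_transform(audience)
)
for s in range(3):
    print(f"segment {s}: {np.sum(segments == s)} readers, "
          f"mean profile = {audience[segments == s].mean(axis=0).round(2)}")
```

Each segment's mean profile then suggests what content mix to target it with; real systems add far more behavioral features, but the mechanics are the same.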

Build Better Content Marketing Strategies With An AI Marketing Assistant

Geometric facade of 51 Astor Place (the IBM Watson Building) at Astor Place in Manhattan, New York City. GETTY ROYALTY FREE

One of the ways that AI is already heavily impacting content marketing is with AI marketing assistants – like IBM Watson’s Lucy.

Lucy is an incredibly powerful tool that marketers can use for research, segmentation and planning – and it’s so powerful that it can do more in a minute than an entire team of marketers can achieve in months.

So, how exactly does an AI marketing assistant like Lucy work?

To start with, Lucy can absorb and analyse literally all of the data your company owns or has commissioned or licensed. What’s more, once it absorbs all of this data, you can ask it any question you might have, no matter how complex, and it will find the answer for you:

  • Which regions should I first target?
  • What mix of content should I create for my audience for maximum results?
  • What are my competitors up to?
  • What are the main personality traits of my audience?

These are questions that companies need to answer in order to put together a strategy that works. But finding these answers is not exactly easy when you don’t have a tool like Lucy on your side – gathering and interpreting these vast amounts of data would be a difficult, if not almost impossible task without help.

And the possibilities of marketing assistants like Lucy don’t end here:

  • You can create clear and complex segments of your target audience so that you can create highly personalized content
  • Plan your content marketing (and other marketing) strategies by seeing how different strategies would work and what results you can expect

Systems like Lucy will have a huge impact on content marketing as they become more affordable and more popular. They will help companies better understand their audience and their data in general and what’s more, they will help marketers put together more effective strategies as well as help them understand what types of outcomes they can expect.

Will AI Take Over Content Writing Jobs Completely?

Business man of freelancer working using laptop computer in home office, Communication technology and Business conceptGETTY ROYALTY FREE

Unsurprisingly, when it comes to technology this powerful, it’s easy to get a little scared: will it take my job from me? Will all content be written by machines in a few years?

If AI can analyse this much data and take human-like decisions based on this data – and even arguably better decisions, since a powerful machine can hold so much more knowledge than any human realistically can – can’t it also write the content, maybe just as well or even better than a human can?

In the world of journalism, it’s already happened – as far back as 2015. Back then, the Associated Press published a short financial news story: “Apple tops Street 1Q forecasts”. The piece could very well have been written by a real human – but if you read to the end, you would see that it was actually “generated” by Automated Insights; in other words, it was written (or generated, if you will) by a so-called “robot journalist”.

The Associated Press isn’t the only outlet to have experimented with robot journalism (or automated journalism), either; numerous top publications use it, such as The Washington Post, which published as many as 850 articles generated by its robot journalist over the course of one year.

That might sound gloomy to many, but it doesn’t necessarily mean that AI is taking over any and all content writing jobs. Rather, this type of AI technology can be used to free up your time and give you all the data you need so that you can focus on creating better content. It can be used to automate the little tasks – like the Associated Press’s news story about Apple’s quarterly earnings – while humans focus on writing the more complex and nuanced content.

Conclusion: Will AI Take Over Content Marketing Jobs?

Clearly, AI will rapidly become an even bigger part of digital marketing. It already is taking over numerous aspects of our lives and even threatening some jobs, but how much will it really affect content marketing?

While no one can predict the future, the present does tell us that AI will have a real impact on content marketing – from creating incredibly detailed marketing strategies based on huge amounts of data to generating content automatically, marketers will start using AI more and more to help them every step of the way.

https://www.iphoneincanada.ca/reviews/apple-watch-series-4-review/

Apple launched sales of its new Apple Watch Series 4 in late September. We’ve been spending some time with the latest Apple Watch since then, so here’s our quick review of this wearable (in fewer than 1,000,000 words), and whether it’s worth an upgrade from an older model.

Last year, when the Series 3 launched, we didn’t see any need to upgrade from our existing 42mm Nike+ Series 2. No major design changes in the Series 3 aside from a speed bump, coupled with the fact that we rarely used any watchOS apps at all, meant it wasn’t exactly worth our time. Are we seriously going to upgrade our watches annually now, too?

But with Apple Watch Series 4, larger displays are now here, and the 44mm Nike+ in Space Grey Aluminium with Anthracite/Black Sport Band ended up being our watch of choice.

We originally opted for a 44mm Space Grey Aluminium Case with Black Sport Loop, but I ended up returning it, because I didn’t like the black Sport Loop and its nylon weave with hints of colour. The nylon took too long to dry after showers, which made for an uncomfortable band when wet. But when dry, the band does offer a tight and snug fit, every time.

Unboxing the Nike+ Apple Watch Series 4

This year’s Apple Watch Series 4 unboxing is different from years past: after you take off the plastic wrap, the boxes for the watch and band are wrapped in an origami-like enclosure.

This time around, the band comes in its own box instead of being attached to your Apple Watch, while the watch itself now sits alone in a felt sleeve, alongside the charging cable and AC adapter.

The Nike+ Anthracite/Black Sport Band looks great—but it collects dust easily out of the box. Once it gets covered in oil from skin over long-term use, this will probably dissipate. It also feels slightly stiffer compared to the Nike+ band on my Series 2.

Apple Watch Series 4 and Series 2, stacked for comparison

There’s a lot of packaging with this new Apple Watch, and it feels excessive, almost unnecessary. The Apple Watch, when it’s delivered, is also packed inside a cardboard box, which uses two plastic foam pieces to hold the package securely in place. I would like to see Apple use cardboard instead of plastic foam here, since the company touts the environment all the time. You’re essentially opening three boxes to get access to your Apple Watch Series 4.

Turning on Apple Watch Series 4 for the First Time

Immediately after powering on the Apple Watch Series 4, the impact of the 30% larger display, which now extends closer to the edges, is instantly noticeable and satisfying to look at. After pairing with my iPhone XS Max, I was off to the races.

Comparing the 44mm Series 4 to the 42mm Series 2, the additional display real estate is awesome.

It’s now much quicker to reply to friends when they finish a workout, punch in your passcode, launch apps and more, all things that tested my patience on my Series 2. Receiving notifications is a real treat, as every alert is just so much bigger. You can even see more detail in your Nest camera alerts now.

The new Digital Crown has haptic feedback during scrolling, which feels really nice and makes you wonder why it wasn’t there before. It feels smoother than the Series 2’s, but we’ll see how this holds up after a few months of being worn in the shower.

The power button design no longer has an audible ‘click’ like the Series 2, and the button itself is seamlessly blended into the side of the watch, making it invisible when looking down.

The watch’s thinner design is noticeable too; the watch just feels better on the wrist now.

The heart rate sensor has been redesigned and includes hardware to record “an ECG similar to a single-lead electrocardiogram”, but of course this feature is not available in Canada yet, as Health Canada is apparently in talks with Apple to approve it for the market here. Users in the USA will soon get an app to unlock the feature, which will be gated only by region settings. That means Canadians will most likely be able to try the ECG feature when it launches in the US later this year, simply by changing their region settings.

New Infograph Watch Face

With Apple Watch Series 4, new watch faces are available, including the new Infograph watch face, which can display up to eight complications, taking advantage of the larger display and now rounded corners.

For those who have been using Apple Watch Series 4 since day one, you’ve probably noticed some glaring omissions among the complications for the Infograph watch face, in particular Messages. You can select favourite contacts, but there’s no standalone Messages complication to show whether you have unread messages.

As Gary Ng (@gary_ng) tweeted:

“Why does this watch face lack the Messages complication? Only favourites are available. Other new watch faces are similar.”

Apple reportedly plans to fix this with a coming watchOS 5.1.2 update, but seriously, this should have been there from the start. Because of this, I’ve been sticking to traditional watch faces where the Messages complication is available. Using my Apple Watch to quickly glance for unread messages is one of the biggest reasons I wear one.

Aside from the above, there are also new Nike+ analog watch faces. They’re really nice, but I rarely use them (along with Nike Run Club), as I’m a fan of the modular watch face.

S4 Performance is Blazing

The new S4 chip in Apple Watch Series 4 makes for a totally overhauled watch experience, especially coming from a Series 2. Moving around watchOS is so instant and fluid, it makes me want to chuck my Series 2 running watchOS 5 into the garbage (well, not really, but you get what I mean). Series 4 couldn’t have come at a better time, and its lightning-fast system speed is probably what first-generation Apple Watch adopters hoped for.

Now, using Siri with Apple Watch actually works (well, we use “works” loosely here) for setting reminders, controlling HomeKit devices and more, thanks to the new and improved microphone.

As for the speaker, it’s a huge improvement: it can get really loud when Siri responds to you (she can talk!) and when taking phone calls.

When connected to Wi-Fi only and not your iPhone, the Series 4 allows you to use dictation to reply to messages, whereas this was not possible with Series 2.

As for battery life, Apple says you get up to 18 hours and by the time we plugged in the Series 4 to charge at night, the percentage pretty much always showed 65-70% or so remaining.

We didn’t pretend to be a crash test dummy and try out fall detection (#saynotoconcussions), but we’re hoping we don’t need to ever test it out. We’ll trust that it just works.

Mickey is now slightly bigger than before.

Apple Watch for Workouts

The majority of my exercise is through cycling or running or playing ice hockey (okay, I’m really just eating Cool Ranch Doritos on the couch). For cycling and running, I like to record my activities using Strava (via my Garmin 520 or Garmin 920xt), which then is able to write data to the Health app and towards my exercise rings.

Apple needs to open up workouts on watchOS to allow native saving of activities to third-party apps. Once it does, it will be much more convenient to launch Workouts on Apple Watch instead of going through a third-party app.

When I’m cycling, I’m wearing a heart rate monitor on my chest, plus I have a cadence sensor and speedometer on my bike. These ANT+ sensors aren’t compatible with Apple Watch or iPhone. So my cycling activity first saves to my Garmin GPS, which then uploads data to Strava and Apple Health.

For running, I usually wear my Garmin heart rate monitor and use my Garmin 920xt to track runs, which then follows the same process as my bike rides when I save my activities. When I’m too lazy to strap on a heart rate monitor and get my Garmin, I’ve run with just my Apple Watch Series 2 and the Strava watchOS app (with my iPhone on me while running; a cellular Apple Watch would let me ditch my iPhone at home). During these workouts, I’ve found the Apple Watch’s heart rate and pace tracking to be off or all over the place. I’ve yet to take this Series 4 on a run (#becausepizzaandnetflix), but once I do, I’ll update you guys (I pinky swear…sorta).

Also, the water lock feature, which locks your display for swim workouts, no longer uses a high-pitched buzz to eject water from the device’s speaker, but rather a lower, less annoying humming sound.

Conclusion – Apple Watch Series 4 is the One to Buy

The moment I saw this new larger 44mm display, I knew the Apple Watch Series 4 was going to stay put and replace my ‘aging’ Series 2. The latest S4 processor makes using Apple Watch such a joy now, finally. It’s much easier to respond to notifications, apps open much faster when I actually use them, and the experience with Siri has gotten better (but Siri itself still needs work).

While Apple still needs to fix missing complications such as a standalone Messages option for new Infograph watch faces (that can be easily fixed by an update), the new Apple Watch Series 4, on a whole, is pretty great and worthy of an upgrade if you’ve been using a Series 2 or older. A Digital Crown with haptic feedback just feels so nice.

But with this ‘new’ Apple we’re seeing, starting prices for Apple Watch jumped this year. The entry 40mm Apple Watch Wi-Fi now starts at a hefty $519 CAD, which is more expensive than the 38mm Apple Watch Series 3 last year at $429 CAD. Yes, I get we have new technology such as larger displays, an ECG and a new design, so those things apparently cost extra. If you’re sporting a Series 3 Apple Watch, you probably don’t need to upgrade until Series 5 makes its debut at some point in time.