https://singularityhub.com/2019/12/29/how-a-machine-that-can-make-anything-would-change-everything/

How a Machine That Can Make Anything Would Change Everything


From time to time, the Singularity Hub editorial team unearths a gem from the archives and wants to share it all over again. It’s usually a piece that was popular back then and we think is still relevant now. This is one of those articles. It was originally published December 25, 2017.


“Something is going to happen in the next forty years that will change things, probably more than anything else since we left the caves.” –James Burke

James Burke has a vision for the future. He believes that by the middle of this century, perhaps as early as 2042, our world will be defined by a new device: the nanofabricator.

These tiny factories will be large at first, like early computers, but soon enough you’ll be able to buy one that can fit on a desk. You’ll pour in some raw materials—perhaps water, air, dirt, and a few powders of rare elements if required—and the nanofabricator will go to work. Powered by flexible photovoltaic panels that coat your house, it will tear apart the molecules of the raw materials, manipulating them on the atomic level to create…anything you like. Food. A new laptop. A copy of Kate Bush’s debut album, The Kick Inside. Anything, providing you can give it both the raw materials and the blueprint for creation.

It sounds like science fiction—although, with the advent of 3D printers in recent years, less so than it used to. Burke, who hosted the BBC show Tomorrow’s World, which introduced bemused and excited audiences to all kinds of technologies, has a decades-long track record of technological predictions. He isn’t alone in envisioning the nanofactory as the technology that will change the world forever. Eric Drexler, thought by many to be the father of nanotechnology, wrote in the 1990s about molecular assemblers, hypothetical machines capable of manipulating matter and constructing molecules on the nano level, with scales of a billionth of a meter.

Richard Feynman, the famous inspirational physicist and bongo-playing eccentric, gave the lecture that inspired Drexler as early as 1959. Feynman's talk, "There's Plenty of Room at the Bottom," speculated about a world where moving individual atoms would be possible. While this is considered more difficult than molecular manufacturing, which seeks to manipulate slightly bigger chunks of matter, to date no one has been able to demonstrate that such machines violate the laws of physics.

In recent years, progress has been made towards this goal. It may well be that we make faster progress by mimicking the processes of biology, where individual cells, optimized by billions of years of evolution, routinely manipulate chemicals and molecules to keep us alive.


But the dream of the nanofabricator is not yet dead. What is perhaps even more astonishing than the idea of having such a device—something that could create anything you want—is the potential consequences it could have for society. Suddenly, all you need is light and raw materials. Starvation ceases to be a problem. After all, what is food? Carbon, hydrogen, nitrogen, phosphorus, sulphur. Nothing that you won't find with some dirt, some air, and maybe a little biomass thrown in for efficiency's sake.

Equally, there’s no need to worry about not having medicine as long as you have the recipe and a nanofabricator. After all, the same elements I listed above could just as easily make insulin, paracetamol, and presumably the superior drugs of the future, too.

What the internet did for information—allowing it to be shared, transmitted, and replicated with ease, instantaneously—the nanofabricator would do for physical objects. Energy will be in plentiful supply from the sun; your Santa Claus machine will be able to create new solar panels and batteries to harness and store this energy whenever it needs to.

Suddenly only three commodities have any value: the raw materials for the nanofabricator (many of which, depending on what you want to make, will be plentiful just from the world around you); the nanofabricators themselves (unless, of course, they can self-replicate, in which case they become just a simple ‘conversion’ away from raw materials); and, finally, the blueprints for the things you want to make.

In a world where material possessions are abundant for everyone, will anyone see any necessity in hoarding these blueprints? Far better for a few designers to tinker and create new things for the joy of it, and share them with all. What does ‘profit’ mean in a world where you can generate anything you want?

As Burke puts it, “This will destroy the current social, economic, and political system, because it will become pointless…every institution, every value system, every aspect of our lives have been governed by scarcity: the problem of distributing a finite amount of stuff. There will be no need for any of the social institutions.”

In other words, if nanofabricators are ever built, the systems and structure of the world as we know them were built to solve a problem that will no longer exist.

In some ways, speculating about such a world that’s so far removed from our own reminds me of Eliezer Yudkowsky’s warning about trying to divine what a superintelligent AI might make of the human race. We are limited to considering things in our own terms; we might think of a mouse as low on the scale of intelligence, and Einstein as the high end. But superintelligence breaks the scale; there is no sense in comparing it to anything we know, because it is different in kind. In the same way, such a world would be different in kind to the one we live in today.

We, too, will be different in kind. Liberated more than ever before from the drive for survival, the great struggle of humanity. No human attempts at measurement can comprehend what is inside a black hole, a physical singularity. Similarly, inside the veil of this technological singularity, no human attempts at prognostication can really comprehend what the future will look like. The one thing that seems certain is that human history will be forever divided in two. We may well be living in the Dark Age before this great dawn. Or it may never happen. But James Burke, just as he did over forty years ago, has faith.

https://www.wired.co.uk/article/ai-talk-animals

Artificial intelligence is helping us talk to animals (yes, really)

AI has helped us decode ancient languages, and now researchers are turning the same technique to help understand our pets


 


Each time any of us uses a tool such as Gmail, where a powerful agent helps correct our spelling and suggest sentence endings, there's an AI machine in the background, steadily getting better and better at understanding language. Sentence structures are parsed, word choices understood, idioms recognised.

That same capability could, in 2020, grant us the ability to speak with other large animals. Really. Maybe even sooner than brain-computer interfaces take the stage.

Our AI-enhanced abilities to decode languages have reached a point where they could start to parse languages not spoken by anyone alive. Recently, researchers from MIT and Google applied these abilities to ancient scripts – Linear B and Ugaritic (a precursor of Hebrew) – with reasonable success (no luck so far with the older, and as-yet undeciphered Linear A).

First, word-to-word relations for a specific language are mapped, using vast databases of text. The system searches texts to see how often each word appears next to every other word. This pattern of appearances is a unique signature that defines the word in a multidimensional parameter space. Researchers estimate that languages – all languages – can be best described as having 600 independent dimensions of relationships, where each word-word relationship can be seen as a vector in this space. This vector acts as a powerful constraint on how the word can appear in any translation the machine comes up with.

These vectors obey some simple rules. For example: king – man + woman = queen. Any sentence can be described as a set of vectors that in turn form a trajectory through the word space.
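To make the arithmetic concrete, here is a minimal sketch in Python using the gensim library and one of its downloadable pretrained GloVe models; the library and model name are illustrative assumptions on my part, not the tools used by the MIT and Google researchers.

```python
# Minimal sketch of word-vector arithmetic, assuming the gensim library and its
# downloadable "glove-wiki-gigaword-100" GloVe embeddings (100 dimensions).
# Illustrative only; not the system used in the decipherment research above.
import gensim.downloader as api

vectors = api.load("glove-wiki-gigaword-100")   # downloads on first use

# Each word is a point in a high-dimensional space; relationships are directions.
print(vectors["king"].shape)                    # (100,)

# king - man + woman lands closest to "queen" in that space.
print(vectors.most_similar(positive=["king", "woman"], negative=["man"], topn=1))
```

The decipherment work builds on the same idea: if two languages carve up this space in similar ways, an algorithm can try to align one language's cloud of vectors with another's and pair off words whose signatures match.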

These relationships persist even when a language has multiple words for related concepts: the famed near-100 words Inuits have for snow will all be in similar dimensional spaces – each time someone talks about snow, it will always be in a similar linguistic context.

Take a leap. Imagine that whale songs are communicating in a word-like structure. Then, what if the relationships that whales have for their ideas have dimensional relationships similar to those we see in human languages?

That means we should be able to map key elements of whale songs to dimensional spaces, and thus to comprehend what whales are talking about and perhaps to talk to and hear back from them. Remember: some whales have brain volumes three times larger than those of adult humans, larger cortical areas, and lower – but comparable – neuron counts. African elephants have three times as many neurons as humans, though distributed very differently than they are in our own brains. It seems reasonable to assume that the other large mammals on earth, at the very least, have thinking, communicating, and learning attributes we can connect with.

What are the key elements of whale songs and of elephant sounds? Phonemes? Blocks of repeated sounds? Tones? Nobody knows, yet, but at least the journey has begun. Projects such as the Earth Species Project aim to put the tools of our time – particularly artificial intelligence, and all that we have learned in using computers to understand our own languages – to the awesome task of hearing what animals have to say to each other, and to us.

There is something deeply comforting in the thought that AI language tools could do something so beautiful, going beyond completing our emails and putting ads in front of us, to knitting together all thinking species. That, we perhaps can all agree, is a better – and perhaps nearer-term – ideal to reach than brain-computer communications. The beauty of communicating with wild species will then be joined to the market appeal of talking to our pet dogs. (Cats may remain beyond reach.)

Mary Lou Jepsen is the founder and CEO of Openwater. John Ryan, her husband, is a former partner at Monitor Group

https://www.engadget.com/2019/12/28/tesla-cybertruck-travis-scott-music-video/?guccounter=1&guce_referrer=aHR0cHM6Ly9uZXdzLmdvb2dsZS5jb20v&guce_referrer_sig=AQAAACQvFkYfHKBzKZll0MLZF6dvaT37PsC0eJuMM58S32inzm0V56iDry5ZxPDegzuDc-dCbJBApeaf1jzzGtqKrDuqdA7ZBS7jdQIiNjL-ZYrtVyojw4vMCv1cyfFlS2jzbvJ04_v7_qV4CVu8FQJj6bgydgRNbZzUx1iVxcCnyUFe

Tesla’s Cybertruck found its way into a Travis Scott music video

This definitely isn’t conventional product placement.
Travis Scott/YouTube

Tesla likes to brag about racking up sales without a lick of advertising, but it’s apparently not averse to some product placement. Rapper Travis Scott has shared the video for “Gang Gang,” and the car-centric video includes extensive, conspicuous shots of Scott and crew performing around (and occasionally using) both the Cybertruck and the Cyberquad electric ATV. There’s even a Boring Company Not-A-Flamethrower thrown in for good measure — the supercars in the rest of the clip are practically window dressing in comparison.

 

It’s not clear how the EVs ended up in Scott’s video. We’ve asked Tesla for comment.

There's a good possibility Scott or the producers have a close connection with Elon Musk, though. Musk is one of the very few people to drive the Cybertruck in public, and he has been spotted hobnobbing with Scott and other stars as recently as Christmas Eve. Whether the video spot is formal product placement or just a favor for a friend, it clearly represents an attempt to build buzz (not to mention more deposits) for the electric truck well before it's available to the public.

https://www.cnx-software.com/2019/12/29/nanovision-nanoberry-miniature-evaluation-kits-released-for-arduino-and-raspberry-pi-platforms/

NanoVision & NanoBerry Miniature Computer Vision Evaluation Kits Released For Arduino & Raspberry Pi Platforms

AMS (Austria Mikro Systeme), known for its array of micro-sensing solutions, most notably the NanEye, a miniature CMOS image sensor designed for applications where size is a critical factor, has launched a set of evaluation kits called the NanoVision and the NanoBerry for the development of solutions based on the AMS NanEyeC miniature image sensor.

NanEyeC Camera Sensor

The NanEyeC comes in a surface-mount footprint of just 1mm x 1mm and delivers 100-kilopixel resolution at up to 58 frames/s. It appears to be based on the NanEye series, which typically features a 249×250 resolution with highly sensitive 3um x 3um rolling-shutter pixels and frame rates of roughly 43fps to 62fps. The NanEyeC sensor uses a high-speed LVDS data interface.

CMOS Micro Camera Module

The sensor is assembled with a unique lens and cover glass, which fits in an endoscope with a diameter of <1.1mm. LVDS interfaces ensure it can drive signals over a cable length of up to 3m. The sensor can communicate over a single bi-directional serial interface.

AMS is targeting the NanEyeC at a wide range of markets. It is expected to find applications in eye tracking for augmented/virtual reality headsets, presence detection and monitoring, robotics (especially very small robots), home security, and anything else that requires unobtrusive detection.

To facilitate the development of NanEyeC applications, AMS recently announced the NanoVision and NanoBerry evaluation kits, which provide ready-made platforms for developing with the sensor.

NanoVision Demo Kit

Although little information is provided at this time, the NanoVision demo kit is based on the Arduino development platform, which raises the possibility of compatibility with the Arduino ecosystem. The kit includes all necessary drivers to interface the sensor’s Single-Ended Interface Mode (SEIM) output to an Arm Cortex-M7 microcontroller.

Different image-processing functions are expected to be possible, such as white balancing, color reconstruction, and background subtraction, which can be used for presence detection among other things. The NanoVision is aimed at engineers accelerating the development of low frame-rate applications.

NanoVision and NanoBerry evaluation kits

NanoBerry Evaluation Kit

The NanoBerry evaluation kit is designed as a typical Raspberry Pi HAT board. It includes a NanEyeC image sensor built into an add-on board that attaches to the Raspberry Pi, along with firmware for interfacing with the Pi.

The NanoBerry board is expected to be compatible with any of the Arm Cortex-A53-based Raspberry Pi boards, which include the Raspberry Pi 2 and 3 series; it is unclear whether it supports the Raspberry Pi 4, which uses the Arm Cortex-A72 processor.

Unlike the NanoVision, the NanoBerry is targeted at applications demanding higher frame rates, such as object detection, object tracking, and computer vision functions provided by the OpenCV library.
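As a rough illustration of the kind of pipeline such a kit targets, here is a generic OpenCV presence-detection sketch in Python. AMS has not published the NanoBerry software interface, so the code assumes frames arrive through a standard OpenCV capture device; it is not the kit's actual API.

```python
# Generic OpenCV presence-detection sketch (background subtraction), of the kind
# the NanoBerry kit is aimed at. The NanEyeC/NanoBerry driver interface is not
# public, so a standard OpenCV capture device (index 0) stands in for the sensor.
import cv2

cap = cv2.VideoCapture(0)                          # hypothetical frame source
subtractor = cv2.createBackgroundSubtractorMOG2()  # learns the static background

while True:
    ok, frame = cap.read()
    if not ok:
        break

    mask = subtractor.apply(frame)                 # white pixels = moving objects
    if cv2.countNonZero(mask) > 0.02 * mask.size:  # >2% of pixels changed
        print("presence detected")

    cv2.imshow("foreground mask", mask)
    if cv2.waitKey(1) & 0xFF == ord("q"):          # press 'q' to quit
        break

cap.release()
cv2.destroyAllWindows()
```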

The NanoVision board is available now to customers on request, while the NanoBerry kit is expected to be available to customers from Q1 2020. More information about the kits is available on the announcement page and the product page.


https://www.theatlantic.com/health/archive/2019/12/sleep-cold/604111/

Your Bedroom Is Too Hot

It’s a classic situation among couples: One person says the bedroom is too cold. The other says it’s too hot. There is a bitter battle for control of the thermostat. Both people say things they regret.

One person—let’s call her Sharon—starts spending a little too much time with your best friend, Greg. You try to talk to Greg about it at the YMCA, but he just shrugs, like, What am I supposed to do? Then he says you should listen to Sharon about turning up the bedroom temperature.

Moments like this are the reason science exists: to prove other people wrong. What is the ideal temperature for a bedroom?

This question turns out to matter even beyond the simple issue of relationship-destroying tension. Sleep quality affects our health, cognitive functioning, and financial well-being. Extreme temperatures obviously disrupt sleep—recall a summer night spent sweating through sheets, or a winter night spent curled into a tight ball to preserve heat, and being noticeably bleary the next morning. More often, the influence is subtler. Many of us could probably improve the quality of our sleep by being more attentive to temperature.

Sharon is wrong, and I found studies to prove that her whims affect other people in very real ways. Anyone complaining about it being too hot in the bedroom is not just being “a whining loser.” People who sleep in hot environments have been found to have elevated levels of the stress hormone cortisol the next morning. Researchers also recently posited that patients sleep so poorly in hospital ICUs in part because the rooms are too warm.

Those who sleep in cold environments, meanwhile, tend to fare better. A study of people with a sleep disorder found that they slept longer in temperatures of 61 degrees Fahrenheit versus 75 degrees. The cold-sleepers were also more alert the next morning. The basic physiology is that your body undergoes several changes at night to ease you into sleep: Your core and brain temperatures decrease, and both blood sugar and heart rate drop. Keeping a bedroom hot essentially fights against this process. Insomnia has even been linked to a basic malfunctioning of the body’s heat-regulation cycles—meaning some cases could be a disorder of body temperature.

In light of this physiology, sleep experts unanimously suggest keeping your bedroom cooler than the standard daytime temperature of your home. There is no universally accepted temperature that is the correct one, but various medical entities have suggested ideal temperature ranges. The most common recommendation, cited by places like the Cleveland Clinic and the National Sleep Foundation, is 60 to 67 degrees Fahrenheit. Within that range, experts vary. A neurologist in Virginia told Health.com that the magic number is 65. Others have advised an upper limit of 64.

The U.S. Department of Energy recommends keeping your home at 68 degrees during the day and “lower while you’re asleep.” That guideline is based on money, not health: It was originally suggested by President Richard Nixon as a way of conserving oil during an embargo. In 1977, President Jimmy Carter went further, suggesting 65 degrees in daytime and 55 at night. He ordered that the White House thermostat be lowered accordingly, and subsequently extended the rule to all public buildings. The change was estimated to have saved around 300,000 gallons of oil daily.

Even though no one was fined under the thermostat rule, Ronald Reagan promptly undid it in 1981, citing “unnecessary regulatory burden.” No such executive thermoregulatory fiats have since been attempted. If you want to work and sleep in a sauna-like sweat box, that is your God-given right as a red-blooded American. But it should be done with the knowledge that thermostat decisions affect far more than one’s own personal sleep. The burning of fossil fuels contributes to the air pollution that kills millions of people every year, and the health effects of climate change are far-reaching.

As for individual health guidelines, human variation makes giving any specific number almost impossible—and borderline irresponsible. Different temperatures will suit different people differently. At the same time, a range like “60 to 67 degrees” can feel nebulously broad. It’s less satisfying than a single number, and it doesn’t solve the bed-partner argument. So I will say this: 60 degrees is the correct temperature for winter sleep. Anything warmer is incorrect.

 

If 60 degrees is simply intolerable, physically or existentially, the National Sleep Foundation recommends sleeping in socks or putting a hot water bottle at your feet. Or maybe wear a warm hat, which also prevents bed head. Regulating the temperature of just your feet or head is more precise than adding blankets, and more efficient than trying to regulate an entire room. (In truth, you could go much, much colder on the thermostat by simply adding more clothing and blankets. I’ve slept very well while camping in frigid weather with the right sleeping bag. I’ve also felt on the brink of death in similar circumstances with the wrong sleeping bag.)

A final caveat is the little matter of summer. Should people use vicious amounts of air-conditioning to get the temperature down? Definitely not. Summer sleeping is not as ideal, and many people do report getting worse sleep then. But it’s possible to sleep well outside that ideal temperature range. A lot can be accomplished with a strategically placed fan. It’s also definitely possible to train oneself to sleep without covers—without a sheet, even.

Anyone who shares a bed with you may not be immediately comfortable sleeping a few degrees colder. It’s not a subject to be broached aggressively. But as in most cases of habit alteration, gradual change is sometimes the key to making the impossible possible. Maybe try dropping the thermostat by one degree a week for the next four weeks. Like easing your way into a tepid swimming pool, before you know it, you may develop a new sense of normal.

If not, you can go back to burning lots of energy and spending lots of money. But if it works, the collective result of just a few degrees of change could mean significant benefit to the economy, personal and public health, and the environment—not to mention the sustainability of romantic and nonromantic bedroom-sharing arrangements.

JAMES HAMBLIN, MD, is a staff writer at The Atlantic. He hosts the video series If Our Bodies Could Talk and is the author of a book by the same title.

https://interestingengineering.com/the-brain-can-detect-touch-through-tools-confirms-new-research

The Brain Can Detect Touch through Tools, Confirms New Research

A new study explores the impressive capacity of the brain to feel contact on a foreign object.


We have all experienced it before. You are holding some kind of tool that comes into contact with something else, and you feel that contact as if it were with your own skin.

We never think much of this process because it is so instinctive, but it is an impressive feat of the brain. Now Luke Miller, a cognitive neuroscientist, and a few of his colleagues have written a new paper exploring this phenomenon.


A thorough study

The study tested 16 subjects to see where they felt touches on a one-meter-long wooden rod. Across 400 trials, participants pinpointed the location of the touch with 96% accuracy.

But that's not all: the researchers also recorded the participants' cortical brain activity using scalp electrodes. They discovered that the cortex quickly processed where the tool was touched.

These results indicate that the neural mechanisms for detecting touch location on tools "are remarkably similar to what happens to localize touch on your own body," Alessandro Farnè, a neuroscientist at the Lyon Neuroscience Research Center in France and senior author of the study, told Scientific American.

Taking the study further

To take the study further, the researchers repeated the experiment on a patient who had lost proprioception in her right arm. They found that she was also able to spot where the rod was touched and showed brain activity similar to that of the healthy subjects.

That outcome “suggests quite convincingly that vibration conveyed through the touch, which is spared in the patient, is sufficient for the brain to locate touches on the rod,” Farnè said.

The end result of the study is the conclusion that people use the same neural processes to locate touch on a tool as they do to locate touch on the body. "We propose that an elementary strategy the human brain uses to sense with tools is to recruit primary somatosensory dynamics otherwise devoted to the body," the authors write in the study.

https://www.sciencedaily.com/releases/2019/12/191227085239.htm

Injection of virus-delivered gene silencer blocks ALS degeneration, saves motor function

Date: December 27, 2019
Source: University of California – San Diego
Summary: A novel spinal therapy/delivery approach prevented disease onset in a mouse model of the neurodegenerative disease ALS and blocked progression in adult animals already showing symptoms.

Writing in Nature Medicine, an international team headed by researchers at University of California San Diego School of Medicine describes a new way to effectively deliver a gene-silencing vector to adult amyotrophic lateral sclerosis (ALS) mice, resulting in long-term suppression of the degenerative motor neuron disorder if the treatment vector is delivered prior to disease onset, and blockage of disease progression in adult animals if treatment is initiated after symptoms have already appeared.

The findings are published in the December 23, 2019 online issue of the journal Nature Medicine. Martin Marsala, MD, professor in the Department of Anesthesiology at UC San Diego School of Medicine and a member of the Sanford Consortium for Regenerative Medicine, is senior author of the study.

ALS is a neurodegenerative disease that affects nerve cells in the brain and spinal cord. Motor neurons responsible for communicating movement are specifically harmed, with subsequent, progressive loss of muscle control affecting the ability to speak, eat, move and breathe. More than 5,000 Americans are diagnosed with ALS each year, with an estimated 30,000 persons currently living with the disease. While there are symptomatic treatments for ALS, there is currently no cure. The majority of patients succumb to the disease two to five years after diagnosis.

There are two types of ALS, sporadic and familial. Sporadic is the most common form, accounting for 90 to 95 percent of all cases. It may affect anyone. Familial ALS accounts for 5 to 10 percent of all cases in the United States, and is inherited. Previous studies show that at least 200 mutations of a gene called SOD1 are linked to ALS.

The SOD1 gene normally serves to provide instructions for making an enzyme called superoxide dismutase, which is widely used to break down superoxide radicals — toxic oxygen molecules produced as a byproduct of normal cell processes. Previous research has suggested that SOD1 gene mutations may result in ineffective removal of superoxide radicals or create other toxicities that cause motor neuron cell death, resulting in ALS.

The new approach involves injecting shRNA — an artificial RNA molecule capable of silencing or turning off a targeted gene — that is delivered to cells via a harmless adeno-associated virus. In the new research, single injections of the shRNA-carrying virus were placed at two sites in the spinal cord of adult mice expressing an ALS-causing mutation of the SOD1 gene, either just before disease onset or when the animals had begun showing symptoms.

Earlier efforts elsewhere had involved introducing the silencing vector intravenously or into cerebrospinal fluid in early symptomatic mice, but disease progression, while delayed, continued and the mice soon died. In the new study, the single subpial injection (delivered below the pia mater, the delicate innermost membrane enveloping the brain and spinal cord) markedly mitigated neurodegeneration in pre-symptomatic mice, which displayed normal neurological function with no detectable disease onset. The functional effect corresponded with near-complete protection of motor neurons and other cells, including the junctions between neurons and muscle fibers.

In adult mice already displaying ALS-like symptoms, the injection effectively blocked further disease progression and degeneration of motor neurons.

In both approaches, the affected mice lived without negative side effects for the length of the study.

“At present, this therapeutic approach provides the most potent therapy ever demonstrated in mouse models of mutated SOD1 gene-linked ALS,” said senior author Martin Marsala, MD, professor in the Department of Anesthesiology at UC San Diego School of Medicine.

“In addition, effective spinal cord delivery of AAV9 vector in adult animals suggests that the use of this new delivery method will likely be effective in treatment of other hereditary forms of ALS or other spinal neurodegenerative disorders that require spinal parenchymal delivery of therapeutic gene(s) or mutated-gene silencing machinery, such as in C9orf72 gene mutation-linked ALS or in some forms of lysosomal storage disease.”

The research team also tested the injection approach in adult pigs, whose spinal cord dimensions are similar to humans, for safety and efficacy. Using an injection device developed for use in adult humans, they found the procedure could be performed reliably and without surgical complications.

Marsala said next steps involve additional safety studies with a large animal model to determine the optimal, safe dosage of treatment vector. “While no detectable side effects related to treatment were seen in mice more than one year after treatment, the definition of safety in large animal species more similar to humans is a critical step in advancing this treatment approach toward clinical testing.”


Story Source:

Materials provided by University of California – San Diego. Original written by Scott LaFee. Note: Content may be edited for style and length.


Journal Reference:

  1. Mariana Bravo-Hernandez, Takahiro Tadokoro, Michael R. Navarro, Oleksandr Platoshyn, Yoshiomi Kobayashi, Silvia Marsala, Atsushi Miyanohara, Stefan Juhas, Jana Juhasova, Helena Skalnikova, Zoltan Tomori, Ivo Vanicky, Hana Studenovska, Vladimir Proks, PeiXi Chen, Noe Govea-Perez, Dara Ditsworth, Joseph D. Ciacci, Shang Gao, Wenlian Zhu, Eric T. Ahrens, Shawn P. Driscoll, Thomas D. Glenn, Melissa McAlonis-Downes, Sandrine Da Cruz, Samuel L. Pfaff, Brian K. Kaspar, Don W. Cleveland, Martin Marsala. Spinal subpial delivery of AAV9 enables widespread gene silencing and blocks motoneuron degeneration in ALS. Nature Medicine, 2019; DOI: 10.1038/s41591-019-0674-1

 


https://phys.org/news/2019-12-insight-cells-dna.html

New insight into how dividing cells control the separation of their DNA

Astrin (in green) secures chromosome-microtubule attachments during cell division. Credit: Queen Mary, University of London

A study published today in the journal eLife has shown that a protein called Astrin is important for the timely and even separation of chromosomes during cell division.

During cell division our chromosomes, containing a duplicated set of DNA, must be split equally between the newly created cells. To ensure this equal segregation of DNA, chromosomes must be correctly attached to microscopic rope-like structures, known as microtubules, which pull them apart.

However, the question of how stable attachments between chromosomes and microtubules are maintained while the chromosomes are being forced apart has long puzzled scientists.

Securing connections

In this study, the researchers found that the Astrin protein complex is recruited to correct attachments and works to secure them further. Professor Viji Draviam, Professor of cell and molecular biology at Queen Mary and lead author of the study, said, "We discovered that Astrin arrives at the attachment site with an enzyme called PP1 when proper attachments have been made. Together these proteins rapidly secure attachments so the attachment site is able to resist pulling forces, which are separating the DNA. This mechanism only acts on correct attachments, which helps make sure cells end up with the right number of chromosomes after division."

Dr. Duccio Conti, postdoctoral researcher at Queen Mary and first author of the paper, added, “Whilst we originally thought Astrin would be important to cement attachments, we were surprised to find that it actually works as a dynamic lock, ensuring attachments are not stabilised prematurely.”

The interdisciplinary research team, which included structural and evolutionary biologists, were also able to pinpoint how and where Astrin binds to the PP1 enzyme.

Important milestone

Cell division in our bodies is heavily controlled, but mistakes still happen and cells can end up with irregular numbers of chromosomes.

Previous research has suggested that cancers with too few or too many chromosomes are more aggressive and show resistance to multiple drugs. Therefore, understanding how errors in the separation of the chromosomes occur and the mechanisms that prevent mistakes could help scientists to develop treatments for the disease.

Professor Richard Pickersgill, professor of structural biology and head of the School of Biological and Chemical Sciences at Queen Mary, said: “Right now our cells are dividing to replace lost and damaged cells; it’s a wonderful process essential for life but also incredibly complex—over 100 proteins are involved in orchestrating the organisation and segregation of chromosomes alone. There is much still to discover about the detailed mechanism of chromosome segregation, but this work, which explains the role of Astrin in strengthening microtubule attachments, is an important milestone along the way.”

Dr. Emily Armstrong, research information manager at Cancer Research UK, said: “This research is vital to understand the complex processes that take place when our cells divide. Mistakes in this chain of events can lead to genetic errors that go on to cause cancer. Studies like this help us to understand what goes wrong when this disease develops, underpinning our efforts to prevent, diagnose and treat cancer more effectively.”




More information: Duccio Conti et al. Kinetochores attached to microtubule-ends are stabilised by Astrin bound PP1 to ensure proper chromosome segregation, eLife (2019). DOI: 10.7554/eLife.49325


https://medicalxpress.com/news/2019-12-mindfulness-video-game-areas-brain.html

Mindfulness video game changes areas of the brain associated with attention

The game leads players through relaxing landscapes such as ancient Greek ruins and outer space. Credit: Marianne Spoon / Center for Healthy Minds

With an estimated 97 percent of adolescents playing video games in their free time, there is growing potential to design games as tools for attention-building instead of attention-busting.

A research team at the Center for Healthy Minds at the University of Wisconsin–Madison and the University of California, Irvine, designed a video game to improve mindfulness in middle schoolers and found that when young people played the game, they showed changes in areas of their brains that underlie attention.

“Most educational video games are focused on presenting declarative information: various facts about a particular subject, like biology or chemistry,” says Elena Patsenko, a research scientist at the Center for Healthy Minds and lead author on the recently published paper. “Our aim is different. We want to actually change the cognitive or emotional processes—how people think or process information they’re trying to learn.”

The game, called “Tenacity,” was designed for middle schoolers and requires players to count their breaths by tapping a touch screen to advance. It leads players through relaxing landscapes such as ancient Greek ruins and outer space.

Players tap once per breath for the first four breaths, then tap twice on every fifth breath. Players earn more points and advance in the game by counting sequences of five breaths accurately.

This trains mindfulness, which is a state of awareness of the present moment, by encouraging players to focus on their breaths.
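For readers curious how such a mechanic might be scored, here is a toy Python sketch of the rule as described: one tap for each of the first four breaths, two taps on the fifth, with points for each accurately counted sequence of five. The actual game's scoring logic has not been published, so the point values here are invented.

```python
# Toy sketch of the breath-counting rule described above: one tap for each of
# the first four breaths, two taps on the fifth. The real game's scoring is not
# public, so the points awarded per accurate sequence are made up.
def score_breath_counting(taps_per_breath, points_per_sequence=10):
    """taps_per_breath: one entry per breath, giving how many taps the player made."""
    expected_cycle = [1, 1, 1, 1, 2]          # pattern for one five-breath sequence
    score = 0
    for start in range(0, len(taps_per_breath) - 4, 5):
        if taps_per_breath[start:start + 5] == expected_cycle:
            score += points_per_sequence      # accurate sequence earns points
    return score

# Two accurate sequences followed by a miscount on the fifth breath -> 20 points.
print(score_breath_counting([1, 1, 1, 1, 2,  1, 1, 1, 1, 2,  1, 1, 1, 1, 1]))
```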

In the study, 95 middle school-aged youth were randomly assigned to one of two groups: either the Tenacity gameplay group or a control group that played the game "Fruit Ninja," another attention-demanding game that does not teach breath counting or aspects of mindfulness. Kids in each group were instructed to play their assigned game for 30 minutes per day for two weeks, while researchers conducted brain scans with participants before and after the two-week period.

Researchers found that adolescents in the Tenacity group had changes in the connectivity between the left dorsolateral prefrontal cortex and the left inferior parietal cortex, two brain areas critical for attention.

“Tenacity” is a video game developed for research purposes by the Center for Healthy Minds at the University of Wisconsin–Madison and colleagues at the University of California, Irvine. Credit: UW–MADISON

These changes in the brain were associated with improvements on an attention task in the lab and were found only in the group playing Tenacity. Kids who played Fruit Ninja showed none of these changes.

"Training attention has been criticized in the scientific community because we often use a particular task to train attention and see improvement in that task alone, which doesn't translate into other tasks or day-to-day activities," says Patsenko. "Here, we trained adolescents with Tenacity (a breath-counting game) and tested them with another, unrelated attention task in the lab. We found that brain changes following two weeks of gameplay were associated with improvement in performance on that unrelated attentional task."

The public is increasingly interested in meditation and mindfulness training. There are several smart phone apps on the market that are predominantly geared toward adult populations, including programs developed by the Center for Healthy Minds. However, video games can be a tool to engage younger people in mindfulness training.

The capacity to voluntarily control attention and minimize distraction has been linked to people’s emotional health and is foundational to learning.

"This study illustrates that changes in objective measures of brain function and behavior are achievable with relatively short amounts of practice on a novel video game," says Richard Davidson, co-author on the paper and the William James and Vilas Professor of Psychology and Psychiatry. "Video games may be a powerful medium for training attention and other positive qualities in teenagers, and even small amounts of practice induce neuroplastic changes."

The project reflects a larger trend toward developing games for the greater good, says Constance Steinkuehler, a professor of informatics at UCI, who led creation of the game while at UW–Madison.

“Games for impact have entered the mainstream, affecting both consumers and the industry,” she says. “Good designs and solid research move not only players but also future designers as well. This work lays a great foundation for wellness interventions for kids.”

The game, originally developed for research, is not supported for public use at this time.




https://electrek.co/2019/12/28/5-things-tesla-bring-market-2020/

2019 was a big year for Tesla with a significant increase in production and several new product launches. In 2020, Tesla is bringing to market some of those products and more.

Here we look at 5 things Tesla is bringing to market in 2020:

1 – Tesla Model Y

The Model Y is likely going to become Tesla’s quickest turnaround from unveiling to production.

The electric crossover was unveiled in March 2019, and with Tesla on track for "volume production" in mid-2020, production is likely to start in early 2020 – less than a year after the unveiling.

At its launch earlier this year, the Model Y received mixed reviews.

Some thought it was nothing more than a slightly bigger Model 3 with a hatch, while others figured that's exactly what a lot of people want.

The compact SUV space has been booming, and several automakers have already launched electric offerings in the segment. The Model Y is going to be Tesla's entry, and if the success the Model 3 has had in the mid-size sedan market is any indication, it has the potential to quickly take over.

Recently, we learned that Tesla asked suppliers to accelerate Model Y part deliveries.

The Tesla Model Y starts at $48,000 for the Long Range version, which can get up to 300 miles of range. A dual motor all-wheel-drive version is also available for a $4,000 premium.

At this time, we expect the first deliveries to happen in February-March 2020.

2 – Tesla Model S refresh and Plaid

When we first heard of the plan to refresh the Model S and Model X interior, it was the summer of 2018, and at that time the refresh was planned for the following year.

Tesla ended up focusing on the Model 3 ramp-up and bringing Model Y to production.

It pushed back the project as Model S and Model X became less important vehicle programs, but we believe it is still in Tesla’s plans.

The automaker is also working on introducing its new ‘Plaid’ tri-motor powertrain and we believe Tesla might be planning to release both the performance improvements and the new interior at the same time.

We believe that Tesla is going to breathe some life into the Model S and Model X programs with a new interior and the new top-performance Plaid option around the end of summer 2020.


3 – Tesla Semi

When Tesla unveiled its all-electric heavy-duty truck, the Tesla Semi, back in 2017, the automaker announced that it would be released in 2019.

The company has since delayed the all-electric truck despite having taken thousands of reservations with deposits worth between $5,000 and $20,000 each.

Earlier this year, Tesla said that it plans for electric truck production to start ‘with limited volumes’ in 2020.

The automaker has yet to confirm a production facility for the Tesla Semi. We heard rumors that Tesla plans to build the truck at Gigafactory 1 in Nevada.

That’s likely the case for the powertrain at least since all new Tesla powertrain production happens at Gigafactory 1, but there are also other rumors going around about Tesla partnering with an outside manufacturer for the production of the body of the electric truck.

CEO Elon Musk said that Tesla aims to manufacture 100,000 electric trucks per year.

Over the past two years, Tesla has been taking reservations for the electric truck and said that the production versions will have 300-mile and 500-mile range versions for $150,000 and $180,000 respectively.

However, Musk said that they found opportunities to extend that range during testing, and that the Tesla Semi production version will have closer to 600 miles of range.

4- New Tesla battery

Earlier this year, Tesla CEO Elon Musk said that they built the Model 3 to last as long as a commercial truck – a million miles – and that the battery modules should last between 300,000 and 500,000 miles.

The CEO claimed that Tesla has a new battery coming up next year that will last a million miles.

Furthermore, everything points to Tesla not only developing a new battery chemistry for manufacturers to build for it, but also planning to build those batteries itself.

Over the last year, the automaker has acquired several companies with experience building batteries or battery manufacturing equipment.

Tesla officials have already all but confirmed that it’s going to manufacture its own battery cells.

We expect Tesla to make an announcement about bringing a new battery to market early in 2020 at a planned ‘Battery and Powertrain Investor Day’ event.

5 – Tesla App Store

While Tesla hasn't officially announced anything on that front, we expect the automaker to bring an app store-like platform to market, allowing developers to release apps and games for Tesla vehicles.

Since Tesla launched the Model S in 2012, the automaker has talked on and off about releasing a software development kit (SDK) to create a full third-party app ecosystem on its giant center touchscreens.

The automaker has since made an unofficial API that enables some very basic third-party apps, but it hasn’t released an SDK.

This year, Musk said that Tesla could open a platform for apps and games as the fleet grows and the fleet is going to grow a lot in 2020.

I believe Tesla’s fleet is going to grow to over a million vehicles by the end of 2020 and it’s going to be enough to make an app store viable.

Bonus: Tesla Roadster?

Officially, the next-gen Tesla Roadster is supposed to go to market in 2020, but Musk has tempered expectations on the timing in recent comments.

I’d be surprised if the new Roadster goes into production in 2020, but I think we might see Tesla do some interesting things with an updated prototype in late 2020.

What about you? What else do you think Tesla is going to launch in 2020? Let us know in the comment section below.