For DNA to be read, replicated or repaired, DNA molecules must open themselves. This happens when the cells use a catalytic protein to create a hydrophobic environment around the molecule. Credit: Yen Strandqvist/Chalmers University of Technology
Researchers at Chalmers University of Technology, Sweden, have disproved the prevailing theory of how DNA binds itself. It is not, as is generally believed, hydrogen bonds which bind together the two sides of the DNA structure. Instead, water is the key. The discovery opens doors for new understanding in research in medicine and life sciences. The findings are published in PNAS.
DNA is constructed of two strands consisting of sugar molecules and phosphate groups. Between these two strands are nitrogen bases, the compounds that make up genes, with hydrogen bonds between them. Until now, it was commonly thought that those hydrogen bonds held the two strands together.
But now, researchers from Chalmers University of Technology show that the secret to DNA’s helical structure may be that the molecules have a hydrophobic interior, in an environment consisting mainly of water. The environment is therefore hydrophilic, while the DNA molecules’ nitrogen bases are hydrophobic, pushing away the surrounding water. When hydrophobic units are in a hydrophilic environment, they group together to minimize their exposure to the water.
The role of the hydrogen bonds, which were previously seen as crucial to holding DNA helixes together, appear to be more to do with sorting the base pairs so that they link together in the correct sequence. The discovery is crucial for understanding DNA’s relationship with its environment.
“Cells want to protect their DNA, and not expose it to hydrophobic environments, which can sometimes contain harmful molecules,” says Bobo Feng, one of the researchers behind the study. “But at the same time, the cells’ DNA needs to open up in order to be used.”
“We believe that the cell keeps its DNA in a water solution most of the time, but as soon as a cell wants to do something with its DNA, like read, copy or repair it, it exposes the DNA to a hydrophobic environment.”
Reproduction, for example, involves the base pairs dissolving from one another and opening up. Enzymes then copy both sides of the helix to create new DNA. When it comes to repairing damaged DNA, the damaged areas are subjected to a hydrophobic environment, to be replaced. A catalytic protein creates the hydrophobic environment. This type of protein is central to all DNA repairs, meaning it could be the key to fighting many serious sicknesses.
Understanding these proteins could yield many new insights into fighting resistant bacteria, for example, or potentially curing cancer. Bacteria use a protein called RecA to repair their DNA, and the researchers believe their results could provide new insight into how this process works—potentially offering methods for stopping it and thereby killing the bacteria.
In human cells, the protein Rad51 repairs DNA and fixes mutated DNA sequences, which otherwise could lead to cancer. “To understand cancer, we need to understand how DNA repairs. To understand that, we first need to understand DNA itself,” says Bobo Feng. “So far, we have not, because we believed that hydrogen bonds were what held it together. Now, we have shown that instead it is the hydrophobic forces which lie behind it. We have also shown that DNA behaves totally differently in a hydrophobic environment. This could help us to understand DNA, and how it repairs. Nobody has previously placed DNA in a hydrophobic environment like this and studied how it behaves, so it’s not surprising that nobody has discovered this until now.”
The researchers also studied how DNA behaves in an environment that is more hydrophobic than normal, a method they were the first to experiment with. They used the hydrophobic solution polyethylene glycol, and changed the DNA’s surroundings step-by-step from the naturally hydrophilic environment to a hydrophobic one. They aimed to discover if there is a limit where DNA starts to lose its structure, when the DNA does not have a reason to bind, because the environment is no longer hydrophilic. The researchers observed that when the solution reached the borderline between hydrophilic and hydrophobic, the DNA molecules’ characteristic spiral form started to unravel.
Upon closer inspection, they observed that when the base pairs split from one another (due to external influence, or simply from random movements), holes are formed in the structure, allowing water to leak in. Because DNA wants to keep its interior dry, it presses together, with the base pairs coming together again to squeeze out the water. In a hydrophobic environment, this water is missing, so the holes stay in place.
“Hydrophobic catalysis and a potential biological role of DNA unstacking induced by environment effects” is published in Proceedings of the National Academy of Sciences (PNAS).
More information: Bobo Feng et al, Hydrophobic catalysis and a potential biological role of DNA unstacking induced by environment effects, Proceedings of the National Academy of Sciences (2019). DOI: 10.1073/pnas.1909122116
If you want the knowledge of Wikipedia on your Apple Watch, MiniWiki is the best way to go. The watch app is developed by Will Bishop, the maker of Chirp for Twitter and Nano for Reddit, so Will knows his Apple Watch apps. The first big update to MiniWiki since launching is out this week, and it adds a highly requested feature to the Wikipedia watch app.
MiniWiki landed on the Apple Watch earlier this year with core features like searching for specific topics and even finding entries based on nearby places wherever you travel. You might not read a full Wikipedia article from your Apple Watch — there’s a bookmarking feature for saving entries to iPhone for that — but you can certainly learn an interesting fact or piece of trivia in a few seconds with MiniWiki.
MiniWiki version 1.1 adds two new features to the unofficial Wikipedia app for Apple Watch.
MiniWiki already lets you search for specific articles or find articles based on location or popularity, and now MiniWiki includes a new ‘Random’ button for testing your luck and learning something totally new. Did you know Swish is a mobile payment system in Sweden? Neither did I, but now I do after one tap of the new Random button in MiniWiki.
The new update also introduces support for a new option that Will calls “by far the most requested feature” for MiniWiki: the ability to set a different language inside MiniWiki than the Apple Watch language. Just use the new “Language” picker inside the Settings section of MiniWiki to start using the app in a language independent from the Apple Watch language.
MiniWiki 1.1 also adds these changes and improvements:
If an article you try and read in your selected language is not available, you’ll be able to try it in English easily.
‘Tip’ and ‘Pro’ buttons now have some fun firework animations on the iOS app.
The image-finding algorithm is better now, so you should see images more often and see more relevant images.
Download MiniWiki for Apple Watch for free on the App Store. You can also support the unofficial Wikipedia Apple Watch app with an optional in-app purchase that unlocks offline articles, bookmarking to iPhone, and sending entries from Apple Watch to iPhone using Handoff.
Almost two years ago, Dennis Degray sent an unusual text message to his friend. “You are holding in your hand the very first text message ever sent from the neurons of one mind to the mobile device of another,” he recalls it read. “U just made history.”
Degray, 66, has been paralysed from the collarbones down since an unlucky fall over a decade ago. He was able to send the message because in 2016 he had two tiny squares of silicon with protruding metal electrodes surgically implanted in his motor cortex, the part of the brain that controls movement. These record the activity in his neurons for translation into external action. By imagining moving a joystick with his hand, he is able to move a cursor to select letters on a screen. With the power of his mind, he has also bought products on Amazon and moved a robotic arm to stack blocks.
Degray has been implanted with these devices, known as Utah arrays, because he is a participant in the BrainGate programme, a long-running multi-institution research effort in the US to develop and test novel neurotechnology aimed at restoring communication, mobility and independence in people whose minds are fine but who have lost bodily connection due to paralysis, limb loss or neurodegenerative disease.
But while the Utah array has proved that brain implants are feasible, the technology has a long way to go. Degray had open brain surgery to place his. The system is not wireless – a socket protrudes from his skull through which wires take the signal to computers for decoding by machine-learning algorithms. The tasks that can be done and how well they can be executed are limited because the system only records from a few dozen to a couple of hundred neurons out of an estimated 88bn in the brain (each electrode typically records from between one and four neurons).
A BrainGate electrode array with a dime for size comparison. Photograph: Matthew McKee/Brown University
And it is unlikely to last for ever. Scar tissue, the brain’s response to the damage caused by inserting the device, gradually builds up on the electrodes, leading to a progressive decline in signal quality. And when the research sessions – which take place twice a week for Degray in his living facility in Palo Alto, California – come to an end, it will be disconnected and Degray’s telepathic powers cease to be.
Barely a couple of dozen people have been implanted with Utah arrays worldwide. Great progress has been made, says Leigh Hochberg, a neurologist at Massachusetts general hospital and an engineering professor at Brown University who co-directs the BrainGate programme, but “a system that patients can use around the clock that reliably provides complete, rapid, intuitive brain control over a computer does not yet exist”.
Help may be at hand. An injection of Silicon Valley chutzpah has energised the field of brain-computer or brain-machine interfaces in recent years. Buoyed by BrainGate and other demonstrations, big-name entrepreneurs and companies and scrappy startups are on a quest to develop a new generation of commercial hardware that could ultimately help not only Degray and others with disabilities, but be used by all of us. While some, including Facebook, are pursuing non-invasive versions, wireless neural implant systems are also being worked on.
In July Elon Musk, best known as the CEO of the electric car company Tesla, presented details of an implantable wireless system that his company Neuralink is building. It is already being studied in monkeys, Musk revealed, and it is hoped that human trials will start before the end of 2020. To date, Neuralink has received $158m in funding, $100m of it from Musk.
While the implant being developed is still the same size as one of the Utah arrays in Degray’s brain, it has far more electrodes, meaning it can record from far more neurons. While a Utah array – of which up to four or five can be inserted – typically has 100 electrodes, Neuralink says its version will have more like 1,000. And the company thinks it is feasible to place up to 10. Very thin threads of flexible biocompatible polymer material studded with electrodes would be “sewn in” by a robot to avoid piercing microvessels, which Neuralink hopes would ameliorate scarring, thereby increasing how long the device lasted. “Our goal is to record from and stimulate spikes in neurons in a way that is orders of magnitude more than anything that has been done to date and safe and good enough that it is not like a major operation,” said Musk in his presentation, adding that the procedure would be more like laser eye surgery than brain surgery. Medical concerns drive the device’s development, according to Musk, but he also worries about the threat posed by artificial intelligence and believes this could provide a way of keeping up with it.
There are smaller rival startups too. Paradromics, like Neuralink, is focused on many more and smaller electrodes but is aiming for an even higher density of probes over the face of its neural implant. In form, their device would look closer to the Utah array – a bed of needles with metal electrodes – and there would be no robotic surgery. “We want to hit the market as soon as possible,” says founder and CEO Matt Angle adding the hope is to begin a clinical trial in the early 2020s. The company has raised about $25m in funding to date including significant amounts from the Pentagon’s research agency, Darpa, which grew interested in BCIs after it realised the sophisticated robotic limbs it was building for injured soldiers returning from overseas needed brain control.
Dennis Degray uses Utah array implants to manipulate the cursor on a computer screen. Photograph: PBS
Synchron, based in Australia and Silicon Valley, has a different approach. The company, which has received $21m in funding to date, including some from Darpa, last week revealed that the first clinical trial of its Stentrode device had begun in Australia – ahead of both Neuralink and Paradromics.
The device avoids open brain surgery and scarring because it is inserted using a stent through a vein in the back of the neck. Once in position next to the motor cortex, the stent splays out to embed 16 metal electrodes into the blood vessel’s walls from which neuronal activity can be recorded. So far in the trial one patient – paralysed with motor neurone disease – has been implanted, with four others set to follow. The device’s safety will be studied along with how well the system allows brain control of a computer for typing and texting. While it can only read the aggregate activity of a population of neurons, of which it will take in about 1,000, there is enough data to make a system useful for patients – and less nuance in the signal actually makes it more stable and robust, says founder and CEO Tom Oxley.
Meanwhile, challenges remain for Neuralink and Paradromics. Whether scarring can be mitigated by very small electrodes is yet to be seen. There is also the issue of the electrodes being dissolved and corroded by the body – a problem that gets worse the smaller they are. How long Neuralink’s new polymer probes will last is unknown.
“No one is going to be super impressed with the startup companies until they start recording their lifetimes in years. The Utah array has a lot of issues – but you do measure its lifetime in years,” says Cynthia Chestek, a neural interface researcher at the University of Michigan. Then, even if we are able to record all these extra neuron signals, could we decode them? “We have no idea how the brain works,” says Takashi Kozai, a biomedical engineer at the University of Pittsburgh who studies implantable technologies. “Trying to decode that information and actually produce something useful is a huge problem.” Chestek agrees that more understanding of how neurons compute things would be helpful, but “every algorithm out there” would suddenly just start doing better with a few hundred extra neurons.
None of the three companies sees nonmedical applications in the short term, but argue that the implant technology could gradually branch out into the general population as people start seeing how transformational it can be.
The most obvious application may be brain-controlled typing. Oxley imagines a scenario where people who have grown up texting and typing – and are wholly dependent on their fingers for that – lose functionality as they age. Frustrated that they can’t maintain their speed, they may seek other ways to preserve their technological capability. Eventually a tipping point will occur as people see BCIs working better than the human body. “If the technology becomes safe, it’s easy to use and it provides you with superior technology control, there will be people who will want to pay for that,” says Oxley.
Of uses beyond that, no one is being specific. Brain commands to smart speakers? Brain-controlled car driving? Brain-to-brain communication? Enhanced memory and cognition?
If the technology were to make it outside the medical domain, the military is where we might see it first, says Dr Hannah Maslen, deputy director of the University of Oxford’s Uehiro Centre for Practical Ethics. For example, it might provide silent communication between soldiers or allow activation of equipment by the thinking of certain commands. It is hard to see most people opting to undergo a surgical intervention for recreational or convenience uses, she adds. But at a recent neurotechnology meetup in San Francisco of about two dozen tinkerers, Jonathan Toomim argued it was a logical next step. “We already use devices – our smart phones – that offload a lot of our cognition and augment our memory. This is just bringing the bandwidth between the human brain and those to a higher level,” said the self-described neuroscientist, engineer, entrepreneur and environmentalist, who makes his own neurofeedback gear.
The public should have a clear voice in shaping how neural interface technology is used and regulated over the coming years, concluded a report this month on the topic from the UK Royal Society. One concern is data privacy, though Maslen says this should be tempered by the fact that while BCIs may be portrayed as being able to “mind read” and “decode thoughts” – stoking fears that they will unearth innermost secrets – they are recording from very small areas of the brain mostly related to movement, and require the user’s mental effort to make them work. “Ethical concerns around privacy … don’t apply in such a full way,” she says.
A sewing machine-like robot that inserts electrodes into the brain, under development by Neuralink.
Nonetheless, questions remain. Who owns the brain data and what is it being used for? And “brainjacking”, where a third party could gain control of the system and modify it in ways the brain’s owner has not consented to, is rooted in reality rather than science fiction says Maslen – pacemakers have been hacked before. Paradromics’ Matt Angle wonders to what extent data from BCIs could be used as evidence in court – for example to incriminate someone in the same way a diary or a computer might.
Further ethical issues arise around control and agency. If a brain implant doesn’t get your intention right, to what extent are you as the user of the device responsible for what is “said” or done? And how do we ensure that if a technology confers significant benefits, it is not just the rich who get it?
Society still has a few years to ponder these questions. Neuralink’s aim of getting a human clinical trial up and running by the end of next year is widely considered too ambitious, given what remains unproved. But many experts anticipate that the technology will be available for people with impairments or disabilities within five or 10 years. For nonmedical use, the timeframe is greater – perhaps 20 years. For Leigh Hochberg, the focus has to be on helping those who need it most. Says Degray of Neuralink’s device: “I would have one implanted this afternoon if I could.”
Is there an alternative to implants?
A worn, non-invasive brain computer interface which doesn’t involve brain surgery and can always be taken off may seem attractive. But the skull muffles the reading of neuronal signals. “The physics [of a non-invasive device] are just extremely challenging” says Cynthia Chestek of the University of Michigan.
Some companies are trying anyway. Facebook announced in 2017 it wanted to create a wearable device that would allow typing from the brain at 100 words per minute (as a comparison, Neuralink is striving for 40 words per minute – which is around our average typing speed – and Dennis Degray with his Utah array implant clocks about eight words per minute). This July, researchers at the University of California funded by the social network showed decoding of a small set of full, spoken words and phrases from brain activity in real time for the first time – though it was done with so-called electrocorticography electrodes laid on the surface of the brain via surgery. Meanwhile the company continues to work on how it might achieve the same thing non-invasively and is exploring measuring changing patterns in blood oxygenation – neurones use oxygen when they are active – with near-infrared light.
Also on the case is Los Angeles-based startup Kernel, founded by entrepreneur Bryan Johnson who made millions selling mobile payments company Braintree to PayPal. Kernel, into which Johnson has put $100m, started as a neural implants company but then pivoted to wearables because, Johnson says, the invasive road looked so long. Plenty of non-invasive methods exist for sensing and stimulating brain activity (indeed they form the basis of a large consumer neurotechnology industry). But none, says Johnson, is equal to being bridged into a next-generation interface. New ways are needed, and he believes Kernel has found one others have missed. “We will be ready to share more in 2020,” he says.
But assuming the technical challenges can be surmounted, social factors could still be a barrier, says Anna Wexler, who studies the ethical, legal and social implications of emerging neurotechnology at the University of Pennsylvania. Google Glass failed not because it didn’t work but because people didn’t want to wear a face computer. Will anyone trust Facebook enough to use their device if it does develop one?
Consuming junk food may have long-term effects on spatial memory. Credit: Unsplash
UNSW researchers have found links between junk food consumption and loss of spatial memory in a recent animal study.
Unhealthy eating may have negative long-term effects on spatial memory, a new study by UNSW researchers suggests.
The animal study, published this week in Scientific Reports, investigated cognitive function in rats that alternated between a ‘cafeteria diet‘ of foods high in fat and sugar (like pies, cake, biscuits and chips) and their regular, healthy diet. Over a period of 6 weeks, the rats were fed junk food in intervals of either three, five, or seven consecutive days, separated by their healthy chow diet.
The UNSW researchers found that the rats’ spatial memory recognition deteriorated in increments according to their pattern of access to junk food—the more days in a row they ate junk food, the worse their memory got.
“Anything over three days a week of eating badly impacted memory in these animals,” said Professor Margaret Morris, Head of Pharmacology from the School of Medical Sciences and senior author of the study.
The researchers tested the rats’ spatial memory by first familiarizing them with two objects. They then repositioned one of the objects and monitored the rats’ ability to recognize a change in their environment. A healthy animal, Professor Morris explained, would be more likely to explore the object that had been moved.
“We all know that a healthy diet with minimal junk foods is good for our overall health and performance, but this paper shows that it is critical for optimal brain function as well.”
Professor Morris and her team previously showed diet-related changes in rats’ hippocampuses, which she explained as the part of the brain responsible for helping us find things and navigate spaces. “This particular brain region is important in all of us,” she said. “It’s also already known to be affected in humans by poor diet.”
In addition to the reduced spatial memory recognition, the study also identified physical differences between rats who consumed the junk food on the three-day and five-day intervals.
The rats who were fed the cafeteria diet on the five-day schedule were considerably heavier, longer and had greater fat mass than those on the three-day schedule. Their metabolic profile also bore a closer resemblance to those on the seven-day schedule than those on the three-day schedule.
Lead author of the paper, Dr. Michael Kendig, sees the results as encouraging.
“What it suggests, at least over this relatively short-term study, is that cutting down [an unhealthy diet] even a little bit may have positive effects on cognitive ability,” he said.
“What we’re trying to do is to explore how much an unhealthy diet is likely to damage us,” said Professor Morris. “We want to live and enjoy life, but we do need to temper it with healthy eating most of the time—this study certainly confirms this.”
The study adds to existing research on cognitive function and unhealthy diets—but it differs from the body of evidence in important ways. Many existing studies test animals that have unrestrained access to junk food, which doesn’t resemble how junk food is consumed by humans.
“People tend to look at research where animals have had access to junk food 24/7 and might wonder how relevant those results are,” said Dr. Kendig. “That’s not really how people eat. We tend to alternate between days or weeks where we eat well and then days or weeks where we eat less well.
“I think these kinds of experiments where animals have access only some of the time is a better model—I hope this paper starts to add to a more accurate idea of what happens when we eat unhealthily part of the time, not all of the time.”
The researchers said that while the study produced important results, more research was needed before the findings could be translated to humans.
“It is notoriously difficult to do this kind of work in humans, due to ethical concerns,” Professor Morris said. “Getting accurate data on food intake is challenging, but the studies that have been carried out already do point to deficits in executive function in humans eating unhealthily for short periods—and long-term impacts are likely to be greater.”
More information: Michael D. Kendig et al. Pattern of access to cafeteria-style diet determines fat mass and degree of spatial memory impairments in rats, Scientific Reports (2019). DOI: 10.1038/s41598-019-50113-3
In the quest to improve memory, humans exercise physically and train mentally, but one fact remains: The brain may simply be built to forget.
New research on mice shows that a specific type of neuron in the brain is fundamentally involved in helping the brain forget memories during that most seductive of sleep periods, rapid eye movement, or REM, sleep.
Researchers presented evidence in the journal Science on Thursday that a type of neuron in the brain’s hypothalamus — melanin-concentrating hormone-producing neurons, or MCH neurons for short — are especially active while mice are in REM sleep. Those same neurons aren’t nearly as active during non-REM sleep.
They also learned that inhibiting the MCH neurons during REM sleep improved the mice’s memory abilities, whereas activating MCH neurons impaired the mice’s memories.
Previous research has shown that sleep rebalances synapses in the brain, shrinking these spaces between neurons to prepare our brains for learning again the next day.
Other work has shown that sleep may play an important role in clearing toxins from the brain.
With this latest study, the team sheds light on a crucial function of sleep: helping the brain forget unimportant memories while we sleep.
Akihiro Yamanaka, Ph.D., a professor at Nagoya University’s Research Institute of Environmental Medicine in Japan and the paper’s corresponding author, tells Inverse why he and his team pursued this study: They wanted to reveal the neural mechanism that regulates sleep vs. wakefulness.
Melanin-concentrating hormone (MCH)-producing neurons are somewhat active during wakefulness, inactive during NREM sleep, and highly active during REM sleep. Researchers concluded that these neurons are crucial to our brain’s ability to forget unnecessary memories while we sleep.
“Melanin-concentrating hormone (MCH)-producing neurons are located in the hypothalamus which is thought to be a center of sleep/wakefulness regulation,” he says. “We focused on MCH neurons to reveal their physiological function.”
And indeed they seem to have done just that.
To investigate the MCH neurons, the team used genetically altered mice whose MCH neurons could be activated and inhibited by either shining a light on the brain or introducing a specific chemical into the brain. These two techniques both achieved the same effect: When the MCH neurons were turned off during REM sleep, the mice scored significantly better at multiple different memory tests. Conversely, when the MCH neurons were turned on during REM sleep, the mice scored significantly worse. Altering the neurons’ activity at other times did not show significant effects.
Under normal circumstances, the MCH neurons, which are only found in the brain’s hypothalamus, are slightly active during waking hours, inactive during non-REM sleep, and highly active during REM sleep. Therefore, the researchers conclude that these neurons play a central role in the brain forgetting unnecessary memories during REM sleep.
But why do we need to forget?
“Probably, to forget is important for the selection of important memories and not important memories,” says Yamanaka. “In addition to this, forgetting saves memory resource in the brain.”
And the difference between an important and unimportant memory, he says, likely comes down to our emotional connection to these memories.
REM sleep is the phase of sleep when MCH neurons activate in the brain and help us forget the unnecessary memories from that day.
“This is an important point to reveal,” he says. “Important memories are formed with emotional input, which comes from the center of emotion, the amygdala. MCH neurons give pressure to erase all memory, [but] important memories could survive this pressure.”
In other words, a memory that has formed a connection among different brain regions is more likely to survive this nightly housecleaning process.
Yamanaka explains that while the exact function of REM sleep is still not clear, his team’s data shows that part of its importance has to do with forgetting.
“Without this process, memory ability would be decreased, for example [we] cannot retrieval memories at appropriate situations,” he says.
He’s careful to point out that this research does not open the door to manipulating people’s memories. The type of genetic alterations done to the mice that enabled scientists to switch their MCH neurons on and off are not done in humans, and likely won’t be done on humans in the foreseeable future.
What this research does do is emphasize the importance of REM sleep. So while there may not be any kind of technological brain hack that helps you hold onto memories, there is a good old-fashioned one you can employ to ensure that your brain forgets unnecessary memories and frees up resources for new ones: a good night’s sleep.
Abstract: The neural mechanisms underlying memory regulation during sleep are not yet fully understood. We found that melanin concentrating hormone–producing neurons (MCH neurons) in the hypothalamus actively contribute to forgetting in rapid eye movement (REM) sleep. Hypothalamic MCH neurons densely innervated the dorsal hippocampus. Activation or inhibition of MCH neurons impaired or improved hippocampus-dependent memory, respectively. Activation of MCH nerve terminals in vitro reduced firing of hippocampal pyramidal neurons by increasing inhibitory inputs. Wake- and REM sleep– active MCH neurons were distinct populations that were randomly distributed in the hypothalamus. REM sleep state–dependent inhibition of MCH neurons impaired hippocampus-dependent memory without affecting sleep architecture or quality. REM sleep–active MCH neurons in the hypothalamus are thus involved in active forgetting in the hippocampus.
Tesla CEO Elon Musk has confirmed that the highly anticipated “Smart Summon” feature will be available for all Autopilot vehicles with Hardware 2.0 and above. The announcement comes just nine days after Musk said that the Summon software was “almost perfect”.
After Tesletter posted V10 release notes about “Smart” Summon on its official Twitter account, a follower asked Elon Musk whether owners with Hardware 2.5 would be able to use the feature, which lets a vehicle drive itself to its owner via the Tesla app. Musk replied that the newly released “Smart” Summon will work on all vehicles running at least Hardware 2.
At the moment, select Tesla owners under the company’s Early Access Program have access to Version 10 features, including Smart Summon.
When the owner taps Summon > Advanced Summon and then holds Come to Me, the vehicle begins driving toward the phone’s geographical location, as long as the phone is within 213 feet (about 65 meters) of the car. The target location can also be adjusted on the map.
Autopilot was first announced in 2014. Under the original sensor suite, dubbed “Hardware 1”, Model S and Model X vehicles were fitted with a camera at the top of the windshield, a radar in the lower grille, and ultrasonic sensors in the front and rear bumpers for 360-degree coverage around the car. Tesla did not release Hardware 2 for another two years.
Any Tesla vehicle manufactured after October 2016 will have the capability to run “Smart” Summon. Musk clarified that customers running older Hardware 2 or 2.5 who purchased Tesla’s Full Self-Driving suite would be able to upgrade to Hardware 3 at no additional cost.
Tesla continues to set the industry standard for vehicles equipped with an award-winning driver-assistance system. The announcement that owners running older hardware will also receive the new “Smart” Summon feature ensures their cars will not be left behind.
Updated: Canadian Solar has attained what it describes as a new milestone for PV cell conversion efficiency, taking five months to break its own record in the field.
The ‘Solar Module Super League’ (SMSL) member said this week it has achieved 22.8% conversion efficiency for its p-type multi-crystalline silicon ‘P5’ cell, a gain on the 22.28% mark it claimed to have reached in April.
Where April’s 22.28% conversion efficiency milestone was accredited by Fraunhofer ISE, the new 22.8% breakthrough has been tested and certified by the German Institut für Solarenergieforschung GmbH (ISFH).
According to Canadian Solar, the record-setting cells were built on 246.66-square-centimetre silicon wafers. The performance gains were helped along by metal-catalysed chemical etch (MCCE) black-silicon texturing, the PV firm explained.
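For context, a rough back-of-the-envelope calculation shows what the gain over April's record means per cell. This sketch assumes the standard test condition of 1,000 W/m² irradiance, which certification bodies typically use but which is not stated in the announcement:

```python
# Illustrative arithmetic only; assumes standard test conditions
# (1000 W/m^2 irradiance), which is an assumption, not a figure
# from Canadian Solar's announcement.
cell_area_cm2 = 246.66        # wafer area reported by Canadian Solar
irradiance_w_per_cm2 = 0.1    # 1000 W/m^2 expressed per cm^2

eff_old = 0.2228              # April record (Fraunhofer ISE)
eff_new = 0.228               # new record (ISFH)

power_old = cell_area_cm2 * irradiance_w_per_cm2 * eff_old
power_new = cell_area_cm2 * irradiance_w_per_cm2 * eff_new

print(f"absolute gain: {(eff_new - eff_old) * 100:.2f} percentage points")
print(f"per-cell output at test conditions: {power_old:.2f} W -> {power_new:.2f} W")
```

Under those assumptions the record amounts to a gain of roughly half a percentage point, or about an extra 0.13 W per cell.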
“[The P5 milestone] proves that our multi-crystalline silicon technology can achieve efficiencies very close to mono while still enjoying the cost advantage of multi,” Dr. Shawn Qu, Canadian Solar’s chair and CEO, said in a statement marking the solar cell breakthrough.
The multi-crystalline cells – featuring 157mm x 157mm wafers – incorporate selective emitters, multi-layer anti-reflection coating, “advanced” surface passivation and “optimised” grid design and metallisation, Canadian Solar explained.
Canadian Solar’s efficiency feats follow its move last month to upscale module assembly capacity by 1GW in response to strong demand, paving the way for nameplate capacity to hit 12.22GW by the end of this year.
For its part, ISFH has been the certifying party of choice for other firms producing solar milestones this year. In April, the institute endorsed claims by Imec and Jolywood of 23.2% efficiency for bifacial n-PERT PV cells, and in May it accredited Trina Solar’s 24.58% record for i-TOPCon bifacial cells.
However, the record results also raise the question of whether the ‘cast mono’ process developed by GCL-Poly, a major wafer supplier to Canadian Solar, can still be categorised as multicrystalline, not least for transparency and to limit market confusion.
GCL-Poly has claimed that the cast mono process produces wafers comparable to standard monocrystalline wafers.
Earlier this year, GCL System Integrated highlighted that its standard multicrystalline wafer-based PERC cells, using wafers from sister company GCL-Poly, had average conversion efficiencies of up to 21% in mass production. The average efficiency of GCL’s cast mono PERC cells, by contrast, had reached 21.87%.
Conversion efficiencies above 22% for cast mono PERC cells were claimed to have been achieved with a multi-busbar technique, though mass production was said to be three years away.
With additional reporting by Mark Osborne, founding senior news editor.
Nanoscale thermal emitters created at Rice University combine several known phenomena into a unique system that turns heat into light. The system is highly configurable to deliver light with specific properties and at the desired wavelength. Credit: Chloe Doiron/Rice University
What may be viewed as the world’s smallest incandescent lightbulb is shining in a Rice University engineering laboratory with the promise of advances in sensing, photonics and perhaps computing platforms beyond the limitations of silicon.
Gururaj Naik of Rice’s Brown School of Engineering and graduate student Chloe Doiron have assembled unconventional “selective thermal emitters”—collections of near-nanoscale materials that absorb heat and emit light.
Their research, reported in Advanced Materials, one-ups a recent technique developed by the lab that uses carbon nanotubes to channel heat from mid-infrared radiation to improve the efficiency of solar energy systems.
The new strategy combines several known phenomena into a unique configuration that also turns heat into light—but in this case, the system is highly configurable.
Basically, Naik said, the researchers made an incandescent light source by breaking down a one-element system—the glowing filament in a bulb—into two or more subunits. Mixing and matching the subunits could give the system a variety of capabilities.
“The previous paper was all about making solar cells more efficient,” said Naik, an assistant professor of electrical and computer engineering. “This time, the breakthrough is more in the science than the application. Basically, our goal was to build a nanoscale thermal light source with specific properties, like emitting at a certain wavelength, or emitting extremely bright or new thermal light states.
“Previously, people thought of a light source as just one element and tried to get the best out of it,” he said. “But we break the source into many tiny elements. We put sub-elements together in such a fashion that they interact with each other. One element may give brightness; the next element could be tuned to provide wavelength specificity. We share the burden among many small parts.”
An electron microscope image shows an array of thermal light emitters created by Rice University engineers. The emitters are able to deliver highly configurable thermal light. Credit: The Naik Lab/Rice University
“The idea is to rely upon collective behavior, not just a single element,” Naik said. “Breaking the filament into many pieces gives us more degrees of freedom to design the functionality.”
The system relies on non-Hermitian physics, a quantum mechanical way to describe “open” systems that dissipate energy—in this case, heat—rather than retain it. In their experiments, Naik and Doiron combined two kinds of near-nanoscale passive oscillators that become electromagnetically coupled when heated to about 700 degrees Celsius. When the metallic oscillator emitted thermal light, it triggered the coupled silicon disk to store the light and release it in the desired manner, Naik said.
The light-emitting resonator’s output, Doiron said, can be controlled by damping the lossy resonator or by controlling the level of coupling through a third element between the resonators. “Brightness and selectivity trade off,” she said. “Semiconductors give you high selectivity but low brightness, while metals give you very bright emission but low selectivity. Just by coupling these elements, we can get the best of both worlds.”
“The potential scientific impact is that we can do this not just with two elements, but many more,” Naik said. “The physics would not change.”
He noted that though commercial incandescent bulbs have given way to LEDs for their energy efficiency, incandescent lamps are still the only practical means to produce infrared light. “Infrared detection and sensing both rely on these sources,” Naik said. “What we’ve created is a new way to build light sources that are bright, directional and emit light in specific states and wavelengths, including infrared.”
The opportunities for sensing lie at the system’s “exceptional point,” he said.
“There’s an optical phase transition because of how we’ve coupled these two resonators,” Naik said. “Where this happens is called the exceptional point, because it’s exceptionally sensitive to any perturbation around it. That makes these devices suitable for sensors. There are sensors with microscale optics, but nothing has been shown in devices that employ nanophotonics.”
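A toy model helps make the exceptional point concrete. The sketch below is not the Rice device: it is a generic two-oscillator non-Hermitian Hamiltonian with made-up numbers, in which one resonator is lossy (playing the role of the metal) and one is lossless (the silicon disk). At a critical coupling strength the two eigenfrequencies coalesce, which is the exceptional point the article describes:

```python
# Minimal sketch of an exceptional point in a coupled lossy/lossless
# resonator pair. Frequencies and the loss rate are arbitrary
# illustrative numbers, not values from the Rice experiment.
import numpy as np

omega = 1.0   # shared resonance frequency (arbitrary units)
gamma = 0.2   # loss rate of the lossy (metallic) resonator

def eigenfrequencies(kappa):
    """Eigenvalues of the 2x2 non-Hermitian coupling Hamiltonian."""
    H = np.array([[omega - 1j * gamma, kappa],
                  [kappa, omega]])
    return np.linalg.eigvals(H)

# Below, at, and above the critical coupling kappa = gamma / 2:
for kappa in (0.05, gamma / 2, 0.2):
    ev = np.sort_complex(eigenfrequencies(kappa))
    print(f"kappa={kappa:.3f}: eigenvalues {ev}")

# At kappa = gamma/2 the two eigenvalues coalesce (the exceptional
# point); any small perturbation splits them sharply, which is what
# makes operation near this point attractive for sensing.
```

Analytically, the eigenvalues are ω − iγ/2 ± √(κ² − γ²/4), so the square root vanishes exactly at κ = γ/2 and the splitting near that point grows like a square root, much faster than the linear response of a conventional sensor.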
The opportunities may also be great for next-level classical computing. “The International Technology Roadmap for Semiconductors (ITRS) understands that semiconductor technology is reaching saturation and they’re thinking about what next-generation switches will replace silicon transistors,” Naik said. “ITRS has predicted that will be an optical switch, and that it will use the concept of parity-time symmetry, as we do here, because the switch has to be unidirectional. It sends light in the direction we want, and none comes back, like a diode for light instead of electricity.”
More information: Chloe F. Doiron et al, Non‐Hermitian Selective Thermal Emitters using Metal–Semiconductor Hybrid Resonators, Advanced Materials (2019). DOI: 10.1002/adma.201904154
Lisa-Marie Funk, co-first author, analysing protein crystals using a microscope prior to the visit to DESY Hamburg. Credit: Nora Eulig
Proteins are essential for every living cell and responsible for many fundamental processes. In particular, they are required as bio-catalysts in metabolism and for signaling inside the cell and between cells. Many diseases come about as a result of failures in this communication, and the origins of signaling in proteins have been a source of great scientific debate. Now, for the first time, a team of researchers at the University of Göttingen has actually observed the mobile protons that do this job in each and every living cell, thus providing new insights into the mechanisms. The results were published in Nature.
Researchers from the University of Göttingen led by Professors Kai Tittmann and Ricardo Mata found a way to grow high-quality protein crystals of a human protein. The DESY particle accelerator in Hamburg made it possible to observe protons (subatomic particles with a positive charge) moving around within the protein. This surprising “dance of the protons” showed how distant sections of the protein were able to communicate instantaneously with each other—like electricity moving down a wire.
In addition, Tittmann’s group obtained high-resolution data for several other proteins, showing in unprecedented detail the structure of a kind of hydrogen bond in which two heavier atoms effectively share a proton (known as “low-barrier hydrogen bonding”). This was the second surprise: the data proved that low-barrier hydrogen bonding does indeed exist in proteins, resolving a decades-long controversy, and that it in fact plays an essential role in the process.
“The proton movements we observed closely resemble the toy known as a Newton’s cradle, in which the energy is instantly transported along a chain of suspended metal balls. In proteins, these mobile protons can immediately connect other parts of the protein,” explained Tittmann, who is also a Max Planck Fellow at the Max Planck Institute for Biophysical Chemistry in Göttingen. The process was simulated with the help of quantum chemical calculations in Professor Mata’s laboratory. These calculations provided a new model for the communication mechanism of the protons. “We have known for quite some time that protons can move in a concerted fashion, like in water for example. Now it seems that proteins have evolved in such a way that they can actually use these protons for signaling.”
The researchers believe this breakthrough can lead to a better understanding of the chemistry of life, improve the understanding of disease mechanisms and lead to new medications. This advance should enable the development of switchable proteins that can be adapted to a multitude of potential applications in medicine, biotechnology and environmentally friendly chemistry.
More information: Shaobo Dai et al. Low-barrier hydrogen bonds in enzyme cooperativity, Nature (2019). DOI: 10.1038/s41586-019-1581-9