It’s a bug, not a feature, so we guess it will not last long.
The latest Tesla V3 Superchargers (250 kW units with CCS2 plugs) installed in Germany (and perhaps in other European markets) allow charging of any CCS2-compatible electric car, not only Teslas.
According to the EV channel and rental company Nextmove, V3 Superchargers have some kind of bug in their authorization sequence and will start charging any EV that connects.
Some sources say that the V2 Superchargers, retrofitted with the CCS2 plugs, can also charge other EVs.
Supercharging should work only with Tesla cars (Tesla has not yet entered a partnership with any other brand to make Supercharging available to other EVs), and owners of those cars are billed (aside from cars with free Supercharging).
Because of the bug, not only can other EVs use V3 Superchargers with CCS2 plugs, but they can do it for free! That’s an even better deal than Tesla Model 3 owners get, since they have to pay. We guess that the poor IT division will have a hot weekend trying to solve the issue as soon as possible.
Nextmove has demonstrated that the V3 Superchargers can be used by various models, including Porsche Taycan, which was able to take over 125 kW. Other models were usually able to take only around 50 kW, but remember it was just a brief test.
The list of tested models also includes the Volkswagen ID.3, Kia Niro EV (e-Niro), Opel Ampera-e (a retired European version of the Chevrolet Bolt EV), and Hyundai Kona Electric.
Well, if you don’t have a Tesla, but drive other cars, it seems that finally you can visit a Tesla Supercharging station and check out what it’s like to Supercharge.
It’s like a free trial, right? Who knows, maybe one day Tesla’s network will officially open to others, but we don’t think so. It’s too valuable an asset for the company, one that attracts customers specifically to Tesla EVs.
Elon Musk’s neural technology startup Neuralink showed off a demo about the tiny brain chip the company has been developing. The chip, which Musk described as “a Fitbit in your skull with tiny wires,” contains over 1000 wires that can read or write brain activity.
The neural chip will be inserted surgically by a robot, which is designed to plant the chip and wires while avoiding damage to the brain or blood vessels. Musk says that the process takes hours, and leaves only a small scar.
Neuralink’s engineers created the robot and its technology, then teamed up with Woke Studios industrial designers to productize the robot and design the outer enclosure. In a press release, Woke Studios said that it focused on a design that would work in a clinical setting, while also looking futuristic but not scary.
Take a look.
The robot is nearly eight feet tall and can move along five axes.
This size and range of motion are necessary to guide the needle for precise insertion.
The head holds and guides the needle that inserts the chip.
It also holds cameras and sensors focused on the brain.
Woke Studios based the design on other, less invasive types of medical equipment to make the device less intimidating.
Woke Studios says the robot has to be white for sterility, but the “pinch-points” where the device moves are picked out in different colors.
The body is the source of movement, and is curved rather than designed with harsh lines.
Finally, the base is the source of the machine’s processing power and also provides ventilation.
This Raspberry Pi project comes to us from a creator on Reddit known as Powtothemoons. It provides a simple way to visualize your heartbeat by synchronizing the rhythm with Philips Hue lights.
According to Powtothemoons, the process occurs fast enough to feel almost instantaneous, describing it as though the heartbeat actually triggers the light. If you’re looking for a way to monitor and observe your heartbeat, this is definitely an effective way to go about it.
Powtothemoons uses a Raspberry Pi 4 (2 GB) to control the operation, and a pulse sensor from pulsesensor.com to monitor the heartbeat. The assembly is connected to a Pimoroni ADC breakout board via a Pimoroni Breakout Garden HAT, which lets breakout boards plug into slots with no soldering. If you’re interested in more cool HATs, check out our list of best Raspberry Pi HATs.
The software side of the equation is mostly just a Python script that monitors input from the pulse sensor for a specific threshold. When it determines a heartbeat likely took place, the Philips Hue lights change color.
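The threshold logic described above can be sketched in a few lines of Python. To be clear, this is not Powtothemoons’ actual code: the sensor values here are simulated, and all names are placeholders. On real hardware you would poll the ADC breakout for readings and drive the lights (for example with the `phue` library).

```python
# Sketch of threshold-based beat detection (hypothetical names, not the
# project's real code). A heartbeat is counted each time the signal
# rises through the threshold; it must fall back below the threshold
# before another beat can register, so one pulse is not counted twice.

THRESHOLD = 550  # raw ADC value; would need tuning for a real sensor


def detect_beats(samples, threshold=THRESHOLD):
    """Return indices where the signal crosses the threshold upward."""
    beats = []
    above = False
    for i, value in enumerate(samples):
        if value >= threshold and not above:
            beats.append(i)  # rising edge: treat as one heartbeat
            above = True
        elif value < threshold:
            above = False
    return beats


# Simulated sensor trace with two pulses above the threshold.
trace = [500, 520, 600, 580, 500, 490, 610, 530, 480]
print(detect_beats(trace))  # -> [2, 6]
```

On each detected beat, the real script would then trigger the lamp, e.g. via `phue`’s `Bridge(...).set_light(...)`, which is what makes the light appear to pulse with the heartbeat.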
There is a GitHub repository available where Powtothemoons plans to provide more pictures and code as soon as possible. In the meantime, you can check out a demo of the project in action on Reddit.
Summary: Color emotion may be a universal phenomenon, a new study reveals. People from different parts of the world often associate the same color with the same emotions.
Source: Johannes Gutenberg Universitaet Mainz
People all over the world associate colors with emotions. In fact, people from different parts of the world often associate the same colors with the same emotions. This was the result of a detailed survey of 4,598 participants from 30 nations across six continents, carried out by an international research team. “No similar study of this scope has ever been carried out,” said Dr. Daniel Oberfeld-Twistel, member of the participating team at Johannes Gutenberg University Mainz (JGU). “It allowed us to obtain a comprehensive overview and establish that color-emotion associations are surprisingly similar around the world.”
In the current issue of Psychological Science, the scientists report that the participants were asked to fill out an online questionnaire, which involved assigning up to 20 emotions to twelve different color terms.
The participants were also asked to specify the intensity with which they associated the color term with the emotion. The researchers then calculated the national averages for the data and compared these with the worldwide average.
“This revealed a significant global consensus,” summarized Oberfeld-Twistel. “For example, throughout the world the color of red is the only color that is strongly associated with both a positive feeling — love — and a negative feeling — anger.”
Brown, on the other hand, triggers the fewest emotions globally.
However, the scientists also noted some national peculiarities. For example, the color of white is much more closely associated with sadness in China than it is in other countries, and the same applies to purple in Greece.
“This may be because in China white clothing is worn at funerals and the color dark purple is used in the Greek Orthodox Church during periods of mourning,” explained Oberfeld-Twistel. In addition to such cultural peculiarities, the climate may also play a role.
According to the findings from another of the team’s studies, yellow tends to be more closely associated with the emotion of joy in countries that see less sunshine, while the association is weaker in areas that have greater exposure to it.
According to Dr. Daniel Oberfeld-Twistel, it is currently difficult to say exactly what the causes for global similarities and differences are. “There is a range of possible influencing factors: language, culture, religion, climate, the history of human development, the human perceptual system.” Many fundamental questions about the mechanisms of color-emotion associations have yet to be clarified, he continued.
However, using an in-depth analysis that included a machine learning approach developed by Oberfeld-Twistel (a computer program that improves itself as the database grows), the scientists have already discovered that differences between individual nations are greater the farther apart the nations are geographically and/or the more the languages spoken in them differ.
CAN BLUE-LIGHT GLASSES HELP YOU SLEEP? SCIENTISTS SEPARATE FACT FROM FICTION
Blue-filtering lenses are marketed as a solution to so many problems — but do the claims hold up?
Tareq Yousef | Image: Aitor Diago / Getty Images
My doctoral research investigates visual processing, but when I look at the big picture, I realize that what I’m really studying are fundamental aspects of brain anatomy, connectivity, and communication.
One specific function of the visual system that I have studied during my degree is the blue-light detecting molecule, melanopsin. In humans, melanopsin is seemingly restricted to a group of neurons in the eye, which preferentially targets a structure in the brain called the suprachiasmatic nucleus — the body’s clock.
This is where the (true) idea that blue light affects our sleep-wake cycle, or circadian rhythm, originates. It is also why many corrective lens producers have started cashing in on blue-light filtering glasses. The most common claim that goes along with these lenses is that they will help restore our natural sleep-wake cycle.
As with any biological system, melanopsin’s contribution to vision is more complicated than it is made out to be.
For example, melanopsin — like other light-sensitive molecules in our eyes — can produce neural activity in response to wavelengths other than blue; blue is simply where it is most sensitive. So blue light does indeed affect our sleep-wake cycle, but so do other wavelengths of light, to a lesser extent.
But what is the real culprit of the effects of digital screen light on our sleep-wake cycle? Is it necessarily blue light alone or is the problem likely worsened by people commonly staying up late and using their devices?
The problem seems to be not only blue-light filtering lens sellers but the way in which we talk about findings from research.
As of yet, there is no clinical evidence supporting the benefits of blue-light filtering lenses. For now, this is another pseudoscience market taking advantage of its consumer base — anyone who uses computers.
Expanding neuroscience literacy should be a public health goal: understanding how the brain and its partner organs — like the eye — work.
For now, keeping our eyes off screens at night and taking frequent breaks from screens is what will contribute most to our eye health and sleep hygiene.
Last weekend, I wrote an article about the fact that the cost of solar power has dropped significantly in recent years. I started off by looking at the average price of solar PV panels alone, which were 12 times more expensive in 2010 and 459 times more expensive in 1977 than they are today. (I know — going back to 1977 was unnecessary. But it was fun.) After looking at solar PV panel prices, I got into the broader cost of a rooftop solar power system, which includes the cost of installation as well as many other “soft costs” related to going solar. While diving into that matter, it was striking to see that the average cost of a rooftop solar power system in the United States is $2.19/watt whereas Tesla is now offering rooftop solar power for $1.49/watt across the country. (Both prices are the price of an installed solar power system after taking into account the US federal tax credit for solar, which is 26% of the cost of the system.)
That’s a big price difference. Going deeper by exploring average solar pricing in different states based on an EnergySage report, I thought I’d find certain regions in which $1.49/watt was more typical. But I didn’t. The state with the lowest average price of solar was Arizona, which was at $1.85/watt after accounting for the 26% US tax credit. (Note that EnergySage presented pre-credit prices in that ranking, so it shows $2.50/watt instead of $1.85/watt for Arizona.) In California, where most US solar power is installed, the average price of a solar system according to EnergySage was $2.18/watt after the tax credit ($2.94/watt before the tax credit). When you put all these watts together, a difference of 69 cents per watt is a big deal. That’s $4,140 on a 6 kilowatt system, or $6,900 on a 10 kilowatt system.
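For a quick sanity check of those figures (prices are post-tax-credit dollars per watt, comparing Tesla’s price to the California average):

```python
# Per-watt gap between Tesla's advertised price and the California
# average, and what it adds up to at common rooftop system sizes.
tesla_price = 1.49       # $/W after the 26% federal tax credit
california_avg = 2.18    # $/W after the credit
gap = round(california_avg - tesla_price, 2)   # $0.69/W

for kilowatts in (6, 10):
    savings = round(gap * kilowatts * 1000)    # 1 kW = 1,000 W
    print(f"{kilowatts} kW system: ${savings:,}")
```

Which prints the $4,140 and $6,900 differences quoted above.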
There are some things that Tesla can clearly benefit from in order to bring down its solar costs. For example, Tesla is a pretty popular brand. (Okay, it’s a scorchingly hot brand.) That means Tesla hardly has to do anything to advertise its solar power systems. Additionally, Tesla is heavily online — people buy their cars online, on their phones even — and Elon Musk has one of the most popular accounts on Twitter. He can occasionally tweet out a reminder about the solar program and — bam — sales secured. The popular @Tesla account on Twitter can do the same. Historically, a lot of rooftop solar sales have been made by going door to door. Yes, door-to-door sales is still a thing, and it’s a big thing in the home solar power industry. That used to cost SolarCity (which Tesla acquired in late 2016) a lot of money. It costs #1 US home solar installer Sunrun a lot of money. It isn’t cheap having sales people walk around neighborhood after neighborhood trying to get people to make a purchase of something that costs thousands or tens of thousands of dollars, even if it will save them money in the mid to long term. Tesla can just send out some tweets, sell solar through its existing stores, and push solar indirectly through Tesla owner referral codes.
Despite knowing this and a few other ways Tesla might be able to cut costs, I felt like I didn’t have solid enough evidence to explain Tesla’s low solar power price to the world. So, I asked Tesla CEO Elon Musk about it. He responded:
“Solar panel cost is only ~50 cents/Watt. Mounting hardware, inverter and wiring is ~25 cents/Watt. Installation is ~50 cents/Watt, depending on system size.
“The other solar companies spend heavily on salespeople, advertising and complex financing instruments. We do not.”
So, that’s that — Tesla spends approximately 75 cents a watt on the hardware and approximately 50 cents a watt on installation, adding up to ~$1.25/watt. The remaining ~76 cents a watt of the pre-credit price (about $2.01/watt, which becomes $1.49/watt after the 26% federal tax credit) covers additional soft costs while presumably also providing a small profit.
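Musk’s figures reconcile with the advertised $1.49/watt once you remember that the advertised price is after the 26% federal tax credit, so soft costs and margin come out of the pre-credit price. A quick sketch:

```python
# Reconciling Musk's cost breakdown with the $1.49/W advertised price.
panels, mounting, install = 0.50, 0.25, 0.50   # $/W, per Musk's tweet
direct_costs = panels + mounting + install     # ~1.25 $/W

price_after_credit = 1.49                      # $/W, advertised
price_before_credit = price_after_credit / (1 - 0.26)
soft_costs_and_margin = price_before_credit - direct_costs

print(f"pre-credit price: ${price_before_credit:.2f}/W")        # ~$2.01/W
print(f"left for soft costs: ${soft_costs_and_margin:.2f}/W")   # ~$0.76/W
```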
There are other operational, overhead, maintenance/repair, and customer acquisition costs, but some of those costs may simply be covered by the rest of Tesla — accounting and legal matters, for example, can just be rolled into Tesla’s broader accounting and legal work/costs. Tesla is now making so much money on the automotive side of the business that it has certain advantages from economies of scale and from its overall financial might that must help to keep certain soft costs out of the rooftop solar power equation. A pure solar company doesn’t have those advantages.
As always, no matter what the averages and superficial price statements are, I do recommend that if any of you might go solar, you should get quotes from multiple installers in your area (they’re always free) and look closely at the details. Contracts are not all the same in terms of timeframe, maintenance, or financing. Of course, if you do go solar with Tesla and you don’t have someone closer to you to get a Tesla referral code from, you are certainly free to use mine — https://ts.la/zachary63404 — in order to get a $100 discount, especially if I convinced you at some point that it’s a good time to go solar.
The price of solar power has gotten so competitive that we’re seeing solar power very high in the rankings for new power capacity in the United States. Utility-scale solar power was by far the #1 source of new power capacity in June, and it was #3 in the first half of the year, just behind wind. However, those results just cover new capacity from large solar power plants. If you assume another 3,000 to 4,000 megawatts from smaller scale solar power plants — like rooftop solar power systems — solar power would have been #1 overall in the first half of 2020. We’ll get those numbers at some point. Once we do, I’ll look closely, because there’s a strong chance solar came out on top.
We still have a long way to go with regards to solar electricity generation in order to significantly cut emissions from fossil fuels. I reported earlier today that solar power accounted for just 3.4% of US electricity generation in the first half of 2020 — that’s 3.4%, not 34%. Rooftop solar power accounted for approximately a third of that, 1.1% of US electricity generation, according to the US Energy Information Administration. Electricity from solar power is growing, but there’s still a long way to go to get to even 10% of US electricity generation.
Eyewitness Imagination: How Our Minds Change Our Memories
Eyewitness memory can be inaccurate. But can it change itself?
Posted Sep 11, 2020
In our last Forensic View, we saw that typical eyewitness errors follow an interesting pattern. The most common eyewitness errors are those of suspect appearance, which is not surprising.
However, the second most common errors are those of the imagination. This is more surprising, and a lot scarier. The frequency of imaginative errors far exceeded that of any other error type (errors of suspect race or sex, of weapons used, or of important elements of the physical environment of a given crime). People simply made things up. And they had no idea they were doing it.
How is this possible? Well, it has directly to do with how memory works, rather than with how most people think memory works.
As we’ve discussed in previous Forensic Views, people tend to think of memory as immutable, as some kind of essentially accurate recording we make in the brain (accurate except for the bits that drop out completely, bits which we term “forgotten”). But as we’ve also discussed (Bartlett, 1932; Sharps, 2017), psychologists have known that this popular view is incorrect for most of a century. Bartlett showed that memories are not static, but that they reconfigure in three specific ways: they become shorter, they coalesce around the gist of what actually happened, and they change in the direction of personal belief. This is the really scary one: we tend to remember what we believe happened, not what actually happened.
This fact throws the importance of forensic cognitive psychology into sharp relief. What we remember is not only dependent on physical reality. It also depends on what we think happened in a given situation; and, as we will see, the resultant inaccuracies can be further exacerbated by the nature of the memory process itself.
In research in my laboratory, eyewitness performance is frequently awful. For example, under ideal viewing conditions, people were right less than half the time when they tried to identify a gun they saw earlier, and right less than a quarter of the time when they tried to identify a car (see Sharps, 2017, for review). So, given this genuinely horrible level of eyewitness performance, we began to explore possible sources of eyewitness error. We began to ask: how is eyewitness memory actually elicited in a criminal investigation?
In any investigation, witnesses are going to be asked for their stories repeatedly. Officers at initial contact will ask questions. Then will come sergeants, then detectives, then more detectives, then district attorney investigators, then an assistant district attorney, then a public defender investigator, then a public defender, then a prosecutor in court—a given witness may have to think about his or her memories of a criminal situation many times, not even counting the number of times he or she thought about these issues alone, or recounted them to spouse, parents, or friends.
But why would this matter?
We conducted an experiment (Sharps et al., 2012) in which we did the same type of repeated questioning to which the criminal justice system would subject any witness. We used an experimental scene we have used many times, in which a “suspect” appears to be aiming a firearm at a “victim.” We exposed 92 people, under ideal viewing conditions (generally much better than those of a typical eyewitness situation) to this scene, and we asked them what they saw. We did this repeatedly, as occurs in the real world; our first request for information was followed by three more requests, for the ostensible purpose of adding any additional details the witnesses might recall.
Now, recall that Bartlett showed that memories are not static recordings, but are reconfigured in the directions of gist, brevity, and personal belief. Also, please recall that in our research, errors of the imagination, rooted not in the real world but in the activity of the mind itself, were the second most common error type (see Sharps, 2017).
Our witnesses of course gave accounts of the criminal event they saw initially. But what’s important to realize here is that every time eyewitnesses give an account of a crime, their recounting of that crime is also an event, subject to the same laws of Bartlett reconfiguration.
This means that every time you tell somebody about something, your retelling, as an event, may influence future accounts. Your imagination today can influence your imagination tomorrow.
But does it?
In our experiment (Sharps et al., 2012), our first request for information resulted in a true/false ratio of 3.56; in other words, we got about three and a half true statements for every false one. Not great, obviously, but not too bad.
However, our second ratio was a lot worse, with only 1.39 true statements for every false one. The fourth question gave us similar results to the second; people were offering almost as many false statements as true ones. But on question 3, things were a whole lot worse: we got more false statements than true ones!
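To put those ratios in plainer terms, a ratio of r true statements per false one means a share of r / (r + 1) of all statements were true:

```python
# Converting the reported true/false ratios into the share of
# statements that were accurate: a ratio r means r / (r + 1) true.
def true_share(ratio):
    return ratio / (ratio + 1)

for request, ratio in [("first request", 3.56), ("second request", 1.39)]:
    print(f"{request}: {true_share(ratio):.0%} of statements true")
# The first request yields ~78% true statements; the second only ~58%,
# and a ratio below 1.0 (as on question 3) means mostly false ones.
```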
Repeating the story resulted in more and more errors, exactly as we’d predict by seeing repetition not as a passive act, but as an active contributor to inaccuracy.
Memories are not static recordings; well and good. But it turns out that memories can actually change themselves, simply by the repeated act of being remembered! And as a result, memory itself can alter the later memories recounted by a given eyewitness; memories on which a completely erroneous conviction of an innocent person may be based.
This is yet another demonstration of the great importance of an understanding of psychology, including forensic cognitive psychology, to the effectiveness and fairness of the criminal justice system.
Bartlett, F.C. (1932). Remembering: A Study in Experimental and Social Psychology. Cambridge: Cambridge University Press.
Sharps, M.J. (2017). Processing Under Pressure: Stress, Memory, and Decision-Making in Law Enforcement. Flushing, NY: Looseleaf Law.
Sharps, M.J., Herrera, M., Dunn, L., & Alcala, E. (2012). Recognition and Reconfiguration: Demand-Based Confabulation in Initial Eyewitness Memory. Journal of Investigative Psychology and Offender Profiling, 9, 149-160.
Elon Musk hosted a live demo from his company Neuralink on August 28 that showed a pig named Gertrude with a computer chip relaying live signals from her brain.
The chip is proof-of-concept for Neuralink’s stated aim of getting its technology into humans, to treat neurological conditions and, according to Musk, one day merge human consciousness with computers.
Neuroscientist Prof. Andrew Jackson told Business Insider that even if the technology falls far short of Neuralink’s mission statement, it could be hugely beneficial to the world of animal testing, which could in turn lead to medical breakthroughs.
In his quest to merge human consciousness with AI, Elon Musk could massively improve the world of animal testing.
Alongside his more well-known ventures Tesla and SpaceX, Elon Musk owns a company called Neuralink. Founded in 2017, Neuralink has been working on trying to make a computer chip that could be implanted in a person’s brain.
The near-term applications for putting these chips into people’s brains would be to study and treat neurological conditions such as Parkinson’s disease. They could theoretically even restore movement to paralyzed patients via robotic prostheses connected wirelessly to the brain chip.
But Musk isn’t satisfied talking about the near-term. During an August demonstration of Neuralink, he claimed the device would enable people to do things like “save and replay memories,” or summon their car telepathically.
The device was embedded in Gertrude’s skull, with wires fanning out into her brain with electrodes capable of detecting, recording, and theoretically even stimulating brain activity.
To sift through the solid science and Musk’s more bombastic claims, Business Insider spoke to neuroscientist Professor Andrew Jackson of the University of Newcastle, who has worked with placing neural interfaces in animals — i.e. brain chips like the ones Neuralink wants to make.
Jackson said he was impressed by the kit Neuralink showed off.
“One of the things that I think is important is they are pushing up the number of channels that you can record,” he told Business Insider.
Until recently, the best commercially available product anyone performing wireless tests on animals could get their hands on recorded about 100 channels — Neuralink’s device would up that number to 1,000.
Jackson said the fact the Neuralink device is contained within a small package that can go in the skull is also a big improvement. “That’s obviously very important as you go to humans, but I think also it could be very useful for people working with animals at the moment,” he said.
At the moment a lot of neural implants on animal test subjects involve wires poking out through the skin, and a completely wireless link covered by the skin would reduce the risk of infection, Jackson said.
“Even if the technology doesn’t do anything more than we’re able to do at the moment — in terms of number of channels or whatever — just from a welfare aspect for the animals, I think if you can do experiments with something that doesn’t involve wires coming through the skin, that’s going to improve the welfare of animals,” he said.
“To [Neuralink’s] credit, they clearly paid attention to the ethics of animal experimentation,” Jackson added. “I thought it was good that they at least acknowledged it was important for these animals to be well looked after.”
For the sake of any future humans who might get a Neuralink put in their brains, the welfare of test animals like Gertrude is vital, because the tests have to be conducted over several years to make sure the device doesn’t become harmful at any stage and keeps working over the long term.
“Anything you put in the body starts getting covered in scar tissue. If you’re trying to listen to these small signals from brain cells close up, as your device gets encapsulated by scar tissue, it becomes harder and harder to get those signals. This process can take anything from days with some kinds of electrodes to years,” Jackson said.
“The most common age of getting a spinal cord injury is 18 […] so you’re living with a condition for five decades. So really for these things to be really useful the lifetime needs to be measured in decades and not months,” he added.
During the demonstration, Musk said Gertrude had had the Neuralink in her brain for two months.
In action, the device appeared to relay information as Gertrude used her snout to snuffle about, and when put on a treadmill it was able to accurately predict the positioning of her legs as she trotted along.
To neuroscientists like Jackson, this was nothing new. “That is something that has been shown many times before now, both for walking, the movements of the legs, and also movements of the upper limb in monkeys,” he said.
Jackson was also more skeptical of Musk’s claims that the technology could one day be used to enhance human cognition, blending it with AI.
“Not to say that that won’t happen, but I think that the underlying neuroscience is much more shaky. We understand much less about how those processes work in the brain, and just because you can predict the position of the pig’s leg when it’s walking on a treadmill, that doesn’t then automatically mean you’ll be able to read thoughts,” he said.
Musk’s outlandish claims of merging human and computer consciousness notwithstanding, Jackson is keen to see Neuralink made available to animal researchers.
“I hope that they take that approach and try to make this technology as widely available within the animal research world, alongside what they’re trying to do, which is to get this approved for human use,” he said.
Improved animal testing would in turn mean improved research in humans. “I think this would be hugely beneficial to the field […] certainly this technology will have applications in neuroscience research and any new technology is good and will advance that research. And that research may well lead to improvements in the way we treat Parkinson’s disease [for example] even if the technology itself doesn’t ever form part of the treatment,” said Jackson.
The Raspberry Pi has sparked few true competitors, especially ones with similar specs and form factor. But this deal on Amazon for an Iconikal Rockchip SBC not only stands up to the Raspberry Pi 3, it does so at less than a third of the price. And you don’t just get the SBC: the offer also includes an LCD panel and a 16 GB microSD card.
This board is not as powerful as the current-gen Raspberry Pi 4. However, it does hold up as a worthy competitor to the Raspberry Pi 3 boards. The SBC uses an RK3328 64-bit CPU and comes with 1 GB of LPDDR3. According to CamelCamelCamel, the price for the package has yet to exceed $7.99.
The processor can reach speeds up to 1.5 GHz and has a total of four cores. It uses a Mali-450MP2 chip for graphics. Like the Raspberry Pi, it features a 40-pin GPIO header and has a similar form factor measuring in at 85mm x 56mm.
Unlike the Pi 3, there is no WiFi support, but you can easily correct this with an adapter or use the provided Ethernet port. There are two USB 2.0 ports and a USB 3.0 port. The board can support a microSD card up to 256 GB and uses a 5V/3A power adapter. Instead of connecting via USB, the power adapter has a dedicated jack.
This package is currently sold on Amazon under the brand name Iconikal. However, reviewers on Amazon have advised buyers that the package arrives with labeling from Recon Sentinel. The SBC depicted in the offer looks identical to the Rock64 board offered on the Pine64 website, sold here under a new name at a lower price.
The listing is already sold out, but you can check back on Amazon to see when they’re in stock again.
The tiny red dots are inhibitory nerve cells within the brain’s hippocampus. The optogenetic tool, shown in green, allows researchers to measure the strength of messages to other nerve cells, using flashes of light. Credit: Matt Udakis
In years to come, personal memories of the COVID-19 pandemic are likely to be etched in our minds with precision and clarity, distinct from other memories of 2020. The process which makes this possible has eluded scientists for many decades, but research led by the University of Bristol has made a breakthrough in understanding how memories can be so distinct and long-lasting without getting muddled up.
The study, published in Nature Communications, describes a newly discovered mechanism of learning in the brain shown to stabilize memories and reduce interference between them. Its findings also provide new insight into how humans form expectations and make accurate predictions about what could happen in future.
Memories are created when the connections between the nerve cells which send and receive signals from the brain are made stronger. This process has long been associated with changes to connections that excite neighboring nerve cells in the hippocampus, a region of the brain crucial for memory formation.
These excitatory connections must be balanced with inhibitory connections, which dampen nerve cell activity, for healthy brain function. The role of changes to inhibitory connection strength had not previously been considered and the researchers found that inhibitory connections between nerve cells, known as neurons, can similarly be strengthened.
Working together with computational neuroscientists at Imperial College London, the researchers showed how this allows the stabilization of memory representations.
Their findings uncover for the first time how two different types of inhibitory connections (from parvalbumin- and somatostatin-expressing neurons) can also vary and increase their strength, just like excitatory connections. Moreover, computational modeling demonstrated that this inhibitory learning enables the hippocampus to stabilize changes to excitatory connection strength, which prevents interfering information from disrupting memories.
First author Dr. Matt Udakis, Research Associate at the School of Physiology, Pharmacology and Neuroscience, said: “We were all really excited when we discovered these two types of inhibitory neurons could alter their connections and partake in learning.
“It provides an explanation for what we all know to be true; that memories do not disappear as soon as we encounter a new experience. These new findings will help us understand why that is.
“The computer modeling gave us important new insight into how inhibitory learning enables memories to be stable over time and not be susceptible to interference. That’s really important as it has previously been unclear how separate memories can remain precise and robust.”
The research was funded by the UKRI’s Biotechnology and Biological Sciences Research Council, which has awarded the teams further funding to develop this research and test their predictions from these findings by measuring the stability of memory representations.
Senior author Professor Jack Mellor, Professor in Neuroscience at the Centre for Synaptic Plasticity, said: “Memories form the basis of our expectations about future events and enable us to make more accurate predictions. What the brain is constantly doing is matching our expectations to reality, finding out where mismatches occur, and using this information to determine what we need to learn.
“We believe what we have discovered plays a crucial role in assessing how accurate our predictions are and therefore what is important new information. In the current climate, our ability to manage our expectations and make accurate predictions has never been more important.
“This is also a great example of how research at the interface of two different disciplines can deliver exciting science with truly new insights. Memory researchers within Bristol Neuroscience form one of the largest communities of memory-focussed research in the UK spanning a broad range of expertise and approaches. It was a great opportunity to work together and start to answer these big questions, which neuroscientists have been grappling with for decades and have wide-reaching implications.”
Reference: “Interneuron-specific plasticity at parvalbumin and somatostatin inhibitory synapses onto CA1 pyramidal neurons shapes hippocampal output” by Matt Udakis, Victor Pedrosa, Sophie E. L. Chamberlain, Claudia Clopath and Jack R. Mellor, 2 September 2020, Nature Communications. DOI: 10.1038/s41467-020-18074-8