Apart from physical health-related quality of life — which seemed to improve to a greater extent in males — the benefits of subthalamic nucleus deep brain stimulation (STN-DBS) on motor, cognitive, and mental function are similar in men and women with Parkinson’s disease, a study has found.
STN-DBS is a non-destructive surgical treatment for Parkinson’s disease that involves implanting a device to stimulate targeted regions of the brain with electrical impulses generated by a battery-operated neurostimulator.
Since its implementation, STN-DBS has become an accepted and effective therapeutic option to treat motor symptoms associated with Parkinson’s. It also is used to treat other complications caused by prolonged dopaminergic treatment in advanced forms of the disease.
“There have been discussions on the influence of sex on the effect of STN-DBS in PD. Several short-term studies have suggested that overall improvements in motor and non-motor symptoms following STN-DBS are similar between male and female PD patients, whereas the short-term results on sex differences in postoperative health-related quality of life (HRQoL) are inconsistent,” the researchers said.
In this study, a team of Korean scientists set out to investigate the influence of sex on short- and long-term effects of STN-DBS in Parkinson’s.
The prospective study analyzed the medical records of 48 men and 52 women with the disease who received STN-DBS between 2005 and 2013 at the Movement Disorder Center of Seoul National University Hospital (SNUH) and were followed for at least five years.
The patients’ motor, cognitive, and mental function, as well as health-related quality of life (HRQoL), were assessed at the start of treatment (baseline) and at one and five years of follow-up. HRQoL was assessed using the 36-Item Short Form Health Survey (SF-36), which contains physical and mental component subscores.
With the exception of the physical component of the SF-36, no differences were found between men and women in the effects of STN-DBS on any of the clinical parameters from baseline to follow-up.
STN-DBS led to significant improvements in the physical component of the SF-36 in individuals from both sexes from baseline to one year of follow-up. However, this positive effect was more pronounced among men than among women.
In addition, the researchers found that improvements in the physical component of the SF-36 from baseline to five years of follow-up were only statistically significant in men.
“In conclusion, we found that STN-DBS led to a similar degree of short-term and long-term effects on motor function, depressive and cognitive symptoms, and functional status between male and female PD patients,” the researchers said.
“Nevertheless, the physical HRQoL appears to improve to a greater extent in men over a long-term observation,” they concluded.
The researchers said further studies are warranted “to reveal the precise mechanism underlying the sex-associated differences in postoperative HRQoL, and to design an effective strategy to improve HRQoL in women undergoing STN-DBS.”
Joana holds a BSc in Biology and an MSc in Evolutionary and Developmental Biology from Universidade de Lisboa. She is currently finishing her PhD in Biomedicine and Clinical Research at Universidade de Lisboa. Her work has been focused on the impact of non-canonical Wnt signaling in the collective behavior of endothelial cells — cells that make up the lining of blood vessels — found in the umbilical cord of newborns.
Ana holds a PhD in Immunology from the University of Lisbon and worked as a postdoctoral researcher at Instituto de Medicina Molecular (iMM) in Lisbon, Portugal. She graduated with a BSc in Genetics from the University of Newcastle and received a Masters in Biomolecular Archaeology from the University of Manchester, England. After leaving the lab to pursue a career in Science Communication, she served as the Director of Science Communication at iMM.
You are an ever-changing, unique collection of perceptions, memories and expectations, of which you are constantly aware. The sense that you know your own mind is your consciousness at work. Yet despite thousands of years of inquiry by philosophers and scientists, the origins of consciousness have never been truly understood. How does a physical thing, the brain, generate a subjective experience, such as delight in the beauty of autumn?
We have some clues: disease, stroke, or injury can damage consciousness; we fleetingly surrender it during sleep or while anaesthetised. But the exact neurological or physiological structures that conspire to produce consciousness remain elusive. Now, a $20m effort will pit competing theories against each other. The project, by the Templeton World Charity Foundation, was announced last week at the Society for Neuroscience meeting in Illinois and reported in the journal Science.
Appropriately for a head-to-head contest, the first round will pit the front of the brain against the back. The Global Neuronal Workspace theory, led by Professor Stanislas Dehaene at the Collège de France, suggests a starring role for the prefrontal cortex, often called the brain’s “CEO” because of its importance in planning and problem solving. The theory contends that competing sensory stimuli jostle for neural attention, but only those that are prioritised cut through to trigger other brain processes, such as working memory and decision-making.
It is this “global broadcasting” across the brain that we experience as conscious thought. Or does the engine of consciousness purr at the back of the brain? The Integrated Information Theory, pioneered by Professor Giulio Tononi from the University of Wisconsin, argues that the more interconnected a system’s parts are, the more likely it is to be conscious. He predicts that intricately connected cells near the back of the brain should make a major contribution. A logical consequence of Prof Tononi’s mathematical idea, in which interconnectedness and information exchange are pivotal, is that non-biological matter could also be conscious.
This radical view that inanimate matter could have an inner life, known as panpsychism, is surprisingly well-regarded among philosophers. Our brains are, after all, composed of atoms, just like all the other stuff in the universe. For others, though, this renders IIT self-evidently absurd. The architects of the theories will play no part in collecting or analysing data; they will simply make predictions against which their ideas will stand or fall. Laboratories around the world will scan the brains of 500 volunteers while they carry out tasks related to consciousness, such as image recognition.
Electroencephalography, or EEG, which uses electrodes to measure electrical activity in the brain, will also feature. The collective effort will be managed by Lucia Melloni at the Max Planck Institute for Empirical Aesthetics in Frankfurt. The play-offs between as many as 11 different theories will help scientists “get closer to understanding consciousness [by] increasing confidence in the theory that survives adversarial collaboration”.
First results are expected in about three years. Human consciousness is a remarkable phenomenon: the means by which we comprehend the world, but seemingly impervious to scientific inquiry. A deeper understanding may allow us, for instance, to more accurately infer levels of consciousness in locked-in patients and non-human animals. Then again, some philosophers and scientists think human consciousness itself is an illusion, a charade in the cranium destined to keep us fooled.
Google today launched Chrome 78 for Windows, Mac, Linux, Android, and iOS. The release includes the CSS Properties and Values API, Native File System API, new Origin Trials, and dark mode improvements on Android and iOS. You can update to the latest version now using Chrome’s built-in updater or download it directly from google.com/chrome.
With over 1 billion users, Chrome is both a browser and a major platform that web developers must consider. In fact, with Chrome’s regular additions and changes, developers often have to stay on top of everything available — as well as what has been deprecated or removed. Chrome 78, for example, removes the XSS Auditor due to privacy concerns.
Windows, Mac, and Linux
Chrome 78 implements the CSS Properties and Values API to let developers register variables as full custom properties. That way, you can ensure they’re always a specific type, set a default value, or even animate them. For example, a smooth transition can be driven by a CSS custom property — something that is impossible to achieve without the new API, and with it is also type safe.
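A minimal sketch of what registration looks like with `CSS.registerProperty` (the property name and values are illustrative, and the call is guarded because the `CSS` object only exists in a browser):

```javascript
// Descriptor for a typed custom property (name and values are illustrative).
const descriptor = {
  name: '--theme-hue',   // the custom property to register
  syntax: '<number>',    // the registered type; enables smooth animation
  inherits: false,
  initialValue: '120',   // default used wherever the property is unset
};

// Guarded so the snippet is inert outside a browser environment.
if (typeof CSS !== 'undefined' && CSS.registerProperty) {
  // Once registered, `--theme-hue` interpolates as a number during a
  // transition instead of flipping discretely at the midpoint.
  CSS.registerProperty(descriptor);
}
```

The registered `syntax` is what makes the property type safe: the browser rejects values that don't parse as the declared type and falls back to `initialValue`.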
The new Native File System API lets developers build web apps that interact with files on the user’s local device. That means IDEs, photo and video editors, text editors, and so on. After a user grants access, the API allows web apps to read or save changes directly to files and folders by invoking the platform’s own open and save dialog boxes.
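A sketch of the read path. This uses `showOpenFilePicker`, the method name the API later standardized on; Chrome 78's origin-trial spelling differed. The function is defined but not invoked, since the picker requires a user gesture in a browser:

```javascript
// Hypothetical helper: let the user pick a file, then read it directly —
// no <input type="file"> round-trip. Browser-only; defined, not called.
async function readUserFile() {
  const [handle] = await window.showOpenFilePicker(); // user grants access here
  const file = await handle.getFile();                // a regular File object
  return file.text();                                 // read the contents as text
}
```

A handle obtained this way can also be written back to (after a separate write permission prompt), which is what makes in-browser IDEs and editors practical.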
Chrome 77, released in September, introduced Origin Trials that let you try new features and provide feedback on usability, practicality, and effectiveness to the web standards community. Chrome 78 adds a few more, including Signed Exchanges and the SMS Receiver API. The former allows a distributor to provide content signed by a publisher; the latter allows websites to access SMS messages that are delivered to the user’s phone.
Chrome 78 also includes a few features that are rolling out gradually. For example, Chrome users will soon be able to highlight and right-click a phone number link in Chrome and forward the call to their Android device. Some users might also see an option to share their clipboard content between their computers and Android devices. Clipboard sharing requires Chrome signed in on both devices with the same account, and Chrome Sync enabled. Google says that the text is end-to-end encrypted and the company can’t see the contents.
Chrome is also getting Google Drive integration. From Chrome’s address bar, you will be able to search for Google Drive files that you have access to. Again, if you don’t see any of these in Chrome 78, don’t fret. They are rolling out gradually.
Android and iOS
Chrome 78 for Android is rolling out slowly on Google Play. The changelog is just one bullet point: “Dark theme for Chrome menus, settings, and surfaces. Find it in Settings > Themes.”
Chrome 78 for iOS is rolling out on Apple’s App Store. It includes three improvements:
The ability to switch Chrome to dark mode if your device has been upgraded to iOS 13.
Bookmarks, History, Recent Tabs, and Reading List are now presented as cards on iOS 13.
The ability to add a new credit card directly in Chrome from the settings page.
Clearly Google focused on dark mode for this mobile release.
Security fixes
Chrome 78 implements 37 security fixes. The following were found by external researchers:
[$20000][1001503] High CVE-2019-13699: Use-after-free in media. Reported by Man Yue Mo of Semmle Security Research Team on 2019-09-06
[$15000][998431] High CVE-2019-13700: Buffer overrun in Blink. Reported by Man Yue Mo of Semmle Security Research Team on 2019-08-28
[$1000][998284] High CVE-2019-13701: URL spoof in navigation. Reported by David Erceg on 2019-08-27
[$5000][991125] Medium CVE-2019-13702: Privilege elevation in Installer. Reported by Phillip Langlois (phillip.langlois@nccgroup.com) and Edward Torkington (edward.torkington@nccgroup.com), NCC Group on 2019-08-06
[$3000][992838] Medium CVE-2019-13703: URL bar spoofing. Reported by Khalil Zhani on 2019-08-12
[$3000][1001283] Medium CVE-2019-13704: CSP bypass. Reported by Jun Kokatsu, Microsoft Browser Vulnerability Research on 2019-09-05
[$2000][989078] Medium CVE-2019-13705: Extension permission bypass. Reported by Luan Herrera (@lbherrera_) on 2019-07-30
[$2000][1001159] Medium CVE-2019-13706: Out-of-bounds read in PDFium. Reported by pdknsk on 2019-09-05
[$1000][859349] Medium CVE-2019-13707: File storage disclosure. Reported by Andrea Palazzo on 2018-07-01
[$1000][931894] Medium CVE-2019-13708: HTTP authentication spoof. Reported by Khalil Zhani on 2019-02-13
[$1000][1005218] Medium CVE-2019-13709: File download protection bypass. Reported by Zhong Zhaochen of andsecurity.cn on 2019-09-18
[$500][756825] Medium CVE-2019-13710: File download protection bypass. Reported by bernardo.mrod on 2017-08-18
[$500][986063] Medium CVE-2019-13711: Cross-context information leak. Reported by David Erceg on 2019-07-20
[$500][1004341] Medium CVE-2019-15903: Buffer overflow in expat. Reported by Sebastian Pipping on 2019-09-16
[$N/A][993288] Medium CVE-2019-13713: Cross-origin data leak. Reported by David Erceg on 2019-08-13
[$2000][982812] Low CVE-2019-13714: CSS injection. Reported by Jun Kokatsu, Microsoft Browser Vulnerability Research on 2019-07-10
[$500][760855] Low CVE-2019-13715: Address bar spoofing. Reported by xisigr of Tencent’s Xuanwu Lab on 2017-08-31
[$500][1005948] Low CVE-2019-13716: Service worker state error. Reported by Barron Hagerman on 2019-09-19
[$N/A][839239] Low CVE-2019-13717: Notification obscured. Reported by xisigr of Tencent’s Xuanwu Lab on 2018-05-03
[$N/A][866162] Low CVE-2019-13718: IDN spoof. Reported by Khalil Zhani on 2018-07-20
[$N/A][927150] Low CVE-2019-13719: Notification obscured. Reported by Khalil Zhani on 2019-01-31
[1016016] Various fixes from internal audits, fuzzing and other initiatives
Google thus spent at least $58,500 in bug bounties for this release. As always, the security fixes alone should be enough incentive for you to upgrade.
Developer features
Chrome 78 also has an updated V8 JavaScript engine. Version 7.8 includes script streaming on preload, faster object destructuring, lazy source positions, faster RegExp match failures, WebAssembly C/C++ API, and improved WebAssembly startup time. Check out the full changelog for more information.
Extend Byte-for-Byte Update Check to all Service Worker importScripts() Resources: Byte-for-byte checks are now available for service worker scripts imported by importScripts(). Currently, service workers update only when the service worker main script has changed. In addition to not conforming to the latest spec, this forces developers to build workarounds such as adding hashes to the imported scripts’ URLs.
Faster WebSockets: Chrome 78 improves the download speed of ArrayBuffer objects when used with WebSocket objects on desktop. Results depend on network speed and hardware, so your results may vary. Google has seen download speeds that are 4.1 times faster on Windows, 7.8 times faster on macOS, and 7.5 times faster on Linux.
More restrictive hasEnrolledInstrument() for Autofill Instruments: Improves the authorization of transactions by requiring unexpired cards and a billing address. This improves the quality of autofill data and increases the chances that PaymentRequest.hasEnrolledInstrument() returns true. This improves the user experience on transactions that use autofill data.
PaymentResponse.prototype.retry(): In cases where there is something wrong with the payment response’s data (for example, the shipping address is a PO box), the retry() method of a PaymentResponse instance now allows you to ask a user to retry a payment.
Percentage Opacity: Adds support for percentage values to the opacity properties: opacity, stop-opacity, fill-opacity, stroke-opacity, and shape-image-threshold. For example, opacity: 50% is equivalent to opacity: 0.5. This brings consistency and spec compliance. The rgba() function already accepts percentage alpha values, for example rgba(0, 255, 0, 50%).
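In stylesheet terms, the new and old forms are interchangeable (class names here are illustrative):

```css
/* Percentage and number forms now parse to the same value */
.overlay   { opacity: 50%; }  /* equivalent to the line below */
.overlay-b { opacity: 0.5; }

/* rgba() already accepted a percentage alpha channel */
.tint { background: rgba(0, 255, 0, 50%); }
```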
Redact Address in PaymentRequest.onshippingaddresschange Event: Removes fine-grained information from the shipping address before exposing it to a merchant website in the ShippingAddressChange event. PaymentRequest.onshippingaddresschange is used to communicate the shipping address a user has selected to the merchant so they can make adjustments to the payment amounts such as shipping cost and tax. At this point, the user has not fully committed to the transaction, so the principle should be to return as little information as possible to the merchant. The redaction removes recipient, organization, addressLine and phoneNumber from the shipping address because these are not typically needed for shipping cost and tax computation.
Seeking: Adds a media session action handler for the seekto action. An action handler is an event tied specifically to a common media function such as pause or play. The seekto action handler is called when the site should move the playback time to a specific time.
User Timing L3: Extends the existing User Timing API to enable two new use cases. Developers can pass custom timestamps to performance.measure() and performance.mark(), so as to conduct measurement across arbitrary timestamps. Developers can report arbitrary metadata with performance.mark() and performance.measure(), which provides rich data to analytics via a standardized API.
Tesla’s Full Self-Driving initiatives recently received a stern dismissal from Zoox co-founder and CTO Jesse Levinson, who stated during a recent conference that the Elon Musk-led electric car maker has “no chance” to fully develop autonomous driving technology in 2020. The comments come amidst Tesla’s efforts to roll out functions of its Full Self-Driving suite, which Elon Musk expects will be “feature complete” in the near future.
Zoox is a self-driving car startup that is aimed at developing autonomous vehicles that are specifically designed for ride-hailing. This makes the company notably different compared to Tesla, which is planning a ride-hailing service while producing electric cars for individual ownership. Zoox’s approach to achieve full self-driving is also different from the Silicon Valley-based carmaker, with the company using a unique LiDAR setup that involves placing four individual units on each corner of its full self-driving vehicle.
While speaking at Business Insider‘s IGNITION: Transportation event in San Francisco Tuesday, Levinson stated that the technology needed to truly achieve autonomous driving simply does not exist today. The components will eventually be ready, according to the CTO, but the necessary parts for a full self-driving setup are still being developed. Thus, when asked if he believes Tesla will have a chance to achieve Elon Musk’s goal of attaining autonomous driving in 2020, Levinson had a simple answer: “No.”
Explaining his response, Levinson stated that Tesla’s electric cars don’t have enough sensors or computers to achieve full self-driving. While the Zoox co-founder maintained that he believes Tesla’s vehicles are “great” and that the company’s Autopilot system is second to none on the freeway, he nevertheless thinks that autonomous driving is still far away for the electric car maker.
“They don’t have enough sensors or computers to do that given any remotely known technology that exists that humans have ever created. And by the way they’re great cars, the Tesla Autopilot system on the freeway is I think the best out there … I think if Musk focused on that aspect it would be better received,” he said.
In a way, it appears that Levinson’s statements may be coming from the fact that he and Tesla CEO Elon Musk are using two very different approaches for autonomous driving, as well as a lack of updates regarding the Silicon Valley-based carmaker’s autonomous driving initiatives. While Zoox has innovated by using conventional sensors for its vehicles, for example, Tesla has gone ahead and pursued FSD with just a custom computer, a neural network, and a suite of cameras, radar, and ultrasonic sensors.
Apart from this, Elon Musk has also been very dismissive of LiDAR, stating that the use of the component is a “fool’s errand” and that any company using the technology for autonomous driving is “doomed.” Tesla’s Hardware 3 computer, unveiled earlier this year and now being retrofitted to the first batches of vehicles, was also created and designed to have enough computing power to facilitate the implementation of fully-autonomous driving systems.
Next-generation solar cells that mimic photosynthesis with biological material may give new meaning to the term ‘green technology.’ Adding the protein bacteriorhodopsin (bR) to perovskite solar cells boosted the efficiency of the devices in a series of laboratory tests, according to an international team of researchers.
“These findings open the door for the development of a cheaper, more environmentally friendly bioperovskite solar cell technology,” said Shashank Priya, associate vice president for research and professor of materials science at Penn State. “In the future, we may essentially replace some expensive chemicals inside solar cells with relatively cheaper natural materials.”
Perovskite solar cells, named for their unique crystal structures that excel at absorbing visible light, are an area of intense research because they offer a more efficient and less expensive alternative to traditional silicon-based solar technology.
The most efficient perovskite solar cells can convert 22 to 23 percent of sunlight to electricity. The researchers found that adding the bR protein to perovskite solar cells improved the devices’ efficiency from 14.5 to 17 percent. They reported their findings in the American Chemical Society journal ACS Applied Materials and Interfaces.
The research represents the first time scientists have shown that biological materials added to perovskite solar cells can provide a high efficiency. Future research could result in even more efficient bioperovskite materials, the researchers said.
“Previous studies have achieved 8 or 9 percent efficiency by mixing certain proteins inside solar cell structures,” said Priya, a co-lead author of the study. “But nothing has come close to 17 percent. These findings are very significant.”
Commercial solar arrays consist of hundreds or thousands of individual solar cells, so even small improvements in efficiency can lead to real savings, according to the researchers.
Mimicking nature
Drawing on nature, the researchers sought to further improve the performance of perovskite solar cells through Förster Resonance Energy Transfer (FRET), a mechanism for energy transfer between a pair of photosensitive molecules.
“The FRET mechanism has been around for a long time,” said Renugopalakrishnan Venkatesan, professor at Northeastern University and Boston Children’s Hospital, Harvard University, and co-lead author on the study. “It seems to be the basis of photosynthesis and can be found in technologies like the wireless transfer of energy, and even in the animal world as a mechanism for communication. We are using this mechanism to try to create a world of bio-inspired systems that have the potential to surpass either inorganic or organic molecules.”
The bR proteins and perovskite materials have similar electrical properties, or band gaps. By aligning these gaps, the scientists hypothesized they could achieve a better performance in perovskite solar cells through the FRET mechanism.
“Solar cells work by absorbing light energy, or photons, and creating electron-hole pairs,” said Subhabrata Das, who participated in the research while a doctoral student at Columbia University. “By sending the electrons and holes in opposite directions, solar cells generate an electrical current that’s turned into electricity.”
However, a certain percent of electron-hole pairs recombine, reducing the amount of current produced. Mixing the bR protein into perovskite solar cells helped electron-hole pairs better move through the devices, reducing recombination losses and boosting efficiency, the scientists said.
The findings could potentially have larger consequences, leading to the design of other hybrid devices in which artificial and biological materials work together, according to the researchers.
Story Source:
Materials provided by Penn State. Note: Content may be edited for style and length.
Neil Vorano takes a trip with a group of Tesla drivers from Markham Ont. to Kincardine, Ont.
NEIL VORANO
When’s the last time you heard the phrase, “I’m not a car guy, but I love this car”? If you have recently, chances are you were talking with a Tesla owner. People who plunked down money for these innovative electric cars are absolute fanatics about them, giving Tesla a genuine cult-like status.
We Are ‘Perilously Close’ to Creating Sentient Mini-Brains in a Dish, Experts Warn
PETER DOCKRILL
22 OCT 2019
The scientific community is in danger of overstepping (or may have already breached) its ethical responsibilities in a rush to study and understand the mysteries of the brain via experimentation with artificially grown substitutes, researchers warn.
Mini-brains, also known as organoids, have in recent years become a hugely important resource in neuroscience and related fields.
But while these lab-grown analogues derived from stem cells aren’t technically considered human or animal organs, they’re becoming functionally close enough to warrant serious ethical concerns – if not an outright ban on their use, according to some neuroscientists.
In a presentation this week at the world’s largest gathering of neuroscientists, a team led by researchers from the Green Neuroscience Laboratory in San Diego made the case for why there is an “urgent need” for scientists to develop a framework of criteria that stipulates what ‘sentience’ is, so that future research using mini-brains and stem cell cultures can be bound by a developed set of ethical rules.
Mini brains at 10 months. (Muotri Lab/UCTV)
“The compositional and causal features in these cultures are – by design – often very similar to naturally occurring neural substrates,” the team explains in their abstract.
“Recent developments in organoid research also entail that the anatomical substrates are now approaching local network organisation and larger structures found in sentient animals.”
There’s a lot of evidence to support this. In recent years, scientists have promoted mini-brains as an economical and practical alternative to animal testing, and advancements in nurturing stem cells are helping scientists figure out how to mimic the complex neural subtypes of human brain tissue.
In March, scientists grew a mini-brain – said to be roughly analogous in complexity to a human foetal brain at 12 to 13 weeks – and, in the context of their model experiment, it spontaneously connected itself to a nearby spinal cord and muscle tissue.
A few months later, in a separate experiment, researchers detected electrical activity exhibited by organoids that looked startlingly similar to human brain waves.
While the scientific teams behind these incredible accomplishments are usually quick to observe that the organoids we’re capable of developing today are far removed from showing the neural sophistication of human and animal brains, the computational models of Elan Ohayon and his team at the Green Neuroscience Laboratory suggest we’re getting awfully close to growing sentient brains in a dish.
“Current organoid research is perilously close to crossing this ethical Rubicon and may have already done so,” the researchers explain.
“Despite the field’s perception that the complexity and diversity of cellular elements in vivo remains unmatched by today’s organoids, current cultures are already isomorphic to sentient brain structure and activity in critical domains and so may be capable of supporting sentient activity and behaviour.”
The Green Neuroscience Laboratory is run by Elan Ohayon and Ann Lam, two neuroscientists who have outlined a “Roadmap to a New Neuroscience”: a set of core ethical principles for their research, designed to exclude “toxic methodologies”, animal experimentation, and methods that otherwise infringe an individual’s rights, privacy, and autonomy.
From their viewpoint, the state of sophistication in current mini-brain research means we should be affording the same kinds of protections to primitive organoids that might be just complex enough to have thoughts and sensations.
“If there’s even a possibility of the organoid being sentient, we could be crossing that line,” Ohayon told The Guardian.
“We don’t want people doing research where there is potential for something to suffer.”
The Green team aren’t the only scientists with such qualms. In a study published this month, neuroscientists from the University of Pennsylvania argued why the field needs guidelines that don’t currently exist – especially in the context of experiments where lab-grown organoids are transplanted into animal host bodies.
“The field is developing quickly, and as we continue down this path, researchers need to contribute to the creation of ethical guidelines grounded in scientific principles that define how to approach their use before and after transplantation in animals,” says neurosurgeon Isaac Chen.
“While today’s brain organoids and brain organoid hosts do not come close to reaching any level of self-awareness, there is wisdom in understanding the relevant ethical considerations in order to avoid potential pitfalls that may arise as this technology advances.”
The research was presented at Neuroscience 2019, the annual meeting of the Society for Neuroscience, held in Chicago this week.
A Google quantum computer has far outpaced ordinary computing technology, an achievement called quantum supremacy that’s an important milestone for a revolutionary way of processing data. Google disclosed the results in the journal Nature on Wednesday. The achievement came after more than a decade of work at Google, including the use of its own quantum computing chip, called Sycamore.
“Our machine performed the target computation in 200 seconds, and from measurements in our experiment we determined that it would take the world’s fastest supercomputer 10,000 years to produce a similar output,” Google researchers said in a blog post about the work.
The news, which leaked into the limelight in September with a premature paper publication, offers evidence that quantum computers could break out of research labs and head toward mainstream computing. They could perform important work like creating new materials, designing new drugs at the molecular level, optimizing financial investments and speeding up delivery of packages. And the quantum computing achievement comes as progress with classical computers, as measured by the speed of general-purpose processors and charted by Moore’s Law, has sputtered.
Google’s Sputnik moment
Google got to pick its speed test, but Hartmut Neven, one of the researchers, dismissed criticisms that the result is only a narrow victory.
“Sputnik didn’t do much either. It circled the Earth. Yet it was the start of the space age,” Neven said at a press conference. He spoke at Google’s quantum computing lab in Santa Barbara, California, which is on the site of an actual Space Race milestone — the development of the Apollo missions’ lunar rover.
But it’s not the beginning of the end for classical computers, at least in the view of today’s quantum computing experts. Quantum computers are finicky, exotic and have to run in an extremely controlled environment, and they’re not likely to replace most of what we do today on classical computers.
Instead, quantum computers will function as accelerators for classical machines, useful enough to be essential. “It will be a must-have resource at some point,” Neven said.
A vast industry is devoted to improving classical computers, but a small number of expensive labs at companies such as Google, Intel, Microsoft, Honeywell, Rigetti Computing and IBM are pursuing general-purpose quantum computers, too. They’re finicky devices, running in an environment chilled to just a hair’s breadth above absolute zero to minimize the likelihood they’ll be perturbed. Don’t expect to find a quantum computer on your desk.
Google’s speed test has applications to computing work like artificial intelligence, materials science and random number generation, the paper said.
Google’s first customers — the US Department of Energy and automakers Daimler and Volkswagen — will be able to use the machine in 2020, Google said. As with IBM’s quantum computing effort, it’ll be available as a cloud computing service over the internet.
However, physicist John Preskill, who came up with the term “quantum supremacy” in 2012, threw some cold water on that idea. Google’s chosen test is good for showing quantum computing speed but “not otherwise a problem of much practical interest,” Preskill said in October after the paper’s premature release.
Quantum vs. classical computers
Nearly every digital device so far, from ENIAC in 1945 to Apple’s iPhone 11 in 2019, is a classical computer. Their electronics rely on logic circuits to do things like add two numbers and on memory cells to store the results.
(Photo: Google’s quantum computer looks nothing like a conventional machine. When running, all this complexity is hidden away and refrigerated to near absolute zero. Credit: Google)
Quantum computers are entirely different, reliant instead on the mind-bending rules of physics that govern ultrasmall objects like atoms.
Where classical computers store and process data as individual bits, each a 1 or a 0, quantum computers use a different foundation, called a qubit. Each qubit can store a combination of different states of 1 and 0 at the same time through a phenomenon called superposition. Told you it was weird.
Not only that, but multiple qubits can be ganged together through another quantum phenomenon called entanglement. That lets a quantum computer explore a vast number of possible solutions to a problem at the same time.
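To make superposition and entanglement a bit more concrete, here is a minimal, hand-rolled state-vector sketch in Python — a toy for intuition only, not the software Google runs on its hardware. Amplitudes are plain numbers, a Hadamard gate creates superposition, and a CNOT gate entangles two qubits into a Bell state:

```python
import math

# Toy state-vector sketch: a qubit is a pair of amplitudes; n qubits
# need 2**n amplitudes (one per classical bit-string).

def hadamard(amplitudes):
    """Put a single qubit into an equal superposition of 0 and 1."""
    a0, a1 = amplitudes
    s = 1 / math.sqrt(2)
    return [s * (a0 + a1), s * (a0 - a1)]

# Start in state |0> and apply a Hadamard: the qubit now holds a
# combination of 0 and 1 at the same time.
qubit = hadamard([1.0, 0.0])

def cnot(two_qubit_amplitudes):
    """Entangle: flip the second qubit wherever the first is 1."""
    a00, a01, a10, a11 = two_qubit_amplitudes
    return [a00, a01, a11, a10]

# Hadamard on the first qubit of |00>, then CNOT, yields the Bell
# state (|00> + |11>)/sqrt(2): measuring one qubit fixes the other.
bell = cnot([1 / math.sqrt(2), 0.0, 1 / math.sqrt(2), 0.0])
```

The catch, and the reason quantum hardware exists at all, is that this classical bookkeeping needs one amplitude per bit-string — fine for two qubits, hopeless for 53.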
Exponential speedups
In principle, a quantum computer’s performance grows exponentially: add one more qubit, and you’ve doubled the number of solutions you can examine in one fell swoop. For that reason, quantum computing engineers are working to increase the number of qubits in their machines.
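The doubling is easy to see in numbers. A minimal sketch, assuming nothing beyond the 2^n rule described above:

```python
# Why engineers chase qubit counts: each extra qubit doubles the size
# of the state a quantum register can hold in superposition.
def state_space(qubits):
    """Number of classical bit-strings a register of `qubits` spans."""
    return 2 ** qubits

# 53 working qubits (Google's machine) already span about 9e15 states,
# too many amplitudes for a classical computer to track comfortably.
print(state_space(53))  # 9007199254740992
print(state_space(54))  # one more qubit, twice the states
```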
“We expect that their computational power will continue to grow at a double-exponential rate,” the Google researchers said in their paper. That’s even faster than the single exponential improvement charted for classical computer chips by Moore’s Law.
Google’s machine had 54 qubits, though one wasn’t working right, so only 53 were available. That happens to match the number in IBM’s most powerful quantum computer.
But qubit count isn’t everything. Unavoidable instabilities cause qubits to lose their data. To counteract that, researchers are also working on error correction techniques that let a calculation sidestep failing qubits.
IBM challenges Google’s quantum results
IBM is a major quantum computing fan, but it questioned Google’s prematurely released results in a blog post Monday.
“We argue that an ideal simulation of the same task can be performed on a classical system in 2.5 days and with far greater fidelity,” IBM researchers wrote. They suggested different algorithms and a different classical computer design in a preprint paper of their own.
Google said it welcomes improvements to quantum computer simulation techniques but said its overall result is “prohibitively hard for even the world’s fastest supercomputer, with more double exponential growth to come. We’ve already peeled away from classical computers, onto a totally different trajectory.”
And you can try for yourself if you like. Google released its quantum computer’s raw output to encourage others to see if they can do better at simulating a quantum computer. “We expect that lower simulation costs than reported here will eventually be achieved, but we also expect that they will be consistently outpaced by hardware improvements on larger quantum processors,” the Google researchers said.
Intel didn’t offer an opinion on Google’s results, but did say quantum supremacy is “a strategic benchmark.”
“We are committed to moving quantum from the lab to commercialization,” said Jim Clarke, Intel Labs’ director of quantum hardware, in a statement.
Cracking your encrypted communications? Not yet
One quantum computing ability, established mathematically through an approach called Shor’s algorithm, is cracking some of today’s encryption technology.
However, that will require vastly larger quantum computers and new technology breakthroughs to deal with error correction.
“Realizing the full promise of quantum computing (using Shor’s algorithm for factoring, for example) still requires technical leaps,” the researchers said in their paper.
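To see why factoring is the crux, here is a toy classical trial-division factoriser — illustrative only. Real RSA moduli are thousands of bits long, far beyond this brute-force approach, and that gap is what Shor’s algorithm would close on a large enough quantum machine:

```python
# RSA's security rests on the classical cost of splitting n = p*q back
# into its prime factors. Trial division is the brute-force baseline.
def smallest_factor(n):
    """Return the smallest prime factor of n (classical trial division)."""
    d = 2
    while d * d <= n:
        if n % d == 0:
            return d
        d += 1
    return n  # n itself is prime

# Fine for toy numbers; infeasible for the 2048-bit moduli used in
# practice, whose smallest factor is itself over 300 digits long.
print(smallest_factor(15))    # 3  (so 15 = 3 * 5)
print(smallest_factor(3233))  # 53 (a textbook toy modulus: 53 * 61)
```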
And at the same time, the US government and others are working on “post-quantum” cryptography methods to withstand quantum computing cracking abilities.
So for now at least, quantum computing, while radically different, isn’t blowing up the tech industry.
First published Oct. 23 at 2:15 a.m. PT. Updates at 3:09 a.m., 7:41 a.m., 10:13 a.m. and 11:09 a.m. PT: Adds more detail, comment from Google CEO and comments from a Google press event.
Try to remember that last dinner you went out for. Perhaps you can remember the taste of that delicious pasta, the sounds of the jazz pianist in the corner, or that boisterous laugh from the portly gentleman three tables over. What you probably can’t remember is putting any effort into remembering any of these little details.
Somehow, your brain has rapidly processed the experience and turned it into a robust, long-term memory without any serious effort from yourself. And, as you reflect on that meal today, your brain has generated a high-definition movie of the meal from memory, for your mental viewing pleasure, in a matter of seconds.
Undoubtedly, our ability to create and retrieve long-term memories is a fundamental part of the human experience – but we still have lots to learn about the process. For instance, we lack a clear understanding of how different brain regions interact in order to form and retrieve memories. But our recent study sheds new light on this phenomenon by showing how neural activity in two distinct brain regions interact during memory retrieval.
The hippocampus, a structure located deep within the brain, has long been seen as a hub for memory. The hippocampus helps “glue” parts of the memory together (the “where” with the “when”) by ensuring that neurons fire together. This is often referred to as “neural synchronisation”. When the neurons that code for the “where” synchronise with the neurons that code for the “when”, these details become associated through a phenomenon known as “Hebbian learning”.
But the hippocampus is simply too small to store every little detail of a memory. This has led researchers to theorise that the hippocampus calls upon the neocortex – a region which processes complex sensory details such as sound and sight – to help fill in the details of a memory.
The neocortex does this by doing the exact opposite of what the hippocampus does – it ensures that neurons do not fire together. This is often referred to as “neural desynchronisation”. Imagine asking an audience of 100 people for their names. If they synchronise their response (that is, they all scream out at the same time), you’re probably not going to understand anything. But if they desynchronise their response (that is, they take turns speaking their names), you’re probably going to gather a lot more information from them. The same is true for neocortical neurons – if they synchronise, they struggle to get their message across, but if they desynchronise, the information comes across easily.
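The audience analogy can be sketched in a few lines of Python. This is purely illustrative — a toy of our own, not a model from the study — but it shows why turn-taking carries more information: a listener can only tell sources apart when their time slots differ.

```python
# Toy sketch of the audience analogy: each "neuron" fires in a time
# slot; a downstream listener can only separate messages that arrive
# in different slots.
def distinguishable_sources(firing_times):
    """Count how many separate messages a listener can pick out."""
    return len(set(firing_times.values()))

# Synchronised: every neuron fires in the same slot -- one jumbled voice.
synchronised = {"name_1": 0, "name_2": 0, "name_3": 0}

# Desynchronised: the neurons take turns -- each message arrives intact.
desynchronised = {"name_1": 0, "name_2": 1, "name_3": 2}

print(distinguishable_sources(synchronised))    # 1
print(distinguishable_sources(desynchronised))  # 3
```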
Our research found that the hippocampus and neocortex do in fact work together when recalling a memory. The hippocampus synchronises its activity to glue parts of the memory together and, later, to recall them, while the neocortex desynchronises its activity to process the details of the event and, later, of the memory.
Of cats and bicycles
We tested 12 epilepsy patients between 24 and 53 years of age. All had electrodes placed directly within the brain tissue of their hippocampus and neocortex as part of the treatment for their epilepsy. During the experiment, patients learned associations between different stimuli (such as words, sounds and videos), and later recalled these associations. For example, a patient may be shown the word “cat” followed by a video of a bicycle being ridden down a street.
The patient would then try to create a vivid link between the two (perhaps the cat riding the bike) to help them remember the association. Later, they would be presented with one of the items and asked to recall the other. We then examined how the hippocampus interacted with the neocortex while the patients were learning and recalling these associations.
During learning, neural activity in the neocortex desynchronised and then, around 150 milliseconds later, neural activity in the hippocampus synchronised. Seemingly, information about the sensory details of the stimuli was first being processed by the neocortex, before being passed to the hippocampus to be glued together.
Fascinatingly, this pattern reversed during retrieval – neural activity in the hippocampus first synchronised and then, around 250 milliseconds later, neural activity in the neocortex desynchronised. This time, it appeared that the hippocampus first recalled a gist of the memory and then began to ask the neocortex for the specifics.
Our findings support a recent theory which suggests that a desynchronised neocortex and synchronised hippocampus need to interact to form and recall memories.
While brain stimulation has become a promising method for boosting our cognitive facilities, it has proved difficult to stimulate the hippocampus to improve long-term memory. The key problem has been that the hippocampus is located deep within the brain and is difficult to reach with brain stimulation that is applied from the scalp. But the findings from this study present a new possibility. By stimulating the regions in the neocortex that communicate with the hippocampus, perhaps the hippocampus can be indirectly pushed to create new memories or recall old ones.
Understanding more about the way the hippocampus and neocortex work together when forming and recalling memories could be important for further developing new technologies that could help improve memory for those suffering from cognitive impairments such as dementia, as well as boosting memory in the population at large.