https://www.wired.com/story/googles-head-quantum-computing-hardware-resigns/

Google’s Head of Quantum Computing Hardware Resigns

John Martinis brought a long record of quantum computing breakthroughs when he joined Google in 2014. He quit after being reassigned to an advisory role.
Google in September said it demonstrated quantum supremacy, when its quantum computer solved a math problem that would have taken a conventional computer more than 10,000 years. PHOTOGRAPH: GOOGLE/REUTERS

IN LATE OCTOBER 2019, Google CEO Sundar Pichai likened the latest result from the company’s quantum computing hardware lab in Santa Barbara, California, to the Wright brothers’ first flight.

One of the lab’s prototype processors had achieved quantum supremacy—evocative jargon for the moment a quantum computer harnesses quantum mechanics to do something seemingly impossible for a conventional computer. In a blog post, Pichai said the milestone affirmed his belief that quantum computers might one day tackle problems like climate change, and the CEO also name-checked John Martinis, who had established Google’s quantum hardware group in 2014.

Here’s what Pichai didn’t mention: Soon after the team first got its quantum supremacy experiment working a few months earlier, Martinis says, he had been reassigned from a leadership position to an advisory one. Martinis tells WIRED that the change led to disagreements with Hartmut Neven, the longtime leader of Google’s quantum project.

Martinis resigned from Google early this month. “Since my professional goal is for someone to build a quantum computer, I think my resignation is the best course of action for everyone,” he says.

A Google spokesman did not dispute this account, and says that the company is grateful for Martinis’ contributions and that Neven continues to head the company’s quantum project. Parent company Alphabet has a second, smaller, quantum computing group at its X Labs research unit. Martinis retains his position as a professor at UC Santa Barbara, which he held throughout his tenure at Google, and says he will continue to work on quantum computing.

Google’s quantum computing project was founded in 2006 by Neven, who pioneered Google’s image search technology, and initially focused on software. To start, the small group accessed quantum hardware from Canadian startup D-Wave Systems, including in collaboration with NASA.


The project took on greater scale and ambition when Martinis joined in 2014 to establish Google’s quantum hardware lab in Santa Barbara, bringing along several members of his university research group. His nearby lab at UC Santa Barbara had produced some of the most prominent work in the field over the past 20 years, helping to demonstrate the potential of using superconducting circuits to build qubits, the building blocks of quantum computers.

Qubits are analogous to the bits of a conventional computer, but in addition to representing 1s and 0s, they can use quantum mechanical effects to attain a third state, dubbed a superposition, something like a combination of both. Qubits in superposition can work through some very complex problems, such as modeling the interactions of atoms and molecules, much more efficiently than conventional computer hardware.
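That mathematics can be made concrete with a tiny state-vector simulation. The sketch below is purely illustrative (it is not how Google's hardware is programmed): a qubit's state is a two-component complex vector of amplitudes, and a Hadamard gate puts it into an equal superposition.

```python
import numpy as np

# Illustrative state-vector simulation of a single qubit. A qubit's state
# is a 2-component complex vector of amplitudes for the basis states |0> and |1>.
ket0 = np.array([1, 0], dtype=complex)  # qubit prepared in |0>

# The Hadamard gate rotates |0> into an equal superposition of |0> and |1>.
H = np.array([[1, 1], [1, -1]], dtype=complex) / np.sqrt(2)
psi = H @ ket0

# Measurement probabilities are the squared magnitudes of the amplitudes:
# an equal 50/50 chance of reading out 0 or 1.
probs = np.abs(psi) ** 2
print(probs)  # -> [0.5 0.5]
```

Real qubits additionally exploit entanglement across many such states, which is what makes simulating them on conventional hardware so costly.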

How useful that is depends on the number and reliability of qubits in your quantum computing processor. So far the best demonstrations have used only tens of qubits, a far cry from the hundreds or thousands of high-quality qubits experts believe will be needed to do useful work in chemistry or other fields. Google’s supremacy experiment used 53 qubits working together. They took minutes to crunch through a carefully chosen math problem, one the company calculated would take a supercomputer on the order of 10,000 years but that has no practical application.

Martinis leaves Google as the company and its rivals in quantum computing face crucial questions about the technology’s path. Amazon, IBM, and Microsoft, as well as Google, offer their prototype technology to companies such as Daimler and JP Morgan so they can run experiments. But those processors are not large enough to work on practical problems, and it is not clear how quickly they can be scaled up.

When WIRED visited Google’s quantum hardware lab in Santa Barbara last fall, Martinis responded optimistically when asked if his hardware team could see a path to making the technology practical. “I feel we know how to scale up to hundreds and maybe thousands of qubits,” he said at the time. Google will now have to do it without him.

https://medicalxpress.com/news/2020-04-machine-algorithm-brain-computer-interfaces-recalibration.html

New machine learning algorithm reduces need for brain-computer interfaces to undergo recalibration


Researchers from Carnegie Mellon University (CMU) and the University of Pittsburgh (Pitt) have published research in Nature Biomedical Engineering that will drastically improve brain-computer interfaces and their ability to remain stabilized during use, greatly reducing or potentially eliminating the need to recalibrate these devices during or between experiments.

Brain-computer interfaces (BCIs) are devices that enable individuals with motor disabilities such as paralysis to control prosthetic limbs, computer cursors, and other interfaces using only their minds. One of the biggest problems facing BCIs used in a clinical setting is instability in the neural recordings themselves. Over time, the signals picked up by a BCI can vary, and as a result of this variation an individual can lose the ability to control their BCI.

As a result of this loss of control, researchers ask the user to go through a recalibration session which requires them to stop what they’re doing and reset the connection between their mental commands and the tasks being performed. Typically, another human technician is involved just to get the system to work.

“Imagine if every time we wanted to use our computer mouse, to get it to work correctly, we had to somehow calibrate the screen so it knew what part of the screen we were pointing at,” says William Bishop, who was previously a Ph.D. student and postdoctoral fellow in the Department of Machine Learning at CMU and is now a fellow at Janelia Farm Research Campus. “The current state of the art in BCI technology is sort of like that. Just to get these BCI devices to work, users have to do this frequent recalibration. So that’s extremely inconvenient for the users, as well as the technicians maintaining the devices.”

The paper, “A stabilized brain-computer interface based on neural manifold alignment,” presents a machine learning algorithm that accounts for these varying signals and allows the individual to continue controlling the BCI in the presence of these instabilities. By leveraging the finding that neural population activity resides in a low-dimensional “neural manifold,” the researchers can stabilize neural activity to maintain good BCI performance in the presence of recording instabilities.
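The general flavor of manifold-based stabilization can be sketched in a few lines. The toy example below is our illustration of the broad idea, not the authors' published algorithm: the same low-dimensional latent signal is recorded through two different channel mappings (standing in for recording instability), each recording is projected onto its principal subspace, and an orthogonal Procrustes rotation aligns the second subspace to the first.

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated neural recordings: one 2-D latent signal seen through two
# different channel mappings, standing in for recording instability
# (e.g. the electrodes picking up different neurons on different days).
latent = rng.standard_normal((500, 2))          # 500 time points, 2 latent dims
day1 = latent @ rng.standard_normal((2, 20))    # 20 recorded channels, day 1
day2 = latent @ rng.standard_normal((2, 20))    # mapping has changed by day 2

def manifold_scores(X, k=2):
    """Project a recording onto its top-k principal subspace (the 'manifold')."""
    X = X - X.mean(axis=0)
    U, _, _ = np.linalg.svd(X, full_matrices=False)
    return U[:, :k]                             # whitened low-dimensional scores

z1 = manifold_scores(day1)
z2 = manifold_scores(day2)

# Orthogonal Procrustes: find the rotation that best maps the day-2 scores
# onto the day-1 reference frame, so a decoder fit on day 1 keeps working.
U, _, Vt = np.linalg.svd(z2.T @ z1)
z2_aligned = z2 @ (U @ Vt)

# After alignment the two views of the same latent signal agree closely.
corr = np.corrcoef(z1.ravel(), z2_aligned.ravel())[0, 1]
print(round(corr, 3))  # -> 1.0
```

In this noiseless toy the alignment is essentially exact; the paper's contribution is making this kind of alignment work on real, noisy recordings without requiring the subject to perform well during recalibration.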

“When we say ‘stabilization,’ what we mean is that our neural signals are unstable, possibly because we’re recording from different neurons across time,” explains Alan Degenhart, a postdoctoral researcher in electrical and computer engineering at CMU. “We have figured out a way to take different populations of neurons across time and use their information to essentially reveal a common picture of the computation that’s going on in the brain, thereby keeping the BCI calibrated despite neural instabilities.”

The researchers aren’t the first to propose a method for self-recalibration; the problem of unstable neural recordings has been recognized for a long time. A few studies have proposed self-recalibration procedures, but those struggle when the instabilities are severe. The method presented in this paper is able to recover from catastrophic instabilities because it doesn’t rely on the subject performing well during the recalibration.

“Let’s say that the instability were so large such that the subject were no longer able to control the BCI,” explains Byron Yu, a professor of electrical and computer engineering and biomedical engineering at CMU. “Existing self-recalibration procedures are likely to struggle in that scenario, whereas in our method, we’ve demonstrated it can in many cases recover from those catastrophic instabilities.”

“Neural recording instabilities are not well characterized, but it’s a very large problem,” says Emily Oby, a postdoctoral researcher in neurobiology at Pitt. “There’s not a lot of literature we can point to, but anecdotally, a lot of the labs that do clinical research with BCI have to deal with this issue quite frequently. This work has the potential to greatly improve the clinical viability of BCIs, and to help stabilize other neural interfaces.”

Other authors on the paper include CMU’s Steve Chase, professor of biomedical engineering and the Neuroscience Institute, and Pitt’s Aaron Batista, associate professor of bioengineering, and Elizabeth Tyler-Kabara, associate professor of neurological surgery. This research was funded by the Craig H. Neilsen Foundation, the National Institutes of Health, the DSF Charitable Foundation, the National Science Foundation, the PA Department of Health, and the Simons Foundation.




More information: Stabilization of a brain–computer interface via the alignment of low-dimensional spaces of neural activity, Nature Biomedical Engineering (2020). DOI: 10.1038/s41551-020-0542-9 , https://www.nature.com/articles/s41551-020-0542-9


https://www.tomshardware.com/news/windows-raspberry-pi-xp-linux-raspbian-professional

New Raspberry Pi OS Looks Like Windows XP


While you can’t quite have the full Windows XP experience on a Raspberry Pi, this Linux Raspbian XP Professional operating system (OS) from Pi Lab definitely gets close. It’s designed to run on the Raspberry Pi 4, the only model powerful enough to handle it.

Linux Raspbian XP Professional comes with a number of features that are reminiscent of the old XP OS. It has a working Start Menu complete with a usable search bar at the top. All of the menus, icons and taskbars have the classic bubbly XP look. Pi Lab even included the complete LibreOffice suite in lieu of Microsoft Office.

Since this is Raspbian with an XP overlay, you won’t be able to run XP applications as-is. It is possible to run Windows software from that era, however. You just need the right emulator. If you want to run a native Windows application, you can use the built-in Windows 98 virtual machine.

The OS is preloaded with several emulation platforms, like BOX86, that can run old PC games. You can also take advantage of other emulators, such as DOSBox, Mupen64 and MAME (here’s how to run emulators on Raspberry Pi 4). By connecting a USB controller, the whole system doubles as a retro gaming console.

This is still a work in progress, so expect a few updates in the future. In the meantime, check out the current build and see what it’s all about. You can visit the official Pi Lab channel on YouTube for installation details and new editions.


https://phys.org/news/2020-04-photonic-microwave-on-chip-optical-frequency.html

Photonic microwave generation using on-chip optical frequency combs

Photograph of the silicon nitride photonic chips used for frequency comb and photonic microwave generation. Credit: Junqiu Liu and Jijun He (EPFL)

In our information society, the synthesis, distribution, and processing of radio and microwave signals are ubiquitous in wireless networks, telecommunications, and radars. The current tendency is to use carriers in higher frequency bands, especially with looming bandwidth bottlenecks due to demands for, for example, 5G and the “Internet of Things.” ‘Microwave photonics,’ a combination of microwave engineering and optoelectronics, might offer a solution.

A key building block of microwave photonics is optical frequency combs, which provide hundreds of equidistant and mutually coherent laser lines. They are ultrashort optical pulses emitted with a stable repetition rate that corresponds precisely to the frequency spacing of the comb lines. The photodetection of the pulses produces a microwave carrier.
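That last point can be illustrated with a back-of-the-envelope simulation. The numbers below are illustrative and not taken from the paper: a photodetector sees the intensity of a pulse train repeating at 10 GHz, and the strongest tone in its spectrum lands exactly at the repetition rate.

```python
import numpy as np

# Toy model: a fast photodetector sees the intensity of the optical pulse
# train. Pulses repeating every 1/f_rep seconds photodetect to a signal whose
# spectrum contains tones at f_rep and its harmonics -- the microwave carrier.
f_rep = 10e9                       # 10 GHz repetition rate (X-band)
fs = 200e9                         # sampling rate of the simulation
n = 4000                           # 20 ns window (200 pulses)
t = np.arange(n) / fs

# Intensity of a short-pulse train: a narrow Gaussian every repetition period.
phase = (t * f_rep) % 1.0
intensity = np.exp(-((phase - 0.5) ** 2) / (2 * 0.02**2))

# The strongest non-DC spectral line sits at the repetition rate.
spectrum = np.abs(np.fft.rfft(intensity - intensity.mean()))
freqs = np.fft.rfftfreq(n, 1 / fs)
carrier = freqs[np.argmax(spectrum)]
print(round(carrier / 1e9, 3))     # -> 10.0 (the microwave carrier, in GHz)
```

This is why lowering the comb's repetition rate into the 10-20 GHz range, as the EPFL team did, directly yields carriers in the radar and 5G bands.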

In recent years there has been significant progress on chip-scale frequency combs generated from nonlinear microresonators driven by continuous-wave lasers. These frequency combs rely on the formation of dissipative Kerr solitons, which are ultrashort coherent light pulses circulating inside optical microresonators. Because of this, these frequency combs are commonly called ‘soliton microcombs.’

Generating soliton microcombs requires nonlinear microresonators, and these can be directly built on-chip using CMOS nanofabrication technology. Co-integration with electronic circuitry and integrated lasers paves the way to comb miniaturization, allowing a host of applications in metrology, spectroscopy and communications.

Publishing in Nature Photonics, an EPFL research team led by Tobias J. Kippenberg has now demonstrated integrated soliton microcombs with repetition rates as low as 10 GHz. This was achieved by significantly lowering the optical losses of integrated photonic waveguides based on silicon nitride, a material already used in CMOS micro-electronic circuits, and which has also been used in the last decade to build photonic integrated circuits that guide laser light on-chip.

The scientists were able to manufacture silicon nitride waveguides with the lowest loss of any photonic integrated circuit. Using this technology, the generated coherent soliton pulses have repetition rates in both the microwave K-band (~20 GHz, used in 5G) and X-band (~10 GHz, used in radar).

The resulting microwave signals feature phase noise properties on par with or even lower than commercial electronic microwave synthesizers. The demonstration of integrated soliton microcombs at microwave repetition rates bridges the fields of integrated photonics, nonlinear optics and microwave photonics.

The EPFL team achieved a level of optical losses low enough to allow light to propagate nearly 1 meter in a waveguide that is only 1 micrometer in diameter, or about 100 times smaller than a human hair. This loss level is still more than three orders of magnitude higher than the value in optical fibers, but represents the lowest loss in any tightly confining waveguide for integrated nonlinear photonics to date.

Such low loss is the result of a new manufacturing process developed by EPFL scientists—the ‘silicon nitride photonic Damascene process.’ “This process, when carried out using deep-ultraviolet stepper lithography, gives truly spectacular performance in terms of low loss, which is not attainable using conventional nanofabrication techniques,” says Junqiu Liu, the paper’s first author who also leads the fabrication of silicon nitride nanophotonic chips at EPFL’s Center of MicroNanoTechnology (CMi). “These microcombs, and their low-noise microwave signals, could be critical elements for building fully integrated low-noise microwave oscillators for future architectures of radars and information networks.”

The EPFL team is already working with collaborators in the US to develop hybrid-integrated soliton microcomb modules that combine chip-scale semiconductor lasers. These highly compact microcombs can impact many applications—e.g. transceivers in datacenters, LiDAR, compact optical atomic clocks, optical coherence tomography, and spectroscopy.




More information: Photonic microwave generation in the X- and K-band using integrated soliton microcombs, Nature Photonics (2020). DOI: 10.1038/s41566-020-0617-x , https://www.nature.com/articles/s41566-020-0617-x


https://www.creativebloq.com/news/mac-pro-wheels-kit

Apple’s New Mac Pro Wheels Kit is utterly mind-boggling

It’s a testament to the ridiculousness of Apple’s newly-released Mac Pro Wheels Kit that, in a world where everything is ridiculous right now, it still seems really, really ridiculous. Sure, the wheels might be made of “custom-designed stainless steel and rubber”. Sure, they might “make it easy to move your Mac Pro around.” But for $699? Ridiculous.

The kit, which appeared on Apple’s website last week, includes four wheels, an installation guide (which we hope is printed on very high quality paper) and a 1/4-inch to 4 mm hex bit (a tool, we assume – no doubt solid gold). Adding insult to expensive injury, Apple notes that “additional tools are necessary” for the installation of the wheels. Sure, the Mac Pro is an extremely powerful machine (which may well enter our list of the best computers for graphic design) but… $699… for wheels.

The wheels were already available as an add-on when purchasing the Mac Pro. We were shocked back then that, at $100 per wheel, they added $400 to the price of an already eye-wateringly expensive machine. But now that the four wheels are available as a separate purchase for $699 ($299 more than adding them when you buy the machine), we find ourselves compelled to ask: is Apple okay? If anyone is able to explain the extra cost for buying them separately, we’re all ears. Apple doesn’t even appear to have reinvented the wheels – which means your Mac Pro could still be prone to wandering off thanks to the absence of brakes. It goes without saying that we’re not alone in our surprise about the price:

QuaranTaco@PaulLovesTacos

Why would I need a Mac Pro, and more quizzically, why would I need a Mac Pro on wheels?

QuaranTaco@PaulLovesTacos

I just priced a fully decked out Mac Pro and it is the same price as a new one ton diesel pickup truck. Which is probably why it needs wheels. So you can ride it around and feel like you’re getting your money’s worth.


We accept, of course, that the Mac Pro isn’t aimed at most creatives – it’s more likely to be found inside a high-powered production suite than your average apartment, but Apple is definitely on a roll when it comes to expensive accessories. You could bag yourself a new iPad for the same price as the $399 Magic Keyboard for the 12.9-inch iPad Pro, for example. And if you did originally opt for wheels with the Mac Pro and would prefer a set of feet instead, Apple has also released a new Feet Kit for $299. Bargain.


The Mac Pro (with feet, not wheels) (Image credit: Apple)

If the price isn’t a barrier, there’s no denying that the Mac Pro is one hell of a powerful machine for creatives. But if you’re looking for a more portable (and affordable) powerhouse, check out the best MacBook Pro deals.

https://physicsworld.com/a/peeking-inside-our-brains-can-mri-quantify-axonal-features/

Peeking inside our brains: can MRI quantify axonal features?

20 Apr 2020 Irina Grigorescu 
Confocal microscopy and dMRI

An international team of researchers has established a way to non-invasively measure the radii of axons, fine nerve fibres in the brain, using diffusion MRI (dMRI). Their proposed method showed good agreement with histological studies in both rodents and humans, and outperformed previous techniques in which reported measurements were an order of magnitude larger than histologically-derived axonal sizes (eLife 10.7554/eLife.49855).

Axons are the wire-like protrusions of a neuronal cell, involved in conducting electrical impulses away from the cell body. They are microscopic in diameter and, as bundles, comprise the primary form of communication of the nervous system. Clinical and histological studies have shown that axon radii can range from 0.1 µm to more than 3 µm in the human brain, and that this size, along with myelination, is responsible for the speed of neuronal communication.

In addition, clinical studies of neurodegenerative diseases, such as multiple sclerosis, have revealed preferential damage to smaller axons, while an electron microscopy study involving subjects with autism spectrum disorder showed a significant difference in axon size distribution compared with healthy controls.

Non-invasive axon radii quantification…

Clearly, accurate quantification of axon radius is an important neuroimaging biomarker. This, however, has proven to be a highly challenging task to perform non-invasively. Nevertheless, a team of researchers from the Champalimaud Centre for the Unknown in Portugal, NYU Grossman School of Medicine in the USA and the Cardiff University Brain Research Imaging Centre (CUBRIC) in the UK, set out to achieve just that.

The team used dMRI, a non-invasive and non-ionizing imaging modality that measures the random motion of water molecules in tissues, revealing details of the tissue microarchitecture. The key novelty of the approach is the researchers’ proposed method to separate MRI signals originating from different compartments of the probed tissue. More specifically, they modelled how the water signal behaves in different tissue types, and thereby managed to suppress signal arising from surrounding tissue outside the axons. In doing so, they could more accurately quantify the properties of the axons in the probed brain tissue samples.
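The compartment-suppression idea can be illustrated with a toy two-compartment signal model. This is a generic sketch with made-up numbers, not the authors' actual model: the measured signal is a sum of compartments that decay differently with the diffusion weighting b, so at strong weighting the fast-decaying extra-axonal contribution is suppressed and the slowly decaying intra-axonal signal dominates.

```python
import numpy as np

# Toy two-compartment diffusion MRI signal (illustrative values only):
# intra-axonal water is highly restricted (low apparent diffusivity),
# extra-axonal water diffuses more freely (high apparent diffusivity).
b = np.linspace(0, 10, 6)        # diffusion weightings (ms/um^2)
D_extra, D_intra = 2.0, 0.1      # apparent diffusivities (um^2/ms)
f_intra = 0.6                    # intra-axonal signal fraction

signal = f_intra * np.exp(-b * D_intra) + (1 - f_intra) * np.exp(-b * D_extra)

# At the strongest weighting, essentially all remaining signal is
# intra-axonal, which is what allows axon properties to be probed.
intra_share = f_intra * np.exp(-b[-1] * D_intra) / signal[-1]
print(round(float(intra_share), 3))
```

In the toy model the intra-axonal share of the strongest-weighted measurement is effectively 100%, which mirrors the paper's strategy of suppressing the signal from tissue outside the axons before estimating axonal properties.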

…shows unprecedented agreement with histology

To validate their work, the researchers first tested their model on rodents using high-resolution (100 x 100 x 850 µm) dMRI images acquired on a 16.4T MR scanner (Bruker BioSpin). After scanning, they collected 50 µm-thick slices from two rat brains, corresponding to the imaged volume, and stained them to highlight the axons. Finally, they used confocal microscopy to visualize and analyse the tissue slices. Their results showed that the median MR-derived effective axon radius was 3–13% larger than the median radius derived from histology, an error that could be due to shrinkage of the filaments through staining.

Histological validation

The second part of the study focused on human subjects, who were imaged at lower resolution (3 x 3 x 3 mm) on a Siemens Connectom 3T MR scanner. Here, the team used histological values reported in the literature for comparison. Despite the lower resolution, the researchers showed that their method was able to estimate known axonal sizes with errors an order of magnitude smaller than those in previous MRI studies.

The researchers conclude that their study revealed “a realistic perspective on MR axon radius mapping by showing MR-derived effective radii that have good quantitative agreement with histology”. Thinking about the next steps, first author Jelle Veraart adds: “The non-invasive quantification of axon diameters using MRI allows clinicians and researchers to identify problems and developmental pathways that arise in the depths of the brain, driving forward treatment and understanding of development and disease progression.”

https://phys.org/news/2020-04-self-aligning-microscope-limits-super-resolution-microscopy.html

Self-aligning microscope smashes limits of super-resolution microscopy

A T cell with precise localisation of T cell receptors (pink) and CD45 phosphatase (green). Credit: Single Molecule Science

An ultra-precise microscope that surpasses the limitations of Nobel Prize-winning super-resolution microscopy will let scientists directly measure distances between individual molecules.

UNSW researchers have achieved unprecedented resolution capabilities in single-molecule microscopy to detect interactions between individual molecules within intact cells.

The 2014 Nobel Prize in Chemistry was awarded for the development of super-resolution fluorescence microscopy technology that afforded microscopists the first molecular view inside cells, a capability that has provided new molecular perspectives on complex cellular structures and processes.

Now the limit of detection of single-molecule microscopes has been smashed again, and the details are published in the current issue of Science Advances.

While individual molecules could be observed and tracked with super-resolution microscopy already, interactions between these molecules occur at a scale at least four times smaller than that resolved by existing single-molecule microscopes.

“The reason why the localisation precision of single-molecule microscopes is normally around 20-30 nanometres is because the molecule actually moves while we’re detecting that signal. This leads to an uncertainty. With the existing super-resolution instruments, we can’t tell whether or not one protein is bound to another protein because the distance between them is shorter than the uncertainty of their positions,” says Scientia Professor Katharina Gaus, research team leader and Head of UNSW Medicine’s EMBL Australia Node in Single Molecule Science.

To circumvent this problem, the team built autonomous feedback loops inside a single-molecule microscope that detects and re-aligns the optical path and stage.

“It doesn’t matter what you do to this microscope, it basically finds its way back with precision under a nanometre. It’s a smart microscope. It does all the things that an operator or a service engineer needs to do, and it does that 12 times per second,” says Professor Gaus.
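The principle of such a closed feedback loop can be sketched with a toy proportional controller. This is our illustration of the general idea, not the UNSW team's actual control scheme: the stage drifts randomly between correction cycles, and on every cycle the loop measures the misalignment (with some sensor noise) and steers most of it back out, so the residual error stays bounded instead of random-walking away.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy proportional feedback loop for stage re-alignment (illustrative values).
gain = 0.8                # fraction of the measured error corrected per cycle
drift_per_cycle = 2.0     # nm of random drift accumulated between corrections
cycles = 1000

position = 0.0            # nm offset from the aligned optical path
history = []
for _ in range(cycles):
    position += rng.normal(0.0, drift_per_cycle)  # uncontrolled drift
    measured = position + rng.normal(0.0, 0.1)    # noisy position readout
    position -= gain * measured                   # steer back toward zero
    history.append(position)

# With the loop closed, the residual misalignment stays at the sub-nanometre
# to nanometre scale rather than growing without bound.
print(round(float(np.std(history[100:])), 2))
```

Without the correction step the same drift would accumulate as a random walk, reaching tens of nanometres over the run; closing the loop many times per second is what keeps the alignment at the scale the article describes.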

Measuring the distance between proteins

With the design and methods outlined in the paper, the feedback system designed by the UNSW team is compatible with existing microscopes and affords maximum flexibility for sample preparation.

“It’s a really simple and elegant solution to a major imaging problem. We just built a microscope within a microscope, and all it does is align the main microscope. That the solution we found is simple and practical is a real strength as it would allow easy cloning of the system, and rapid uptake of the new technology,” says Professor Gaus.

To demonstrate the utility of their ultra-precise feedback single-molecule microscope, the researchers used it to perform direct distance measurements between signalling proteins in T cells. A popular hypothesis in cellular immunology is that these immune cells remain in a resting state when the T cell receptor is next to another molecule that acts as a brake.

Their high precision microscope was able to show that these two signalling molecules are in fact further separated from each other in activated T cells, releasing the brake and switching on T cell receptor signalling.

“Conventional microscopy techniques would not be able to accurately measure such a small change as the distance between these signalling molecules in resting T cells and in activated T cells only differed by 4–7 nanometres,” says Professor Gaus.

“This also shows how sensitive these signalling machineries are to spatial segregation. In order to identify regulatory processes like these, we need to perform precise distance measurements, and that is what this microscope enables. These results illustrate the potential of this technology for discoveries that could not be made by any other means.”

Postdoctoral researcher, Dr. Simao Pereira Coelho, together with Ph.D. student Jongho Baek—who has since been awarded his Ph.D. degree—led the design, development, and building of this system. Dr. Baek also received the Dean’s Award for Outstanding Ph.D. Thesis for this work.




More information: Simao Coelho et al. Ultraprecise single-molecule localization microscopy enables in situ distance measurements in intact cells, Science Advances (2020). DOI: 10.1126/sciadv.aay8271


https://www.hindustantimes.com/tech/mozilla-firefox-users-need-to-update-the-browser-immediately-cert-in-alerts/story-ldYzJk3yOg8QUOZJU5dFbM.html

Mozilla Firefox users need to update the browser immediately: CERT-in alerts

Indian Computer Emergency Response Team (CERT-In) has issued an advisory alerting users about the vulnerabilities in the Mozilla Firefox internet browser

TECH Updated: Apr 18, 2020 19:49 IST

HT Correspondent

Hindustan Times, New Delhi

The Indian Computer Emergency Response Team (CERT-In) has issued an advisory alerting Mozilla Firefox users about multiple vulnerabilities in the internet browser and has asked that they update it immediately.

The CERT-In advisory states that these browser vulnerabilities can be exploited by remote attackers to obtain sensitive information via the browser and execute arbitrary code on the targeted system.

CERT-In has rated the severity as ‘High’. All Mozilla Firefox browsers prior to version 75 and Mozilla Firefox ESR versions prior to 68.7 are affected. The advisory thus recommends that everyone update their browser to the latest version immediately.

“Out-of-Bounds Read Vulnerability in Mozilla Firefox (CVE-2020-6821). This vulnerability exists in Mozilla Firefox due to a boundary condition when using the WebGL copyTexSubImage method. A remote attacker could exploit this vulnerability via specially crafted web pages. Successful exploitation of this vulnerability could allow a remote attacker to disclose sensitive information,” the advisory said.

According to reports, another vulnerability exists in Mozilla Firefox due to a boundary condition in GMPDecodeData while processing images larger than 4GB on 32-bit builds. A remote attacker can exploit this vulnerability with specially crafted images, tricking the victim into opening one. If this vulnerability is successfully exploited, it could allow an attacker to “execute arbitrary code on the target system”.

A remote attacker can also exploit another vulnerability by “persuading a victim to install a crafted extension. Successful exploitation of this vulnerability could allow a remote attacker to disclose sensitive information”.

“Information Disclosure Vulnerability in Mozilla Firefox (CVE-2020-6824). This vulnerability exists when Mozilla Firefox is used to generate a password for a site and Firefox is left open. A remote attacker could exploit this vulnerability when the victim revisits the same site and generates a new password: the generated password will remain the same on the targeted system,” the advisory added.

Other vulnerabilities include ‘Buffer Overflow Vulnerability in Mozilla Firefox (CVE-2020-6825)’ and ‘Memory Corruption Vulnerability in Mozilla Firefox (CVE-2020-6826)’.

https://www.cnet.com/how-to/how-to-turn-your-amazon-echo-into-a-free-tv-speaker/

How to turn your Amazon Echo into a free TV speaker

Why spend money on a new soundbar or speaker system when you can use a device you already own?

Katie Conner
If you have an Amazon Echo, you have all the extra TV speaker you really need.

David Katzmaier/CNET

We’ve all done it — there’s an exceptionally quiet scene in a movie and you turn your TV volume all the way up just to hear what the hushed actors are saying. Or maybe you crank it up to feel the thrill of a car chase. Those are signs you need a better speaker for your TV. Fortunately, your Amazon Echo device is likely compatible with your smart TV, so you don’t have to worry about damaging your eardrums when the next loud scene comes on if your TV speakers just aren’t up to snuff.

The best spot to place your Echo is on a side table so you can hear what’s going on without maxing out the volume. Keep in mind that your Echo speaker is only compatible with smart TVs that have Bluetooth capabilities. However, you can also connect your Echo to a Fire TV if you don’t have a smart TV.



To get started, place your Echo device near the TV you’ll be connecting to and make sure both are plugged in and turned on. Now say, “Alexa, connect” — the voice assistant will start checking for devices to connect to. On your smart TV, navigate to the Bluetooth settings and find the Echo speaker you want to connect. For example, I might see “Katie’s Echo Dot” or “Living Room Echo Plus.” It’s the same process if you have a Fire TV.

Note that all smart TVs are different, so the setup may not be exactly the same for you, though it should be close. For example, the TV I used is a Vizio and has an option for Amazon Alexa that explained I needed to download the Vizio SmartCast skill to pair the two devices. When you’re ready to disconnect the speaker from the TV, just say, “Alexa, unpair.”

If you tried this at home and the setup was different for your TV, let us know in the comments. For more tips on connecting your smart speakers, check out how to simultaneously stream music across all your Amazon Echo devices, how to control your Fire TV with Amazon Echo, and 6 Amazon Echo settings you won’t regret changing.

https://bigthink.com/mind-brain/insightful-ideas-can-trigger-orgasmic-brain-signals-finds-study?rebelltitem=4#rebelltitem4

Insightful ideas can trigger orgasmic brain signals, finds study

Research shows how “aha moments” affect the brain and cause the evolution of creativity.

Maps of high-frequency “gamma” EEG activity on head models.

Credit: Drexel University
  • A new psychology study shows that some people have increased brain sensitivity for “aha moments.”
  • The researchers scanned the brains of participants and noticed orgasm-like signals during insights.
  • The scientists think this evolutionary adaptation drives the creation of science and culture.

Coming up with a great insight can cause pleasure similar to an orgasm, researchers find. The eureka moment triggers neural reward signals that can flood some people with pleasure, suggesting it’s an evolutionary adaptation that fuels the growth of creativity.

A recent neuroimaging study from Drexel University discovered that the brain reward systems of people with higher “reward sensitivity” ratings showed bursts of “gamma” EEG activity when they had creative insights. This signal is similar to those caused by pleasure-inducing experiences like orgasms, great food, or drinks that quench thirst.

In carrying out the study, the scientists employed high-density electroencephalograms (EEGs) to track the brain activity of participants who were solving anagram puzzles. The subjects were required to unscramble letters in order to figure out a hidden word. When they had an aha moment of insight and figured out the solution, participants pressed a button as the EEG captured a snapshot of their brain activity.
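The key measurement here is a burst of high-frequency “gamma” activity (roughly 30–80 Hz) at the moment of insight. As a rough illustration of how such a burst shows up in an EEG trace, the sketch below compares the fraction of signal power in the gamma band for a synthetic signal with and without a brief 40 Hz burst. This is not the study’s analysis pipeline; the function name, band edges, and synthetic signal are assumptions for the example:

```python
import numpy as np

def gamma_band_power(signal, fs, low=30.0, high=80.0):
    """Fraction of total spectral power in the gamma band (low-high Hz),
    estimated from the FFT of the signal."""
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    spectrum = np.abs(np.fft.rfft(signal)) ** 2
    band = (freqs >= low) & (freqs <= high)
    return spectrum[band].sum() / spectrum.sum()

fs = 250                          # sampling rate in Hz
t = np.arange(0, 2, 1 / fs)       # 2 seconds of data
alpha = np.sin(2 * np.pi * 10 * t)                 # background 10 Hz rhythm
burst = 0.5 * np.sin(2 * np.pi * 40 * t) * (t > 1.5)  # 40 Hz burst, last 0.5 s

# The gamma-band fraction jumps when the burst is present:
print(gamma_band_power(alpha, fs), gamma_band_power(alpha + burst, fs))
```

In the study, an analogous increase localized over the reward system is what distinguished the high-reward-sensitivity participants’ aha moments.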


Another part of the study included filling out a questionnaire intended to gauge a person’s “reward sensitivity,” defined by the researchers as “a basic personality trait that reflects the degree to which an individual is generally motivated to gain rewards rather than avoid losing them.”

The scientists found that people scoring high on this rubric had very powerful aha moments. Their brain scans showed an extra burst of high-frequency gamma waves in the reward system’s orbitofrontal cortex.

People who scored low on reward sensitivity didn’t exhibit such bursts. The researchers wrote that these participants noticed their eureka moments but found them “lacking in hedonic content.”

This led the study’s authors, the psychology professor John Kounios and the doctoral candidate Yongtaek Oh, to conclude that some people might seek out activities that can lead to such moments of insight.

“The fact that some people find insight experiences to be highly pleasurable reinforces the notion that insight can be an intrinsic reward for problem solving and comprehension that makes use of the same reward circuitry in the brain that processes rewards from addictive drugs, sugary foods, or love,” wrote the psychologists.

While the researchers think that creativity is not strictly critical to human survival, considering that other species have managed to survive without it, they see its evolutionary value.


“The fact that evolution has linked the generation of new ideas and perspectives to the human brain’s reward system may explain the proliferation of creativity and the advancement of science and culture,” Kounios stated.

You can read the study in NeuroImage.