New technology can help people with hearing loss better understand speech
Future hearing aid users will be able to target their listening more accurately thanks to new Danish technology. A researcher from Aalborg University uses machine learning to teach a computer program how to remove unwanted noise and enhance speech.
One of the main challenges for people with hearing loss is understanding speech in noisy surroundings. The problem is referred to as the cocktail party effect because situations where many people are talking at the same time often make it very hard to distinguish what is being said by the individual you are talking to.
Even though most modern hearing aids incorporate various forms of speech enhancement technology, engineers are still struggling to develop a system that makes a significant improvement.
PhD student Mathew Kavalekalam from the Audio Analysis Lab at Aalborg University is using machine learning to develop an algorithm that enables a computer to distinguish between spoken words and background noise. The project is carried out in collaboration with hearing aid researchers from GN Advanced Science and is supported by Innovation Fund Denmark.
Computer listens and learns
“The hearing center inside our brains usually performs a string of wildly complicated calculations that enables us to focus on a single voice – even if there are many other people talking in the background,” explains Mathew Kavalekalam, Aalborg University. “But that ability is very difficult to recreate in a machine.”
Mathew Kavalekalam started out with a digital model that describes how speech is produced in a human body, from the lungs via throat and larynx, mouth and nasal cavities, teeth, lips, etc.
He used the model to describe the type of signal that a computer should ‘listen’ for when trying to identify a talking voice. He then told the computer to start listening and learning.
Noise isn’t just noise
“Background noise differs depending on the environment, from street or traffic noise if you are outside to the noise of people talking in a pub or a cafeteria,” Mathew Kavalekalam says. “That is one of the many reasons why it is so tricky to build a model for speech enhancement that filters the speech you want to hear from the babbling you are not interested in.”
At Aalborg University, Mathew Kavalekalam played various recordings of talking voices to the computer, gradually adding different types of background noise at increasing levels.
Through this machine learning process, the software developed a way of recognizing the sound patterns and calculating how to enhance the sound of talking voices rather than the background noise.
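The training setup described here, where the same speech material is mixed with noise at progressively higher levels, can be sketched in a few lines. This is an illustrative toy example with synthetic signals, not Kavalekalam's actual algorithm; the `mix_at_snr` helper and the chosen signal-to-noise ratios are assumptions made for demonstration.

```python
import numpy as np

rng = np.random.default_rng(0)

def mix_at_snr(speech, noise, snr_db):
    """Scale the noise so the mixture has the requested signal-to-noise ratio."""
    p_speech = np.mean(speech ** 2)
    p_noise = np.mean(noise ** 2)
    scale = np.sqrt(p_speech / (p_noise * 10 ** (snr_db / 10)))
    return speech + scale * noise

# Toy stand-ins for recorded speech and babble noise (1 second at 8 kHz).
t = np.linspace(0, 1, 8000, endpoint=False)
speech = np.sin(2 * np.pi * 220 * t)   # placeholder "voice"
noise = rng.standard_normal(8000)      # placeholder "babble"

# Training pairs: the same utterance mixed at progressively worse SNRs,
# mirroring how background noise was added at increasing levels.
training_pairs = [(mix_at_snr(speech, noise, snr), speech)
                  for snr in (20, 10, 5, 0, -5)]

for noisy, clean in training_pairs:
    achieved = 10 * np.log10(np.mean(clean ** 2) / np.mean((noisy - clean) ** 2))
    print(f"mixture SNR = {achieved:.1f} dB")
```

A learning algorithm would then be shown each noisy mixture alongside its clean target, so it can infer how to suppress the noise component.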
Fifteen percent improvement
The result of Kavalekalam’s work is a piece of software that can effectively help people with hearing loss better understand speech. It is able to identify and enhance spoken words even in very noisy surroundings.
So far the model has been tested on ten people, who compared speech in background noise with and without the use of Kavalekalam’s algorithm.
The test subjects were asked to perform simple tasks involving colors, numbers, and letters that were described to them in noisy environments.
The results indicate that Kavalekalam may well have developed a promising solution. Test subjects’ speech perception improved by fifteen percent in very noisy surroundings.
Snappy signal processing
However, there is still some work to be done before Mathew Kavalekalam’s software finds its way into new hearing aids. The technology needs to be tweaked and tuned before it is practically applicable.
The algorithm needs to be optimized to take up less processing power. Even though technology keeps getting faster and more powerful, there are hardware limitations in small, modern hearing aids.
“When it comes to speech enhancement, signal processing needs to be really snappy. If the sound is delayed in the hearing aid, it gets out of sync with the mouth movements and that will end up making you even more confused,” explains Mathew Kavalekalam.
Fact Box
– One in six Europeans experiences various degrees of hearing impairment. Almost everyone loses part of their hearing as they age.
– Hearing loss often manifests itself in problems when trying to participate in conversations with more than one person talking. This can lead to isolation as people with hearing loss often choose to withdraw from social gatherings where they have to spend a lot of energy trying to keep up with what is being said.
Some Men Experience Post-Sex Blues On A Regular Basis, Study Finds
This is the first time research has identified the condition among men.
New Australian research has found that like women, men may also suffer from the condition Postcoital Dysphoria (PCD), which can leave them feeling sad, tearful, or irritable after sex.
Carried out by researchers at Queensland University of Technology (QUT), the world-first study anonymously surveyed 1,208 men from various countries including Australia, the U.S.A., the U.K., Russia, New Zealand, and Germany.
The participants’ responses showed that 41 per cent had experienced PCD at some point in their lifetime, with 20 per cent reporting they had experienced it in the previous four weeks.
Three to four per cent suffered from PCD on a regular basis.
Joel Maczkowiack, one of the study’s co-authors, said that comments from the participants described experiences ranging from “I don’t want to be touched and want to be left alone” to “I feel unsatisfied, annoyed and very fidgety. All I really want is to leave and distract myself from everything I participated in.”
“Another described feeling ’emotionless and empty’ in contrast to the men who experienced the postcoital experience positively, and used descriptors such as a ‘feeling of well-being, satisfaction, contentment’ and closeness to their partner,” he said.
Although PCD has been recognized in women, no studies have previously identified the condition among men.
Co-author Professor Robert Schweitzer said the findings now suggest that a man’s experience of sex could be more complex than previously thought.
“The experience of the resolution phase remains a bit of a mystery and is therefore poorly understood,” said Schweitzer. “It is commonly believed that males and females experience a range of positive emotions including contentment and relaxation immediately following consensual sexual activity.”
“Yet previous studies on the PCD experience of females showed that a similar proportion of females had experienced PCD on a regular basis. As with the men in this new study, it is not well understood. We would speculate that the reasons are multifactorial, including both biological and psychological factors.”
In addition, PCD could cause problems for both partners and not just the men who experience it.
“It has, for example, been established that couples who engage in talking, kissing, and cuddling following sexual activity report greater sexual and relationship satisfaction, demonstrating that the resolution phase is important for bonding and intimacy,” said Maczkowiack.
“So the negative affective state which defines PCD has potential to cause distress to the individual, as well as the partner, disrupt important relationship processes, and contribute to distress and conflict within the relationship, and impact upon sexual and relationship functioning.”
Researchers adopt metalens technology in a new endoscopic optical imaging catheter to better detect disease, including cancer. Credit: Harvard University/Massachusetts General Hospital
The diagnosis of diseases of internal organs often relies on biopsy samples collected from affected regions. But collecting such samples is highly error-prone because current endoscopic imaging techniques cannot accurately visualize sites of disease. The conventional optical elements in catheters used to access hard-to-reach areas of the body, such as the gastrointestinal tract and pulmonary airways, are prone to aberrations that limit the full capabilities of optical imaging.
Now, experts in endoscopic imaging at Massachusetts General Hospital (MGH) and pioneers of flat metalens technology at the Harvard John A. Paulson School of Engineering and Applied Sciences (SEAS) have teamed up to develop a new class of endoscopic imaging catheters—termed nano-optic endoscopes—that overcome the limitations of current systems.
The research is described in Nature Photonics.
“Clinical adoption of many cutting-edge endoscopic microscopy modalities has been hampered due to the difficulty of designing miniature catheters that achieve the same image quality as bulky desktop microscopes,” said Melissa Suter, an assistant professor of Medicine at MGH and Harvard Medical School (HMS) and co-senior author of the paper. “The use of nano-optic catheters that incorporate metalenses into their design will likely change the landscape of optical catheter design, resulting in a dramatic increase in the quality, resolution, and functionality of endoscopic microscopy. This will ultimately increase clinical utility by enabling more sophisticated assessment of cell and tissue microstructure in living patients.”
“Metalenses based on flat optics are a game-changing new technology because the control of image distortions necessary for high-resolution imaging is straightforward compared to conventional optics, which requires multiple complex shaped lenses,” said Federico Capasso, the Robert L. Wallace Professor of Applied Physics and Vinton Hayes Senior Research Fellow in Electrical Engineering at SEAS and co-senior author of the paper. “I am confident that this will lead to a new class of optical systems and instruments with a broad range of applications in many areas of science and technology.”
Scanning electron micrograph image of a portion of a fabricated metalens. Credit: Harvard SEAS
“The versatility and design flexibility of the nano-optic endoscope significantly elevates endoscopic imaging capabilities and will likely impact diagnostic imaging of internal organs,” said Hamid Pahlevaninezhad, Instructor in Medicine at MGH and HMS and co-first author of the paper. “We demonstrated an example of such capabilities to achieve high-resolution imaging at greatly extended depth of focus.”
To demonstrate the imaging quality of the nano-optic endoscope, the researchers imaged fruit flesh, swine and sheep airways, and human lung tissue. The team showed that the nano-optic endoscope can image deep into the tissue with significantly higher resolution than provided by current imaging catheter designs.
The images captured by the nano-optic endoscope clearly show cellular structures in fruit flesh and the tissue layers and fine glands in the bronchial mucosa of swine and sheep. In the human lung tissue, the researchers were able to clearly identify structures corresponding to the fine, irregular glands that indicate the presence of adenocarcinoma, the most common type of lung cancer.
“Currently, we are at the mercy of materials that we have no control over to design high resolution lenses for imaging,” said Yao-Wei Huang, a postdoctoral fellow at SEAS and co-first author of the paper. “The main advantage of the metalens is that we can design and tailor its specifications to overcome spherical aberrations and astigmatism and achieve very fine focus of the light. As a result, we achieve very high resolution with extended depth of field without the need for complex optical components.”
Next, the researchers aim to explore other applications for the nano-optic endoscope, including a polarization-sensitive version that could provide contrast between tissues with highly organized structures, such as smooth muscle, collagen, and blood vessels.
The “phantom limb,” a sensation familiar to amputees around the world, may be eased by an electronic skin (e-dermis) being developed by a team of scientists at Johns Hopkins University. The skin will cover prosthetic hands, allowing amputees to feel when they touch objects with a prosthetic limb.
The first volunteer in the research said that “after many years, I felt my hand, as if a hollow shell got filled with life again.”
How Does the E-Dermis Work?
The layer of e-dermis is made of fabric and rubber, and it contains sensors that act as nerve endings. Whatever the sensors feel, they send impulses to the peripheral nerves, creating the feeling of touch and pain.
Luke Osborn, a graduate student in biomedical engineering on the team, explains that the sensor “acts like your own skin would” and that it is inspired by human biology, which has “receptors for both touch and pain.”
He added that a prosthetic hand available on the market could be fitted with an e-dermis that will be able to inform the wearer if they pick up “something that is round or whether it has sharp points.”
The pain response is especially useful, as it can warn the user that the device is at risk of being damaged.
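One way to picture how a sensor layer could tell a round object from a sharp one is by how concentrated the force is across its sensing elements: a sharp point loads a single sensor heavily, while a round surface spreads the load. The sketch below is purely hypothetical and is not the team's neuromorphic model; the `encode_stimulus` function, its thresholds, and the sensor values are invented for illustration.

```python
import numpy as np

def encode_stimulus(pressures, pain_threshold=0.8):
    """Map an array of fingertip sensor pressures (0-1) to a stimulation signal.

    A sharp point concentrates force on few sensors (high peak, low spread);
    a round object distributes it. Toy encoding only, not the e-dermis model.
    """
    peak = float(np.max(pressures))
    spread = float(np.mean(pressures)) / peak if peak > 0 else 0.0
    is_sharp = peak > pain_threshold and spread < 0.5
    # A painful stimulus is signalled with a stronger stimulation pulse.
    level = peak * (2.0 if is_sharp else 1.0)
    return ("pain" if is_sharp else "touch"), level

round_object = np.array([0.5, 0.6, 0.55, 0.6, 0.5])    # force spread evenly
sharp_object = np.array([0.0, 0.05, 0.95, 0.05, 0.0])  # force on one sensor

print(encode_stimulus(round_object))  # classified as touch
print(encode_stimulus(sharp_object))  # classified as pain
```

A signal of this kind could then drive nerve stimulation at an intensity matching the encoded level, which is the general idea behind conveying touch versus pain.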
The research team at Johns Hopkins University brings together members of several departments, including Biomedical Engineering, Electrical and Computer Engineering, and Neurology, along with researchers from the Singapore Institute for Neurotechnology.
A New Type of Prosthetics
Osborn explains that he wants to change the design of prosthetics and allow amputees to regain “meaningful, tactile feedback or perception.”
Nitish Thakor is the senior author of the study. He is also a professor of biomedical engineering and the director of the Johns Hopkins’ Biomedical Instrumentation and Neuroengineering Laboratory. He explains that the e-dermis can make the user feel a light touch or a painful one, stimulating the amputee’s nerves through the skin and making a prosthesis “more like a human hand.”
The e-dermis uses sensors that encode sensations and relay the information to the skin. By analyzing a test subject’s electroencephalography readings, the researchers confirmed that the person could feel those sensations in his prosthetic hand.
After connecting the e-dermis to the volunteer’s skin through transcutaneous electrical nerve stimulation (TENS), the team found that the test subject felt a natural pain response when touching a pointed object and no pain when touching a round one.
In the future, Osborn believes that the e-dermis technology can be used in creating gloves and space suits for astronauts.
Can brain stimulation be used to treat neurodegenerative disorders?
By Assistant Professor Jed Meltzer
Mon., July 30, 2018
Brain stimulation has a surprisingly long history, dating back to the Roman Empire. Back then, the shocks from electric fish were used to treat headaches; luckily for us, the technology has come a long way since then.
Over the last few decades, there’s been an explosion of research into the medical benefits of using electricity to stimulate the brain. An accumulation of promising data points to its effectiveness as a treatment for numerous neurological conditions.
Electrical signals are the main way our brain cells communicate with each other to make decisions and control movements. (THE ALZHEIMER’S ASSOCIATION)
The idea is to make brain cells more or less active in certain areas of the brain to help with mental health and age-related brain diseases. Physicians place a device on your head to introduce extra electrical currents into your brain. This may seem simple, but electrical signals are the main way our brain cells communicate with each other to make decisions and control movements.
Treating depression with brain stimulation
Nowadays, a non-invasive, pain-free and targeted type of brain stimulation, transcranial magnetic stimulation (TMS), has become an established treatment for depression, offered as an option for patients who do not respond to medication.
Psychiatrists have long suspected that depression involves an imbalance of activity between the two halves of the brain — there seemed to be too much activity on the right and not enough on the left. TMS offers the ability to tweak the brain in a precise manner — a little up on the left or a little down on the right. This type of brain stimulation is routinely offered at major hospitals and increasingly at smaller private psychiatric clinics.
As researchers continue to tweak TMS to personalize it for individuals and extend its benefits, others, like myself, work to grow the use of brain stimulation in medicine. Studies have shown promising results in using TMS to treat the symptoms of brain disorders that afflict millions, such as schizophrenia, stroke, Alzheimer’s disease and Parkinson’s disease.
Up to 70 per cent of people living with Alzheimer’s disease also suffer from depression. This has led me and my colleague, Dr. Linda Mah, to study how this type of brain stimulation could help reduce the symptoms of depression and improve a person’s mood and memory in older adults with Alzheimer’s. In the future, this could become a regular treatment option for Alzheimer’s patients, just as it is for depression.
DIY brain stimulation
TMS involves a large and expensive device, but with its recent successes, there is renewed interest in simpler, more portable devices, including a technique known as transcranial direct current stimulation (TDCS). Since TDCS offers greater affordability and ease of use, there has been an increasing amount of research and “do-it-yourself” experimentation in this area.
TDCS has also been around for decades, but only more recently have researchers uncovered evidence that it can cause changes in brain function similar to those of TMS. Researchers have reported that TDCS can improve dozens of cognitive abilities, from remembering lists of words to solving math problems, depending on where in the brain it is applied. It can even alter what decisions people are likely to make when faced with a hypothetical moral dilemma.
Much like TMS, there is encouraging preliminary data for the use of TDCS in treating depression. But as there is still limited evidence, further studies need to be done to evaluate its effectiveness.
The risks and benefits of TDCS as a treatment for disorders and for routine cognitive enhancement remain uncertain. I urge caution for anyone using these products. While experimenters may not fry their brains, they are quite likely to acquire some nasty burns from improper electrode preparation techniques.
The brain is a delicately balanced organ and while non-invasive brain stimulation doesn’t involve surgery, we are still talking about zapping the brain with electricity. Just like any treatment, brain stimulation comes with its own risks. For example, some studies have shown that stimulation can temporarily improve some cognitive abilities while reducing others at the same time.
Scientists are working hard to figure out what works and what doesn’t with brain stimulation and how it changes brain activity. For those who are curious about this technology, I urge them to consider joining well-designed, well-controlled research studies with a large number of participants, overseen by university and hospital ethics boards.
For those seeking relief from a neurological or psychiatric condition, information on ongoing clinical trials is available at the website clinicaltrials.gov. Research studies on cognitive enhancement in healthy individuals are commonly conducted at universities and research hospitals. These organizations frequently maintain databases of interested research participants and can contact you when a new study begins.
The simplest organic acid detected in a protoplanetary disk for the first time
July 30, 2018 by Tomasz Nowakowski, Phys.org report
Panel a: Observed gas-phase t-HCOOH emission integrated over the line profile after applying a Keplerian mask and smoothing to a resolution equivalent to the TW Hya disk size. The contours and level step are at 3σ (where 1σ = 2 mJy beam⁻¹ km s⁻¹ in the smoothed and masked data). Panel b: Same as panel a, but for the model. The synthesized beam is shown in the bottom left corner of panels a and b. Credit: Favre et al., 2018.
Using the Atacama Large Millimeter/submillimeter Array (ALMA), an international team of researchers has detected formic acid in the circumstellar disk of the TW Hydrae system. It is the first discovery of the simplest carboxylic acid in a protoplanetary disk. The finding is reported in a paper published July 16 on arXiv.org.
Located some 194 light years away from the Earth in the constellation of Hydra, TW Hydrae (TW Hya for short) is a T Tauri star less than 10 million years old, with a mass of approximately 0.7 solar masses. The star is assumed to be orbited by an ice giant planet at a distance of some 22 AU from it.
TW Hydrae is known for its gas-rich circumstellar disk with a mass of more than 0.006 solar masses. The disk shows prominent rings and gaps in gas and dust emission, which could indicate ongoing planet formation.
Studying disks such as the one around TW Hydrae could provide essential information about planetary formation processes. It is crucial for astronomers to find out whether, and which, organic molecules are synthesized in protoplanetary disks, as the chemical composition of these structures might shape the properties of the emerging planetary system.
With that aim in mind, a team of astronomers led by Cecile Favre of the Arcetri Observatory in Italy conducted ALMA observations of TW Hydrae in mid-2016, focused on the detection of formic acid (HCOOH). It is the simplest carboxylic acid and key organic molecule as the carboxyl group is one of the main functional groups of amino acids.
The observational campaign resulted in the discovery of the 129 GHz HCOOH line with a signal-to-noise ratio of about 4.0.
“Here, we report the first detection of HCOOH with ALMA towards the protoplanetary disk surrounding the closest solar-type young star TW Hya,” the researchers wrote in the paper.
According to the study, the emission of formic acid in TW Hydrae appears to be centrally peaked, with extension beyond 200 AU. The scientists added that although their low-resolution observations did not allow them to constrain exactly where within the disk the formic acid emission originates, it is generally assumed that all organic oxygen-bearing molecules emit within the same region if they share grain-surface formation chemistry.
Given that methanol (CH3OH), which is perceived as a starting molecule from which more complex organics are synthesized, was previously detected towards TW Hydrae, the researchers also calculated the fraction of formic acid with respect to methanol. They found that this ratio is not higher than 1.0, which is approximately one order of magnitude greater than the ratio measured in comets.
Notably, the study conducted by Favre’s team marks the first time an organic molecule containing two oxygen atoms has been detected in a protoplanetary disk. This, according to the authors of the paper, demonstrates that organic chemistry is very active in disks, even though it is difficult to observe.
More information: First Detection of the Simplest Organic Acid in a Protoplanetary Disk, arXiv:1807.05768 [astro-ph.SR] arxiv.org/abs/1807.05768
Abstract
The formation of asteroids, comets and planets occurs in the interior of protoplanetary disks during the early phase of star formation. Consequently, the chemical composition of the disk might shape the properties of the emerging planetary system. In this context, it is crucial to understand whether and what organic molecules are synthesized in the disk. In this Letter, we report the first detection of formic acid (HCOOH) towards the TW Hydrae protoplanetary disk. The observations of the trans-HCOOH 6(1,6)−5(1,5) transition were carried out at 129 GHz with ALMA. We measured a disk-averaged gas-phase t-HCOOH column density of ∼(2–4)×10¹² cm⁻², namely as large as that of methanol. HCOOH is the first organic molecule containing two oxygen atoms detected in a protoplanetary disk, a proof that organic chemistry is very active even though difficult to observe in these objects. Specifically, this simplest acid stands as the basis for synthesis of more complex carboxylic acids used by life on Earth.
Serious quantum computers are finally here. What are we going to do with them? Will Knight explains.
Steampunk chandelier? No. The IBM Q is one of the world’s most advanced quantum computers.
GRAHAM CARLOW
Inside a small laboratory in lush countryside about 80 kilometres north of New York City, an elaborate tangle of tubes and electronics dangles from the ceiling. This mess of equipment is a computer. Not just any computer, but one on the verge of passing what may go down as one of the most important milestones in the history of the field.
Quantum computers promise to run calculations far beyond the reach of any conventional supercomputer. They might revolutionise the discovery of new materials by making it possible to simulate the behaviour of matter down to the atomic level. Or they could upend cryptography and security by cracking otherwise invincible codes. There is even hope they will supercharge artificial intelligence by crunching through data more efficiently.
Yet only now, after decades of gradual progress, are researchers finally close to building quantum computers powerful enough to do things that conventional computers cannot. It’s a landmark somewhat theatrically dubbed ‘quantum supremacy’. Google has been leading the charge toward this milestone, while Intel and Microsoft also have significant quantum efforts. And then there are well-funded startups including Rigetti Computing, IonQ and Quantum Circuits.
No other contender can match IBM’s pedigree in this area, though. Starting 50 years ago, the company produced advances in materials science that laid the foundations for the computer revolution. Which is why, last October, I found myself at IBM’s Thomas J. Watson Research Center to try to answer these questions: What, if anything, will a quantum computer be good for? And can a practical, reliable one even be built?
Why we think we need a quantum computer
The research centre, located in Yorktown Heights, looks a bit like a flying saucer as imagined in 1961. It was designed by the neo-futurist architect Eero Saarinen and built during IBM’s heyday as a maker of large mainframe business machines. IBM was the world’s largest computer company, and within a decade of the research centre’s construction it had become the world’s fifth-largest company of any kind, just behind Ford and General Electric.
While the hallways of the building look out onto the countryside, the design is such that none of the offices inside have any windows. It was in one of these cloistered rooms that I met Charles Bennett. Now in his 70s, he has large white sideburns, wears black socks with sandals and even sports a pocket protector with pens in it. Surrounded by old computer monitors, chemistry models and, curiously, a small disco ball, he recalled the birth of quantum computing as if it were yesterday.
When Bennett joined IBM in 1972, quantum physics was already half a century old, but computing still relied on classical physics and the mathematical theory of information that Claude Shannon had developed at Bell Labs in the late 1940s. It was Shannon who defined the quantity of information in terms of the number of ‘bits’ (a term he popularised but did not coin) required to store it. Those bits, the 0s and 1s of binary code, are the basis of all conventional computing.
A year after arriving at Yorktown Heights, Bennett helped lay the foundation for a quantum information theory that would challenge all that. It relies on exploiting the peculiar behaviour of objects at the atomic scale. At that size, a particle can exist ‘superposed’ in many states (e.g., many different positions) at once. Two particles can also exhibit ‘entanglement’, so that changing the state of one may instantaneously affect the other.
Bennett and others realised that some kinds of computations that are exponentially time consuming, or even impossible, could be efficiently performed with the help of quantum phenomena. A quantum computer would store information in quantum bits, or qubits. Qubits can exist in superpositions of 1 and 0, and entanglement and a trick called interference can be used to find the solution to a computation over an exponentially large number of states. It’s annoyingly hard to compare quantum and classical computers, but roughly speaking, a quantum computer with just a few hundred qubits would be able to perform more calculations simultaneously than there are atoms in the known universe.
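Superposition and entanglement can be illustrated with nothing more than linear algebra. The sketch below, which is standard textbook quantum mechanics rather than anything specific to IBM's hardware, simulates two qubits as a statevector, applies a Hadamard gate to create a superposition and a CNOT gate to entangle the pair, and shows that only the correlated outcomes 00 and 11 remain possible.

```python
import numpy as np

# Single-qubit basis state and gates written as plain linear algebra.
zero = np.array([1, 0], dtype=complex)
H = np.array([[1, 1], [1, -1]], dtype=complex) / np.sqrt(2)  # Hadamard gate
I2 = np.eye(2, dtype=complex)
CNOT = np.array([[1, 0, 0, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]], dtype=complex)

# Start in |00>, put the first qubit into an equal superposition,
# then entangle it with the second.
state = np.kron(zero, zero)       # |00>
state = np.kron(H, I2) @ state    # (|00> + |10>) / sqrt(2)
state = CNOT @ state              # Bell state (|00> + |11>) / sqrt(2)

probs = np.abs(state) ** 2
for label, p in zip(["00", "01", "10", "11"], probs):
    print(f"P({label}) = {p:.2f}")
# Only 00 and 11 have non-zero probability: measuring one qubit
# fixes the outcome of the other.
```

Note that the two-qubit state already needs four amplitudes; each additional qubit doubles the vector, which is exactly why classical simulation of many qubits becomes intractable.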
Charles Bennett was one of the pioneers who realised quantum computers could solve some problems exponentially faster than conventional computers.
BARTEK SADOWSKI
In the summer of 1981, IBM and MIT organised a landmark event called the First Conference on the Physics of Computation. It took place at Endicott House, a French-style mansion not far from the MIT campus.
In a photo that Bennett took during the conference, several of the most influential figures from the history of computing and quantum physics can be seen on the lawn, including Konrad Zuse, who developed the first programmable computer, and Richard Feynman, an important contributor to quantum theory. Feynman gave the conference’s keynote speech, in which he raised the idea of computing using quantum effects. “The biggest boost quantum information theory got was from Feynman,” Bennett told me. “He said, ‘Nature is quantum, goddamn it! So if we want to simulate it, we need a quantum computer.’”
IBM’s quantum computer – one of the most promising in existence – is located just down the hall from Bennett’s office. The machine is designed to create and manipulate the essential element in a quantum computer: the qubits that store information.
The gap between the dream and the reality
The IBM machine exploits quantum phenomena that occur in superconducting materials. For instance, sometimes current will flow clockwise and counterclockwise at the same time. IBM’s computer uses superconducting circuits in which two distinct electromagnetic energy states make up a qubit.
The superconducting approach has key advantages. The hardware can be made using well-established manufacturing methods, and a conventional computer can be used to control the system. The qubits in a superconducting circuit are also easier to manipulate and less delicate than individual photons or ions.
Inside IBM’s quantum lab, engineers are working on a version of the computer with 50 qubits. You can run a simulation of a simple quantum computer on a normal computer, but at around 50 qubits it becomes nearly impossible.
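A back-of-the-envelope calculation shows why the wall sits near 50 qubits: a classical simulator must hold one complex amplitude per basis state, so the memory needed doubles with every added qubit.

```python
# A classical simulator stores 2**n complex amplitudes for n qubits
# (complex128 = two 8-byte floats = 16 bytes each).
BYTES_PER_AMPLITUDE = 16

for n in (30, 40, 50):
    amplitudes = 2 ** n
    gib = amplitudes * BYTES_PER_AMPLITUDE / 2 ** 30
    print(f"{n} qubits: {amplitudes:,} amplitudes, {gib:,.0f} GiB")
# 50 qubits already requires about 16 million GiB (16 PiB) of memory,
# far beyond any conventional machine.
```

Thirty qubits fit comfortably in a workstation's RAM; fifty qubits would need roughly 16 pebibytes, which is what makes the 50-qubit regime a plausible threshold for quantum supremacy.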
That means IBM is theoretically approaching the point where a quantum computer can solve problems a classical computer cannot: in other words, quantum supremacy.
But as IBM’s researchers will tell you, quantum supremacy is an elusive concept. You would need all 50 qubits to work perfectly, when in reality quantum computers are beset by errors that need to be corrected. It is also devilishly difficult to maintain qubits for any length of time; they tend to ‘decohere’, or lose their delicate quantum nature, much as a smoke ring breaks up at the slightest air current. And the more qubits, the harder both challenges become.
The cutting-edge science of quantum computing requires nanoscale precision mixed with the tinkering spirit of home electronics. Researcher Jerry Chow is shown here fitting a circuit board in the IBM quantum research lab.
JON SIMON
“If you had 50 or 100 qubits and they really worked well enough, and were fully error-corrected – you could do unfathomable calculations that can’t be replicated on any classical machine, now or ever,” says Robert Schoelkopf, a Yale professor and founder of a company called Quantum Circuits. “The flip side to quantum computing is that there are exponential ways for it to go wrong.”
Another reason for caution is that it isn’t obvious how useful even a perfectly functioning quantum computer would be. It doesn’t simply speed up any task you throw at it; in fact, for many calculations, it would actually be slower than classical machines. Only a handful of algorithms have so far been devised where a quantum computer would clearly have an edge. And even for those, that edge might be short-lived. The most famous quantum algorithm, developed by Peter Shor at MIT, is for finding the prime factors of an integer. Many common cryptographic schemes rely on the fact that this is hard for a conventional computer to do. But cryptography could adapt, creating new kinds of codes that don’t rely on factorisation.
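To see why factoring is believed hard for conventional computers, consider the naive classical approach: trying candidate divisors one by one. The work grows exponentially in the number of digits, which is exactly the barrier Shor's quantum algorithm sidesteps (the quantum speedup itself is not shown here). A minimal sketch:

```python
def trial_division(n):
    """Naive classical factoring: try every candidate divisor up to sqrt(n).
    For an n with d digits this takes on the order of 10**(d/2) steps, so
    the cost explodes as numbers get longer -- the basis of factorisation-
    based cryptography."""
    f = 2
    while f * f <= n:
        if n % f == 0:
            return f, n // f
        f += 1
    return n, 1  # n is prime

print(trial_division(15))    # (3, 5) -- 15 was famously factored by an
                             # early hardware demonstration of Shor's algorithm
print(trial_division(3233))  # (53, 61)
```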
This is why, even as they near the 50-qubit milestone, IBM’s own researchers are keen to dispel the hype around it. At a table in the hallway that looks out onto the lush lawn outside, I encountered Jay Gambetta, a tall, easygoing Australian who researches quantum algorithms and potential applications for IBM’s hardware. “We’re at this unique stage,” he said, choosing his words with care. “We have this device that is more complicated than you can simulate on a classical computer, but it’s not yet controllable to the precision that you could do the algorithms you know how to do.”
What gives the IBMers hope is that even an imperfect quantum computer might still be a useful one.
Gambetta and other researchers have zeroed in on an application that Feynman envisioned back in 1981. Chemical reactions and the properties of materials are determined by the interactions between atoms and molecules. Those interactions are governed by quantum phenomena. A quantum computer can – at least in theory – model those in a way a conventional one cannot.
Last year, Gambetta and colleagues at IBM used a seven-qubit machine to simulate the precise structure of beryllium hydride. At just three atoms, it is the most complex molecule ever modelled with a quantum system. Ultimately, researchers might use quantum computers to design more efficient solar cells, more effective drugs or catalysts that turn sunlight into clean fuels.
Those goals are a long way off. But, Gambetta says, it may be possible to get valuable results from an error-prone quantum machine paired with a classical computer.
Physicist’s dream to engineer’s nightmare
“The thing driving the hype is the realisation that quantum computing is actually real,” says Isaac Chuang, a lean, soft-spoken MIT professor. “It is no longer a physicist’s dream – it is an engineer’s nightmare.”
Chuang led the development of some of the earliest quantum computers, working at IBM in Almaden, California, during the late 1990s and early 2000s. Though he is no longer working on them, he thinks we are at the beginning of something very big – that quantum computing will eventually even play a role in artificial intelligence.
But he also suspects that the revolution will not really begin until a new generation of students and hackers get to play with practical machines. Quantum computers require not just different programming languages but a fundamentally different way of thinking about what programming is. As Gambetta puts it: “We don’t really know what the equivalent of ‘Hello, world’ is on a quantum computer.”
We are beginning to find out. In 2016 IBM connected a small quantum computer to the cloud. Using a programming tool kit called QISKit, you can run simple programs on it; thousands of people, from academic researchers to schoolkids, have built QISKit programs that run basic quantum algorithms. Now Google and other companies are also putting their nascent quantum computers online. You can’t do much with them, but at least they give people outside the leading labs a taste of what may be coming.
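Nobody yet knows what the quantum "Hello, world" will be, but a common first exercise in tool kits like QISKit is preparing a Bell state: a Hadamard gate on one qubit followed by a CNOT, entangling a pair of qubits. The sketch below simulates that two-gate circuit classically with plain matrix algebra (this is an illustration, not QISKit's actual API):

```python
import numpy as np

H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)   # Hadamard gate
I2 = np.eye(2)
CNOT = np.array([[1, 0, 0, 0],                 # control = qubit 0
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]])

state = np.array([1, 0, 0, 0], dtype=complex)  # both qubits start in |00>
state = np.kron(H, I2) @ state                 # Hadamard on qubit 0
state = CNOT @ state                           # entangle the pair

# Measurement probabilities: 50% |00>, 50% |11>, never |01> or |10> --
# the hallmark of entanglement.
probs = np.abs(state) ** 2
print(dict(zip(["00", "01", "10", "11"], probs.round(3))))
```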
The startup community is also getting excited. A short while after seeing IBM’s quantum computer, I went to the University of Toronto’s business school to sit in on a pitch competition for quantum startups. Teams of entrepreneurs nervously got up and presented their ideas to a group of professors and investors. One company hoped to use quantum computers to model the financial markets. Another planned to have them design new proteins. Yet another wanted to build more advanced AI systems. What went unacknowledged in the room was that each team was proposing a business built on a technology so revolutionary that it barely exists. Few seemed daunted by that fact.
This enthusiasm could sour if the first quantum computers are slow to find a practical use. The best guess from those who truly know the difficulties – people like Bennett and Chuang – is that the first useful machines are still several years away. And that’s assuming the problem of managing and manipulating a large collection of qubits won’t ultimately prove intractable.
Still, the experts hold out hope. When I asked him what the world might be like when my two-year-old son grows up, Chuang, who learned to use computers by playing with microchips, responded with a grin. “Maybe your kid will have a kit for building a quantum computer,” he said.
In this new report, we look at the actual planned design: Tesla is bringing the spartan, minimalist Model 3 interior to the Model S and Model X, while keeping some more premium features in the more expensive flagship vehicles.
When Tesla introduced the Model S with virtually no buttons and a fairly minimalist interior focused on the center touchscreen, it was a polarizing move.
But Tesla stuck with the design and even doubled down on it with the Model 3’s even more minimalist interior.
Tesla is now looking to harmonize the interiors across its lineup, and that means going even more minimal with the Model S and Model X, as it did with the Model 3.
Electrek obtained some concept images of the new interior design that Tesla is planning to bring to production next year.
To be clear, those images are of the currently planned design refresh, which is still a year from production. The final design could change, but we think it is representative of Tesla’s plan for now:
As you can see, the center touchscreen is now mounted horizontally rather than vertically as in the current Model S and Model X.
The screen is also bezel-less, or has a very small bezel – a big trend in mobile phones that is starting to reach larger screens as well.
Tesla is also bringing over a very Model 3-like dash design, with the same single-vent air-conditioning system running along the entire dash.
From the images above, it might look as though Tesla got rid of the Model S/X’s instrument cluster, but it’s still there – just much smaller and embedded more deeply into the dash.
Here’s a look at it with the steering wheel and center touchscreen removed:
According to documents reviewed by Electrek, Tesla is aiming for the new design to be more geared toward autonomous driving.
Beyond the dash, steering wheel, and center console, Tesla is also planning several other interior upgrades to be introduced in the cabin of the Model S and Model X with this design refresh.
In the documents, Tesla says that it has taken feedback from owners and plans to introduce a number of features to either catch up with the competition in the luxury segment or differentiate the Model S and Model X interiors from the Model 3’s.
They include things like superior materials, softer seat cushions, improved rear seats with a second-row console, a wireless phone charger, improved front storage, and more.
Tesla declined to comment on this report.
Electrek’s Take
Like I said, I know Tesla’s interiors are polarizing, but I am personally a big fan of the minimalist look, so I’m more than OK with Tesla doubling down on the design trend for this refresh.
They have taken arguably the best part of the Model 3’s interior, the long dash with the integrated single vent air conditioning system, and brought it to the Model S and Model X interior.
I think that’s going to be a move that will simplify the interior assembly and likely help reduce costs, which Tesla has listed as one of the goals of this design refresh.
Honestly, I am really impressed by Tesla’s approach here because they seem to be taking the best of both worlds on several key features. For example, the Model S and Model X have an instrument cluster, and I think most owners, myself included, would want to keep it – but the Model 3’s lack of one resulted in a very cool, clean front view.
Tesla appears to have found an interesting compromise with a seemingly smaller instrument cluster screen lower and more embedded in the dash.
As for the horizontal screen, it should be a welcome change for Tesla’s software and UI team, which currently has to support its interface on both vertical and horizontal screens.
I don’t know that drivers will care much about the change as long as they still have an instrument cluster.
Tesla did mention in the documents that the new design is “geared toward autonomy” and a large horizontal center screen could be part of that. I could see passengers watching videos on the screen if the Model S and Model X end up being capable of fully autonomous driving at that point.
The automaker also says it wants to use improved materials for the Model S and Model X refresh and offer more color options than in the Model 3.
I think they will still offer their bundled interior options, and it won’t be difficult to beat the Model 3’s two interior choices.
What do you guys think of Tesla’s upcoming new interior for Model S and Model X?
Brain-training app can improve levels of a neurochemical critical to cognition
A small study has found a brain exercise can increase production of a chemical critical to memory and learning
Over the last few years, a surge of brain-training apps has hit the market, purporting to do everything from improving memory to staving off the onset of age-related dementia – but outside of anecdotal reports, actual scientific evidence for many of these claims has been hard to find. A new study from researchers at McGill University, conducted in association with the commercial company Posit Science, presents fascinating evidence showing how a particular proprietary brain exercise can directly increase production of a chemical critical to memory and learning.
Acetylcholine is a neurotransmitter known to be essential for the brain to effectively process memories and learning. Levels of acetylcholine in the brain have been seen to decrease with age, and concentrations are known to be particularly low in the brains of patients suffering from Alzheimer’s disease.
Currently, an early-stage treatment for patients diagnosed with Alzheimer’s is to deliver what is known as a cholinesterase inhibitor. These drugs block a certain enzyme from breaking down acetylcholine, subsequently increasing levels of the important compound in the brain, and hopefully slowing the onset of cognitive deficits associated with the condition.
This new study set out to find out what effects brain training exercises had on levels of acetylcholine in the brain. The initial, and admittedly very small, study looked at five healthy older adults. The subjects completed around 12 hours of brain training across six weeks using a proprietary program called BrainHQ.
Brain imaging using PET scans was conducted to track acetylcholine levels before and after the experiment. The results were fascinating, with an upregulation of acetylcholine identified in four specific areas of the brain: the right inferior frontal gyrus, left caudate nucleus, bilateral medial prefrontal cortex, and left lingual gyrus/cuneus. Across these areas, improvements in acetylcholine levels of between 16 and 24 percent were identified, alongside behaviorally tracked improvements in sustained attention.
“This is the first confirmation in humans that this more organic strategy can work, leading to higher levels of acetylcholine even in a resting state,” says Michael Merzenich, a Kavli Laureate, from Posit Science. “Now, we need to perform larger studies in at-risk, pre-dementia, and dementia populations.”
This is undeniably a very small sample set, and as the Posit Science team suggests, a great deal more work needs to be done before a clear conclusion can be generated. However, this does offer up initial evidence of a compelling neurological mechanism that is activated by a brain training exercise. Whether or not this simple mechanism can actively stall the onset of dementia, or even reduce its active symptoms, is yet to be proven.
More general recent studies examining the efficacy of computer-based brain training in relation to age-related cognitive decline have reported a broad variety of results. In specific contexts, some kinds of brain training have been shown to improve cognition in older adults, but many commercial general products have been found to confer no beneficial effect. At what stage in a person’s cognitive decline these types of brain exercises are most useful is also yet to be clearly understood.
While this particular study was ostensibly conducted with the agenda of promoting a specific commercial brain training program, it still offers an interesting insight into how a neurochemical mechanism can be potentially modulated through a simple cognitive exercise.
Smart thermostat company Ecobee has become the focus of a new profile shared online today by CNET, and alongside that article the company has revealed a new money-saving feature for select Ecobee users called “Peak Relief.” The feature was created to help users save on energy bills by automatically cutting back on heating and cooling when energy rates are higher, then using more when rates are lower.
For those in the test, Ecobee says Peak Relief can help customers save an extra 10 percent on heating and cooling bills. Combined with the up to 23 percent in savings Ecobee already claims from normal use, the company now aims to save customers as much as 33 percent on monthly heating and cooling bills.
Ecobee CEO Stuart Lombard mentioned that the feature was developed over a year and a half and uses artificial intelligence along with indoor and outdoor temperature readings to customize settings for each home. This is then combined with time-of-use rates from a utility provider, which charge different prices depending on the time of day, as opposed to standard rates that simply rise with total consumption.
CNET explains that Peak Relief requires time-of-use utility rates, and while these rates have the potential to cut down costs, it can be difficult to keep track of the higher-demand periods, which is where Ecobee’s new feature comes in:
Time-of-use has the potential to save customers money and help utilities avoid spikes in demand. But, many customers have a hard time keeping track of varying time-of-use rates, resulting in less energy savings for utilities and potentially higher costs for customers. Peak Relief may be able to alleviate that problem.
So, how does Peak Relief work? Let’s say you set your thermostat temperature to cool at 74 degrees. With Peak Relief, the thermostat will automatically cool your home to 71 or 72 degrees when rates are lower, then allow the temperature to slowly go back up to 74 when rates increase.
The feature offers two preferences, so your thermostat will either focus more on comfort and stay closer to your temperature settings or focus more on savings and veer a little further away from those settings when rates change.
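The pre-cooling idea described above can be sketched in a few lines. This is a hypothetical illustration of the scheduling logic, not Ecobee's actual algorithm, and the degree offsets are illustrative guesses taken from the 74/71–72 example in the article:

```python
# Hypothetical sketch of a Peak Relief-style pre-cooling rule: cool a few
# degrees below the set point while electricity is cheap, then coast at the
# set point during the expensive peak window. Not Ecobee's real algorithm.

def target_temperature(setpoint_f, rate_is_peak, mode="savings"):
    """Return the temperature (deg F) to cool to right now.

    mode="comfort" stays closer to the set point; mode="savings" pre-cools
    more aggressively while power is cheap.
    """
    precool = 2 if mode == "comfort" else 3
    if rate_is_peak:
        return setpoint_f          # peak rates: hold the set point, use less energy
    return setpoint_f - precool    # off-peak: bank some extra cooling cheaply

print(target_temperature(74, rate_is_peak=False))                  # 71
print(target_temperature(74, rate_is_peak=True))                   # 74
print(target_temperature(74, rate_is_peak=False, mode="comfort"))  # 72
```

The two modes mirror the comfort-versus-savings preference the feature exposes: the same rule, just a smaller or larger allowed drift from the set point.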
Peak Relief is rolling out today, but it appears that the test is fairly small and only for “select customers” in California, Arizona, and Ontario, Canada, and again only if those customers are using time-of-use utility rates. However, the company has already said that it plans to roll out Peak Relief to a wide audience “early next year.”
Ecobee’s line of thermostats is part of the more than two dozen heating and cooling controllers compatible with Apple HomeKit, alongside thermostats from Elgato, Honeywell, iDevices, and Netatmo. On Apple.com, customers can buy the Ecobee3 Lite Smart Thermostat, but the company’s latest iteration is the Ecobee4, which includes built-in Alexa support.