Tesla ‘Cyberpunk’ Pickup Truck predictions: Range, towing capacity, and more

Elon Musk’s “Cyberpunk” Tesla Pickup Truck is set to be unveiled this coming November, and the electric vehicle community could not be more excited. Musk, after all, has hyped the vehicle, hinting that it will start at a reasonable price of $49,000 and be the company’s “best product ever.” Tesla has been remarkably good at keeping the truck’s specs secret, which has only encouraged the EV community to speculate about the features of the highly anticipated vehicle.

Tesla owner-enthusiast Sean Mitchell recently shared his expectations for the upcoming vehicle, and while they are but speculations, they are rooted in information that the electric car maker and CEO Elon Musk have shared in the past. Other speculations are based on Tesla’s current technologies, as well as the company’s recent updates to its operations.

The Tesla Pickup Truck is meant to be a disruptor, just like the Model S and Model 3 before it. With this in mind, there is a good chance that Tesla will put its best technologies in the vehicle. Mitchell believes that the truck will have battery options between 150 and 200 kWh, which should give it a range of about 400 miles or more. This is something that Musk himself has mentioned in the past, with the CEO noting that the vehicle will have 400-500 miles of range per charge.

These figures might seem optimistic, but if one were to consider the innovations offered by Maxwell Technologies to Tesla, these specs would be more than plausible. Of course, being a new vehicle, the “Cyberpunk” truck will most definitely be capable of charging at 250 kW using the Supercharger V3 Network. This should allow the upcoming pickup to take advantage of Tesla’s fastest charging solution out of the box.
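As a sanity check on those numbers, here is a quick back-of-envelope calculation. The ~2.2 mi/kWh efficiency figure is an assumption for a large electric truck, not a published Tesla spec:

```python
# Back-of-envelope check on the range and charging figures above.
# The ~2.2 mi/kWh efficiency is an assumed number for a large
# electric truck, not a published Tesla spec.

def estimated_range_miles(battery_kwh, miles_per_kwh=2.2):
    """Range is roughly pack capacity times efficiency."""
    return battery_kwh * miles_per_kwh

def full_charge_hours(battery_kwh, charger_kw=250):
    """Idealized 0-100% charge time at constant power; real sessions
    taper at high state of charge, so treat this as a lower bound."""
    return battery_kwh / charger_kw

for kwh in (150, 200):
    print(f"{kwh} kWh: ~{estimated_range_miles(kwh):.0f} mi, "
          f"full charge in >= {full_charge_hours(kwh) * 60:.0f} min at 250 kW")
```

At the assumed efficiency, the larger pack lands comfortably above the 400-mile claim, and even a full 200 kWh charge at 250 kW stays under an hour in this idealized model.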

Since the Tesla Pickup Truck is meant to disrupt, the vehicle will most likely have an industry-leading towing capacity as well. Mitchell estimates that the vehicle will have a 20,000-30,000-lb towing capacity, on account of Elon Musk’s tendency to equip his electric cars with specs that far exceed those of ICE competitors. Seeing as Musk has previously joked that the vehicle could tow 300,000 lbs, a 30,000-lb towing capacity definitely seems feasible.

True to the Tesla brand, the Cybertruck will likely be very powerful as well. The Tesla owner-enthusiast noted that the Silicon Valley-based company will probably leapfrog competitors like Rivian when it comes to acceleration and horsepower; thus, it is possible for the truck to have a sub-3-second 0-60 mph time and about 800-1,000 hp. These specs exceed those of the well-received Rivian R1T all-electric pickup, which will likely beat the Tesla truck to market.

Mitchell also made an excellent point about the vehicle’s design. During the Tesla Semi’s unveiling, Musk mentioned that the electric car maker is developing a type of Armor Glass that is far more durable and far less prone to breaking. This should enable Tesla to use a generous amount of glass in the truck’s design, allowing the company to equip the vehicle with a durable panoramic windshield. This is in line with Musk’s statements about the vehicle being a “Blade Runner” cyberpunk truck that looks a bit like an armored personnel carrier from the future.


Visible light and nanoparticle catalysts produce desirable bioactive molecules

Molecules adsorb on the surface of semiconductor nanoparticles in very specific geometries. The nanoparticles use energy from incident light to activate the molecules and fuse them together to form larger molecules in configurations useful for biological applications. Credit: Yishu Jiang, Northwestern University

Northwestern University chemists have used visible light and extremely tiny nanoparticles to quickly and simply make molecules that are of the same class as many lead compounds for drug development.

Driven by light, the nanoparticle catalysts produce very specific chemical products: molecules that not only have the right chemical formulas but also have specific arrangements of their atoms in space. And the catalyst can be reused for additional chemical reactions.

The semiconductor nanoparticles are known as quantum dots, so small that they are only a few nanometers across. But that small size is their power, providing the material with attractive optical and electronic properties not possible at greater length scales.

“Quantum dots behave more like molecules than metal nanoparticles,” said Emily A. Weiss, who led the research. “The electrons are squeezed into such a small space that their reactivity follows the rules of quantum mechanics. We can take advantage of this, along with the templating power of the nanoparticle surface.”

This work, published recently by the journal Nature Chemistry, is the first use of a nanoparticle’s surface as a template for a light-driven reaction called a cycloaddition, a simple mechanism for making very complicated, potentially bioactive compounds.

“We use our nanoparticle catalysts to access this desirable class of molecules, called tetrasubstituted cyclobutanes, through simple, one-step reactions that not only produce the molecules in high yield, but with the arrangement of atoms most relevant for drug development,” Weiss said. “These molecules are difficult to make any other way.”

Weiss is the Mark and Nancy Ratner Professor of Chemistry in the Weinberg College of Arts and Sciences. She specializes in controlling light-driven electronic processes in quantum dots and using them to perform light-driven chemistry with unprecedented selectivity.

The nanoparticle catalysts use energy from incident light to activate molecules on their surfaces and fuse them together to form larger molecules in configurations useful for biological applications. The larger molecule then detaches easily from the nanoparticle, freeing the nanoparticle to be used again in another reaction cycle.

In their study, Weiss and her team used three-nanometer nanoparticles made of the semiconductor cadmium selenide and a variety of starter molecules called alkenes in solution. Alkenes have core carbon-carbon double bonds which are needed to form the cyclobutanes.

The study is titled “Regio- and diastereoselective intermolecular [2+2] cycloadditions photocatalysed by quantum dots.”


More information: Yishu Jiang et al, Regio- and diastereoselective intermolecular [2+2] cycloadditions photocatalysed by quantum dots, Nature Chemistry (2019). DOI: 10.1038/s41557-019-0344-4



It has now been a few months since the launch of the Raspberry Pi 4, and it would only be fair to describe the launch as “rocky”. While significantly faster than the Pi 3 on paper, its propensity for overheating would end up throttling down the CPU clock even with the plethora of aftermarket heatsinks and fans. The Raspberry Pi folks have been working on solutions to these teething troubles, and they have now released a bunch of updates in the form of a new bootloader, that lets the Pi 4 live up to its promise. (UPDATE: Here’s the download page and release notes)

The real meat of the update comes in an implementation of a low power mode for the USB hub. It turns out that the main source of heat on the SoC wasn’t the CPU, but the USB. Fixing the USB power consumption means that you can run the processor cool at stock speeds, and it can even be overclocked now.

There is also a new tool for updating the Pi bootloader, rpi-eeprom, which allows automatic updates for Pi 4 owners. The big change is that booting the Pi 4 over the network or from an attached USB device is now a possibility, which is a must if you’re installing the Pi permanently. There are also fixes for problems with certain HATs, caused by the Pi 4’s 3.3 V line being cycled during a reboot.

With a device as complex as a Raspberry Pi it comes as no surprise that it might ship with a few teething troubles. We’ve already covered some surrounding the USB-C power, for example. And the overheating. Where the Pi people consistently deliver though is in terms of support, both official and from the community, and we’re very pleased to see them come through in this case too.

Today, companies detect insurance fraud using a combination of claim analysis, computer programs and private investigators. The FBI estimates the total cost of non-healthcare-related insurance fraud to be around $40 billion per year. But an emerging technology called emotion artificial intelligence (emotion AI) might make it possible to detect insurance fraud based on audio analysis of the caller.

In addition to catching fraud, this technology can improve customer experience by tracking happiness, more accurately directing callers, enabling better diagnostics for dementia, detecting distracted drivers, and even adapting education to a student’s current emotional state.

Though still relatively new, emotion AI is one of 21 new technologies added to the Gartner Hype Cycle for Emerging Technologies, 2019. The 2019 Hype Cycle highlights the emerging technologies with significant impact on business, society and people over the next five to 10 years. Technology innovation is the key to competitive differentiation and is transforming many industries.

This year’s emerging technologies fall into five major trends: Sensing and mobility, augmented human, postclassical compute and comms, digital ecosystems, and advanced AI and analytics.

Sensing and mobility

This trend features technologies that are increasingly mobile and able to manipulate the objects around them, including 3D sensing cameras and more advanced autonomous driving. As sensors and AI evolve, autonomous robots will gain better awareness of the world around them. For example, emerging technologies such as light cargo delivery drones (both flying and wheeled) will be better able to navigate situations and manipulate objects. This technology is currently hampered by regulations, but its functionality continues to advance.

As sensing technology continues to evolve, it will aid more advanced technologies like the Internet of Things (IoT). These sensors also collect abundant data, which can lead to insights that are applicable across a range of scenarios and industries.

Other technologies in this trend include: AR cloud, autonomous driving levels 4 and 5, and flying autonomous vehicles.

Augmented human

Augmented human technologies improve both the cognitive and physical capabilities of the human body, and include technologies such as biochips and emotion AI. Some will provide “superhuman capabilities,” for example a prosthetic arm that exceeds the strength of a human arm, or robotic skin that is as sensitive to touch as human skin. These technologies will also eventually provide a more seamless experience that improves the health, intelligence and strength of humans.

Other technologies in this trend include: Personification, augmented intelligence, immersive workspace and biotech (cultured or artificial tissue).

Postclassical compute and comms

Classical or binary computing, which uses binary bits, evolved by making changes to existing, traditional architectures. These changes resulted in faster CPUs, denser memory and increasing throughput.

Postclassical compute and comms use entirely new architectures, as well as incremental advancements. This includes 5G, the next-generation cellular standard, which has a new architecture that includes core slicing and wireless edge. It also includes low-earth-orbit (LEO) satellites, which operate at much lower altitudes, around 1,200 miles or less, than traditional geostationary systems at around 22,000 miles. The result is global broadband or narrowband voice and data network services, including in areas with little or no existing terrestrial or satcom coverage.
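The altitude difference matters chiefly for latency. A rough sketch of the one-way propagation delay at the two altitudes cited above (assuming a straight-line path to a satellite directly overhead; real links are slanted and add switching and processing time):

```python
# One-way propagation delay at the satellite altitudes cited above,
# assuming a straight-line path to a satellite directly overhead.
# Real links are slanted and add switching/processing time.

C_MILES_PER_SEC = 186_282  # speed of light in vacuum, miles per second

def one_way_delay_ms(altitude_miles):
    return altitude_miles / C_MILES_PER_SEC * 1000

print(f"LEO (1,200 mi):  ~{one_way_delay_ms(1_200):.1f} ms")   # ~6.4 ms
print(f"GEO (22,000 mi): ~{one_way_delay_ms(22_000):.1f} ms")  # ~118.1 ms
```

That roughly twenty-fold difference in raw propagation delay is why LEO constellations, rather than geostationary ones, are the systems pitched for interactive broadband.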

Technologies in this trend include: Next-generation memory and nanoscale 3D printing.

Digital ecosystems

Digital ecosystems are web-like connections between actors (enterprises, people and things) sharing a digital platform. These ecosystems developed as digitalization morphed traditional value chains, enabling more seamless, dynamic connections to a variety of agents and entities across geographies and industries. In the future these will include decentralized autonomous organizations (DAOs), which operate independently of humans and rely on smart contracts. These digital ecosystems are constantly evolving and connecting, resulting in new products and opportunities.

Other technologies in this trend include: DigitalOps, knowledge graphs, synthetic data and decentralized web.

Advanced AI and analytics

Advanced analytics is the autonomous or semi-autonomous examination of data or content using sophisticated tools beyond those of traditional business intelligence. It is the result of new classes of algorithms and data science that are leading to new capabilities, for example transfer learning, which uses previously trained machine learning models as advanced starting points for new technology. Advanced analytics enables deeper insights, predictions and recommendations.

Other technologies in this trend include: Adaptive machine learning, edge AI, edge analytics, explainable AI, AI PaaS, generative adversarial networks and graph analytics.


About the Hype Cycle

The Hype Cycle for Emerging Technologies distills insights from the more than 2,000 technologies that Gartner profiles into a succinct set of must-know emerging technologies and trends. With a focus on emerging tech, this Hype Cycle is heavily weighted toward trends appearing in the first half of the cycle. This year, Gartner refocused the Hype Cycle on technologies not highlighted in past iterations. The technologies dropped this year are still important, but some have become integral to business operations and are no longer “emerging,” while others have already been featured for multiple years.

The author is a Research Vice President for Enterprise Architecture and Technology Innovation with more than 20 years of experience. Mr. Burke’s research focuses primarily on enterprise architecture, emerging technologies and innovation management. He is the chairperson for the Gartner 2019 IT Symposium/Xpo in South Africa and the author of the 2014 book “Gamify: How Gamification Motivates People to Do Extraordinary Things.”

Alexa can use smart lights to wake you or lull you to sleep

You can also set lighting routines to brighten or dim your lights.

It’s getting a bit easier to fall asleep or wake up in sync with your lights — if you have an Alexa-powered device. Amazon has introduced a trio of Alexa options that can gradually adjust smart lights to suit your daily habits. Wake-up lighting brightens the bulbs grouped with your Alexa device when you tell the voice assistant to set an alarm “with lights.” You can add lights to sleep timers if you want them to gradually dim as you call it a night. And if you want Alexa to gradually change lighting as part of a larger action, you can add brightening or dimming bulbs to routines — say, a morning routine that plays the news and ramps up the lights as you struggle to get out of bed.

The features should start reaching American users this week. This kind of control isn’t unique in the smart light world — the Hue app has had features like this for a while. It’s relatively uncommon for voice assistants, though, and it’s much simpler (if not as advanced) to speak a command when you’re going to sleep.

Mind-reading technology: The security, privacy and inequality threats we will face

Brain computer interface technology is developing fast. But just because we can read data from others’ minds, should we?


Since the dawn of humanity, the only way for us to share our thoughts has been to take some kind of physical action: to speak, to move, to type out an ill-considered tweet.

Brain computer interfaces (BCIs), while still in their infancy, could offer a new way to share our thoughts and feelings directly from our minds through (and maybe with) computers. But before we go any further with this new generation of mind-reading technology, do we understand the impact it will have? And should we be worried?

Depending on who you listen to, the ethical challenges of BCIs are unprecedented, or they’re just a repeat of the risks brought about by each previous generation of technology. Due to the so-far limited use of BCIs in the real world, there’s little practical experience to show which attitude is more likely to be the right one.

The future of privacy

It’s clear that some ethical challenges that affect earlier technologies will carry across to BCIs, with privacy being the most obvious.

We already know it’s annoying to have a user name and password hacked, and worrying when it’s your bank account details that are stolen. But BCIs could mean that eventually it’s your emotional responses that would be stolen and shared by hackers, with all the embarrassments and horrors that go with that.

BCIs offer access to the most personal of personal data: inevitably they’ll be targeted by hackers and would-be blackmailers; equally clearly, security systems will attempt to keep data from BCIs as locked down as possible. And we already know the defenders don’t win every time.

One reason for some optimism: there will also be our own internal privacy processes to supplement security, says Rajesh Rao, professor at the University of Washington’s Paul G. Allen School of Computer Science & Engineering.

“There’s going to be multiple protective layers of security, as well as your brain’s own mechanisms for security — we have mechanisms for not revealing everything we’re feeling through language right now. Once you have these types of technologies, the brain would have its own defensive mechanisms which could come into play,” he told ZDNet.

The military mind

Another big issue: like generations of new technology before it, from the internet to GPS, some of the funding behind BCI projects has come from the military.

As well as helping soldiers paralysed by injuries in battle regain the abilities they’ve lost, it seems likely that the military’s interest in BCIs will lead to the development of systems designed to augment humans’ capabilities. For a soldier, that might mean the chance to damp down fear in the face of an enemy, to patch in a remote team to help out in the field, or even to connect to an AI that advises on battle tactics. In battle, having better tech than the enemy is seen as an advantage and a military priority.

There are also concerns that military involvement in BCIs could lead to brain computer interfaces being used as interrogation devices, potentially being used to intrude on the thoughts of enemy combatants captured in battle.

The one percent get smarter

If the use of BCIs in the military is controversial, the use of the technology in the civilian world is similarly problematic.

Is it fair for a BCI-equipped person with access to external computing power and memory to compete for a new job against a standard-issue person? And given the steep cost of BCIs, will they just create a new way for the privileged few to beat down the 99 percent?

These technologies are likely to throw up a whole new set of social justice issues around who gets access to devices that can allow them to learn faster or have better memories.

“You have a new set of problems in terms of haves and have nots,” says Rao.

This is far from the only issue this technology could create. While most current-generation BCIs can read thoughts but not send information back into the brain, future-generation BCIs may well be able to both send and receive data.

The effect of having computer systems wirelessly or directly transmit data to the brain isn’t known, but related technologies such as deep brain stimulation, where electrical impulses are sent into brain tissue to regulate unwanted movement in medical conditions such as dystonias and Parkinson’s disease, have been linked to changes in personality (though the strength of the link is still a matter of debate).

And even if BCIs did cause personality changes, would that really be a good enough reason to withhold them from someone who needs one — a person with paraplegia who requires an assistive device, for example?

As one research paper puts it: “the debate is not so much over whether BCI will cause identity changes, but over whether those changes in personal identity are a problem that should impact technological development or access to BCI”.

Whether regular long-term use of BCIs will ultimately affect users’ moods or personalities isn’t known, but it’s hard to imagine that technology that plugs the brain into an AI or an internet-scale repository of data won’t ultimately have an effect on personhood.

Historically, the bounds of a person were marked by their skin; where does ‘me’ start with a brain that’s linked up to an artificial intelligence programme, where do ‘I’ end when my thoughts are linked to vast swathes of processing power?

It’s not just a philosophical question, it’s a legal one too. In a world where our brains may be directly connected to an AI, what happens if I break the law, or just make a bad decision that leaves me in hospital or in debt?

The corporate brain drain

And another legal front that will open up around BCI tech could pit employees against employer.

There are already legal protections built up around how physical and intellectual property are handled when an employee works for and leaves a company. But what about if a company doesn’t want the skills and knowledge a worker built up during their employment to leave in their head when they leave the building?

Dr S Matthew Liao, professor of bioethics at New York University, points out that it’s common for a company to ask for a laptop of phone back when you leave a job. But what if you had an implant in your brain that recorded data?

“The question is now, do they own that data, and can they ask for it back? Can they ask for it back — every time you leave work, can they erase it and put it back in the next morning?”

Bosses and workers may also find themselves at odds in other ways with BCIs. In a world where companies can monitor what staff do on their work computers or put cameras across the office in the name of maximum efficiency, what might future employers do with the contents of their BCIs? Would they be tempted to tap into the readings from a BCI to see just how much time a worker really spends working? Or just to work out who keeps stealing all the pens out of the stationery cupboard?

“As these technologies get more and more pervasive and invasive, we might need to rethink our rights in the workplace,” says Liao. “Do we have a right to mental privacy?”

Privacy may be the most obvious ethical concern around BCIs, but it’s for good reason: we want our thoughts to remain private, not just for our own benefit, but for others’ as well.

Who hasn’t told a lie to spare someone’s feelings, or thought cheerfully about doing someone harm, safe in the knowledge they have no intention of ever doing so? Who wouldn’t be horrified if they knew every single thought that their partner, child, parent, teacher, boss, or friend thought?

“If we were all able to see each other’s thoughts, it would be really bad – there wouldn’t be any society left,” said Liao.

If BCIs are to spread, perhaps the most important part of using ‘mind-reading’ systems is to know when to leave others’ thoughts well alone.

Scientists Demonstrate Direct Brain-to-Brain Communication in Humans

Work on an “Internet of brains” takes another step

By Robert Martone

The new paper addressed some of these questions by linking together the brain activity of a small network of humans. Three individuals sitting in separate rooms collaborated to correctly orient a block so that it could fill a gap between other blocks in a video game. Two individuals who acted as “senders” could see the gap and knew whether the block needed to be rotated to fit. The third individual, who served as the “receiver,” was blinded to the correct answer and needed to rely on the instructions sent by the senders.

The two senders were equipped with electroencephalographs (EEGs) that recorded their brain’s electrical activity. Senders were able to see the orientation of the block and decide whether to signal the receiver to rotate it. They focused on a light flashing at a high frequency to convey the instruction to rotate or focused on one flashing at a low frequency to signal not to do so. The differences in the flashing frequencies caused disparate brain responses in the senders, which were captured by the EEGs and sent, via computer interface, to the receiver. A magnetic pulse was delivered to the receiver using a transcranial magnetic stimulation (TMS) device if a sender signaled to rotate. That magnetic pulse caused a flash of light (a phosphene) in the receiver’s visual field as a cue to turn the block. The absence of a signal within a discrete period of time was the instruction not to turn the block.
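The senders’ flicker trick is a standard frequency-tagging setup: staring at a light flashing at a known rate produces a brain response at that same rate, which an EEG can pick out. A toy sketch of the decoding step, deciding which of two candidate flicker frequencies dominates a signal. The sample rate, frequencies, and the synthetic “EEG” below are illustrative stand-ins, not the study’s actual parameters:

```python
import math

def band_power(signal, fs, freq):
    """Power of `signal` at `freq`, via correlation with sine and
    cosine templates (a single-bin discrete Fourier transform)."""
    n = len(signal)
    s = sum(x * math.sin(2 * math.pi * freq * i / fs) for i, x in enumerate(signal))
    c = sum(x * math.cos(2 * math.pi * freq * i / fs) for i, x in enumerate(signal))
    return (s * s + c * c) / n

def classify(signal, fs, f_rotate=17.0, f_hold=15.0):
    """Return 'rotate' if the rotate-frequency response dominates."""
    if band_power(signal, fs, f_rotate) > band_power(signal, fs, f_hold):
        return "rotate"
    return "hold"

# Synthetic "EEG": a strong 17 Hz response with a weaker 15 Hz one,
# sampled at 250 Hz for two seconds.
fs = 250
sig = [math.sin(2 * math.pi * 17 * i / fs) + 0.3 * math.sin(2 * math.pi * 15 * i / fs)
       for i in range(fs * 2)]
print(classify(sig, fs))  # rotate
```

The real system of course classifies noisy multichannel recordings rather than a clean sinusoid, but the principle is the same: the decision is read off from which tagged frequency carries more power.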

After gathering instructions from both senders, the receiver decided whether to rotate the block. Like the senders, the receiver was equipped with an EEG, in this case to signal that choice to the computer.  Once the receiver decided on the orientation of the block, the game concluded, and the results were given to all three participants. This provided the senders with a chance to evaluate the receiver’s actions and the receiver with a chance to assess the accuracy of each sender.

The team was then given a second chance to improve its performance. Overall, five groups of individuals were tested using this network, called the “BrainNet,” and, on average, they achieved greater than 80 percent accuracy in completing the task.

In order to escalate the challenge, investigators sometimes added noise to the signal sent by one of the senders. Faced with conflicting or ambiguous directions, the receivers quickly learned to identify and follow the instructions of the more accurate sender. This process emulated some of the features of “conventional” social networks, according to the report.

This study is a natural extension of work previously done in laboratory animals. In addition to the work linking together rat brains, Nicolelis’s laboratory is responsible for linking multiple primate brains into a “Brainet” (not to be confused with the BrainNet discussed above), in which the primates learned to cooperate in the performance of a common task via brain-computer interfaces (BCIs). This time, three primates were connected to the same computer with implanted BCIs and simultaneously tried to move a cursor to a target. The animals were not directly linked to each other in this case, and the challenge was for them to perform a feat of parallel processing, each directing its activity toward a goal while continuously compensating for the activity of the others.

Brain-to-brain interfaces also span species, with humans using noninvasive methods similar to those in the BrainNet study to control cockroaches or rats that had surgically implanted brain interfaces. In one report, a human using a noninvasive brain interface linked, via computer, to the BCI of an anesthetized rat was able to move the animal’s tail; in another study, a human controlled a rat as a freely moving cyborg.

The investigators in the new paper point out that it is the first report in which the brains of multiple humans have been linked in a completely noninvasive manner. They claim that the number of individuals whose brains could be networked is essentially unlimited. Yet the information being conveyed is currently very simple: a yes-or-no binary instruction. Other than being a very complex way to play a Tetris-like video game, where could these efforts lead?

The authors propose that information transfer using noninvasive approaches could be improved by simultaneously imaging brain activity using functional magnetic resonance imaging (fMRI) in order to increase the information a sender could transmit. But fMRI is not a simple procedure, and it would expand the complexity of an already extraordinarily complex approach to sharing information. The researchers also propose that TMS could be delivered, in a focused manner, to specific brain regions in order to elicit awareness of particular semantic content in the receiver’s brain.

Meanwhile the tools for more invasive—and perhaps more efficient—brain interfacing are developing rapidly. Elon Musk recently announced the development of a robotically implantable BCI containing 3,000 electrodes to provide extensive interaction between computers and nerve cells in the brain. While impressive in scope and sophistication, these efforts are dwarfed by government plans. The Defense Advanced Research Projects Agency (DARPA) has been leading engineering efforts to develop an implantable neural interface capable of engaging one million nerve cells simultaneously. While these BCIs are not being developed specifically for brain–to-brain interfacing, it is not difficult to imagine that they could be recruited for such purposes.

Even though the methods used here are noninvasive and therefore appear far less ominous than if a DARPA neural interface had been used, the technology still raises ethical concerns, particularly because the associated technologies are advancing so rapidly. For example, could some future embodiment of a brain-to-brain network enable a sender to have a coercive effect on a receiver, altering the latter’s sense of agency? Could a brain recording from a sender contain information that might someday be extracted and infringe on that person’s privacy? Could these efforts, at some point, compromise an individual’s sense of personhood?

This work takes us a step closer to the future Nicolelis imagined, in which, in the words of the late Nobel Prize–winning physicist Murray Gell-Mann, “thoughts and feelings would be completely shared with none of the selectivity or deception that language permits.” In addition to being somewhat voyeuristic in its pursuit of complete openness, this vision misses the point. One of the nuances of human language is that often what is not said is as important as what is. The content concealed in the privacy of one’s mind is the core of individual autonomy. Whatever we stand to gain in collaboration or computing power by directly linking brains may come at the cost of things that are far more important.


Chinese researchers hail Google’s quantum computing breakthrough, call for more funds to catch up to US

  • Chinese researchers working on 50-qubit quantum computing technology are expected to achieve ‘quantum superiority’ by the end of next year
  • While Google and Chinese scientists celebrated the breakthrough, American rivals including IBM and Intel cast doubt over the claims


A Sycamore chip mounted on the printed circuit board during the packaging process. Photo: AFP

Chinese scientists have applauded Google’s claim of a breakthrough in quantum computing despite doubts from its American rivals, calling for continuous investment so they do not fall further behind the US in a field that promises to render supercomputers obsolete.

Sycamore, Google’s 53-qubit quantum computer, performed a calculation in 200 seconds that would take the world’s fastest supercomputer, IBM’s Summit, 10,000 years, according to a Google blog post; the research was also published in the journal Nature last Wednesday. With Sycamore, Google claims to have reached quantum supremacy, the point where a quantum computer can perform calculations that surpass anything the most advanced supercomputers today can do.

Guoping Guo, a professor at the University of Science and Technology of China and founder and chief scientist of Chinese start-up Origin Quantum, said the achievement was of “epoch-making significance”.

“Quantum supremacy is the turning point that has proven the superiority of quantum computers over classical computers,” said Guo. “If we fall behind in the next stage of general-purpose quantum computing, it would mean the difference between cold weapons and firearms.”

Google CEO Sundar Pichai with one of the company’s quantum computers. Photo: AFP

Quantum computers, which take a new approach to processing information, are theoretically capable of making calculations that are orders of magnitude faster than what the world’s most powerful supercomputers can do.

“With this breakthrough we’re now one step closer to applying quantum computing to – for example – design more efficient batteries, create fertiliser using less energy, and figure out what molecules might make effective medicines,” Google chief executive Sundar Pichai wrote in a separate post on Wednesday.

While Google celebrated its breakthrough, rivals including IBM and Intel cast doubt over the claims. IBM said Google did not tap the full power of its Summit supercomputer, which could have processed Google’s calculation in 2.5 days or faster with ideal simulation.

In a statement, Intel said quantum practicality is much further down the road.

Regardless of the different spin each company put on the achievement, Guo said the huge gap between 200 seconds and 2.5 days was sufficient for Google to claim quantum supremacy.
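The size of the gap Guo refers to is easy to check against the figures quoted above. A quick back-of-the-envelope sketch (my illustration, not anything published by Google or IBM):

```python
# Figures from the article: 200 seconds on Sycamore vs IBM's claimed
# 2.5 days on the Summit supercomputer under ideal simulation.
sycamore_seconds = 200
summit_seconds = 2.5 * 24 * 60 * 60  # 2.5 days expressed in seconds

speedup = summit_seconds / sycamore_seconds
print(speedup)  # 1080.0 -- still a roughly thousand-fold gap
```

Even on IBM’s more conservative estimate, the quantum machine comes out about three orders of magnitude ahead, which is the point Guo is making.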

Other Chinese researchers in the field pointed to the significance of the new technologies Google used in the experiment, such as the adjustable coupler used to connect qubits, rather than the claim of quantum supremacy itself.

“At this stage, the problems that quantum supremacy can solve have no practical value, but [Google] has demonstrated its ability to perform a computation on such a scale of 53 qubits,” said Huang Heliang, a researcher in superconducting quantum computing at the University of Science and Technology of China. “It is foreseeable that it could lead to breakthroughs and applications in fields such as machine learning in the near future.”

A qubit, or quantum bit, is the basic unit of quantum information, similar to the binary bit in classical computing.
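To make the comparison concrete, here is a minimal toy sketch in Python (illustrative only; the gate and state conventions are textbook material, not anything from Google’s Sycamore):

```python
import math

# A classical bit is 0 or 1, but a single qubit's state is a pair of
# complex amplitudes (a, b) with |a|^2 + |b|^2 = 1; measuring it yields
# 0 with probability |a|^2 and 1 with probability |b|^2.
def hadamard(state):
    """Apply a Hadamard gate, putting a basis state into superposition."""
    a, b = state
    s = 1 / math.sqrt(2)
    return (s * (a + b), s * (a - b))

qubit = (1.0, 0.0)        # starts in the |0> state
qubit = hadamard(qubit)   # now an equal superposition of |0> and |1>
probs = [abs(amp) ** 2 for amp in qubit]  # each outcome ~0.5

# An n-qubit register requires 2**n amplitudes to describe, which is why
# simulating 53 qubits classically is so costly:
print(2 ** 53)  # 9007199254740992 amplitudes
```

That exponential blow-up in the classical description is what makes the 53-qubit scale of the Sycamore experiment significant.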

Google is among a group of US technology companies as well as Chinese universities and companies racing to develop quantum computers amid intensifying technology competition between the world’s two biggest economies.


China filed almost twice as many patents as the US in 2017 for quantum technology, a category that includes communications and cryptology devices, according to market research firm Patinformatics. The US, however, leads the world in patents relating to the most prized segment of the field – quantum computers – thanks to heavy investment by IBM, Google, Microsoft and others.

Huang, who acknowledged China was still trying to catch up to the US because of a late start, said that falling behind in the current stage may not have a significant impact since quantum technology is still in its early days of development.

“But we have to be aware that the gap could easily widen if we don’t step up support and investment,” he said.

Chinese researchers working on 50-qubit quantum computing technology are expected to achieve quantum supremacy by the end of next year, he added.

Pan Jianwei (right) and Lu Chaoyang, two leading scientists in China’s quantum computing industry. Photo: Handout

The US was pouring US$200 million a year into quantum research, according to a 2016 government report.

Guo believes China is three to five years behind the US and Europe in breakthroughs, talent acquisition and other areas. He said the gap could be widening because most scientists in the field are going through the phase of publishing papers on basic research. The all-important application research phase was still burning through funding without any hope of commercialisation on the horizon, he added.

China has been stepping up efforts for its quantum ambitions in recent years but does not reveal total investment funding. Under the country’s 13th five-year plan introduced in 2016, Beijing launched a “megaproject” for quantum communications and computing which aimed to achieve breakthroughs by 2030.

In 2017 China started building the world’s largest quantum research facility in Hefei, in eastern China’s Anhui province, with the goal of developing a quantum computer. The National Laboratory for Quantum Information Sciences is a US$10 billion project due to open in 2020.

Chinese tech giants, including Baidu, Alibaba, Tencent – collectively known as BAT – and telecoms giant Huawei Technologies, have also recruited some of the country’s top scientists and set up labs for the development of quantum technologies.


Discover: Meet the Sudbury scientist who feeds minerals to microbes

Oct 22, 2019. Science is self-correcting. Every mistake that is made and corrected deepens our understanding of the world around us, Dr. Thomas Merritt of Laurentian University tells us.



I’m a geneticist. I study the connection between information and biology — essentially what makes a fly a fly, and a human a human. Interestingly, we’re not that different. It’s a fantastic job and I know, more or less, how lucky I am to have it.

I’ve been a professional geneticist since the early 1990s. I’m reasonably good at this, and my research group has done some really good work over the years. But one of the challenges of the job is coming to grips with the idea that much of what we think we “know” is, in fact, wrong.

Sometimes, we’re just off a little, and the whole point of a set of experiments is simply trying to do a little better, to get a little closer to the answer. At some point, though, in some aspect of what we do, it’s likely that we’re just flat out wrong. And that’s okay. The trick is being open-minded enough, hopefully, to see that someday, and then to make the change.

One of the amazing things about being a modern geneticist is that, generally speaking, people have some idea of what I do: work on DNA (deoxyribonucleic acid). When I ask a group of school kids what a gene is, the most common answer is “DNA.” And this is true, with some interesting exceptions. Genes are DNA and DNA is the information in biology.

For almost 100 years, biologists were certain that the information in biology was found in proteins and not DNA, and there were geneticists who went to the grave certain of this. How they got it wrong is an interesting story.

Genetics, microscopy (actually creating the first microscopes), and biochemistry were all developing together in the late 1800s. Not surprisingly, one of the earliest questions that fascinated biologists was how information was carried from generation to generation. Offspring look like their parents, but why? Why your second daughter looks like the postman is a question that came up later.

Early cell biologists were using the new microscopes to peer into the cell in ways that simply hadn’t been possible previously. They were finding thread-like structures in the interior of cells that passed from generation to generation, were similar within a species, but different between them. We now know these threads as chromosomes. Could these hold the information that scientists were looking for?

Advances in biochemistry paralleled those in microscopy and early geneticists determined that chromosomes were primarily made up of two types of molecules: proteins and DNA. Both are long polymers (chains) made up of repeated monomers (links in the chains). It seemed very reasonable that these chains could contain the information of biological complexity.

By analogy, think of a word as just a string of letters, a sentence as a chain of words, and a paragraph as a chain of sentences. We can think of chromosomes, then, as chapters, and all of our genetic information — what we now call our genome (all our genetic material) — as these chapters that make up a novel. The question to those early geneticists, then, was: Which string made up the novel? Was it protein or DNA?

You and I know the answer: DNA. Early geneticists, however, got it wrong and then passionately defended this wrong stance for eight decades. Why? The answer is simple. Protein is complicated. DNA is simple. Life is complicated. The alphabet of life, then, should be complicated — and protein fits that.

Proteins are made up of 20 amino acids — there are 20 different kinds of links in the protein chain. DNA is made up of only four nucleotides — there are only four different links in the DNA chain. Given the choice between a complicated alphabet and a simple one, the reasonable choice was the complicated one, namely protein. But, biology doesn’t always follow the obvious path and the genetic material was, and is, DNA.
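The arithmetic behind that intuition is worth spelling out. A short Python sketch (illustrative numbers of my choosing, not from the article) shows that a four-letter alphabet loses nothing in coding capacity once chain length is taken into account:

```python
# Coding capacity of a chain: alphabet_size ** length distinct sequences.
def sequences(alphabet_size, length):
    """Number of distinct chains of a given length."""
    return alphabet_size ** length

# A protein of 100 amino acids drawn from a 20-letter alphabet...
protein_space = sequences(20, 100)

# ...versus the 300 nucleotides of DNA (4-letter alphabet) that encode
# those same 100 amino acids, at three bases per amino acid.
dna_space = sequences(4, 300)

# 4**300 == 64**100, which is larger than 20**100: the "simple"
# alphabet keeps up fine, because chain length does the heavy lifting.
assert dna_space > protein_space
```

The lesson is the one the early geneticists missed: complexity can come from the length of the message, not the size of the alphabet.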

It took decades of experiments to disprove conventional wisdom and convince most people that biological information was in DNA. For some, it took James Watson and Francis Crick, using data misappropriated from Rosalind Franklin, deciphering the structure of DNA in 1953 to drive the nail into the protein coffin. It had just seemed too obvious that protein, with all its complexity, would be the molecule that coded for complexity.

These were some of the most accomplished and thoughtful scientists of their day, but they got it wrong. And that’s okay — if we learn from their mistakes.

It is too easy to dismiss this example as the foolishness of the past. We wouldn’t make this kind of mistake today, would we? I can’t answer that, but let me give you another example that suggests we would, and I’ll argue at the end that we almost certainly are.

I’m an American, and one of the challenges of moving to Canada was having to adapt to overcooked burgers (my mother still can’t accept that she can’t get her burger “medium” when she visits). This culinary challenge is driven by a phenomenon at the heart of one of the more interesting recent cases of scientists having it wrong and refusing to see it.

In the late 1980s, cows started wasting away and, in the late stages of what was slowly recognized as a disease, acting in such a bizarre manner that their disease, bovine spongiform encephalopathy, became known as Mad Cow Disease. Strikingly, the brains of the cows were full of holes (hence “spongiform”) and the holes were caked with plaques of proteins clumped together.

Really strikingly, the proteins were ones that are found in healthy brains, but now in an unnatural shape. Proteins are long chains, but they function because they have complex 3D shapes — think origami. Proteins fold and fold into specific shapes. But, these proteins found in sick cow brains had a shape not normally seen in nature; they were misfolded.

Sometime after, people started dying from the same symptoms and a connection was made between eating infected cows and contracting the disease (cows could also contract the disease, but likely through saliva or direct contact, and not cannibalism). Researchers also determined the culprit was consumption only of neural tissue, brain and spinal tissue, the very tissue that showed the physical effects of infection (and this is important).

One of the challenges of explaining the disease was the time-course from infection to disease to death; it was long and slow. Diseases, we knew, were transmitted by viruses and bacteria, but no scientist could isolate one that would explain this disease. Further, no one knew of other viruses or bacteria whose infection would take this long to lead to death. For various reasons, people leaned toward assuming a viral cause, and careers and reputations were built on finding the slow virus.

In the late 1980s, a pair of British researchers suggested that perhaps the shape, the folding, of the proteins in the plaques was key. Could the misfolding be causing the clumping that led to the plaques? This proposal was soon championed by Stanley Prusiner, a young scientist early in his career.

The idea was simple. The misfolded protein was itself both the result and the cause of the infection. Misfolded protein clumped together, forming plaques that killed brain tissue; it also caused correctly folded versions of the protein to misfold. The concept was straightforward, but completely heretical. Disease, we knew, did not work that way. Diseases are transmitted by viruses or bacteria, but the information is transmitted as DNA (and, rarely, RNA, a closely related molecule). Disease is not transmitted in protein folding (although in 1963 Kurt Vonnegut had predicted such a model for world-destroying ice formation in his amazing book Cat’s Cradle).

For holding this protein-based view of infection, Prusiner was literally and metaphorically shouted out of the room. Then he showed, experimentally and elegantly, that misfolded proteins, which he called “prions,” were the cause of these diseases, of both symptoms and infection.

For this accomplishment, he was awarded the 1997 Nobel Prize in Medicine. He, and others, were right. Science, with a big S, was wrong. And that’s okay. We now know that prions are responsible for a series of diseases in humans and other animals, including Chronic Wasting Disease, the spread of which poses a serious threat to deer and elk here in Ontario.

Circling back, the overcooked-burger phenomenon comes down to these proteins. If you heat the prions sufficiently, they lose their unnatural shape (all shape, actually) and the beef is safe to eat. A well-done burger will guarantee no infectious prions, while a medium one will not. We don’t have this issue in the U.S. because cows south of the border are less likely to have been infected with the prions than their northern counterparts (or at least Americans are willing to pretend this is the case).

Where does this leave us? To me, the take-home message is that we need to remain skeptical, but curious. Examine the world around you with curious eyes, and be ready to challenge and question your assumptions.

Also, don’t ignore the massive things in front of your eyes simply because they don’t fit your understanding of, or wishes for, the world around you. Climate change, for example, is real and will likely make this a more difficult world for our children. I’ve spent a lot of time in my career putting together models of how the biological world works, but I know pieces of these models are wrong.

I can almost guarantee you that I have something as fundamentally wrong as those early geneticists stuck on protein as the genetic material of cells or the prion-deniers; I just don’t know what it is. Yet.

And, this situation is okay. The important thing isn’t to be right. Instead, it is to be open to seeing when you are wrong.

Dr. Thomas Merritt is the Canada Research Chair in Genomics and Bioinformatics at Laurentian University.

The Origin of Consciousness in the Brain Is About to Be Tested


Here’s something you don’t hear every day: two theories of consciousness are about to face off in the scientific fight of the century.

Backed by some of today’s top theorists of consciousness, including Christof Koch, head of the formidable Allen Institute for Brain Science in Seattle, Washington, the fight hopes to put two rival ideas of consciousness to the test in a $20 million project. Briefly, volunteers will have their brain activity scanned while performing a series of cleverly designed tasks targeted to suss out the brain’s physical origin of conscious thought. The first phase was launched this week at the Society for Neuroscience annual conference in Chicago, a brainy extravaganza that draws over 20,000 neuroscientists each year.

Both sides agree to make the fight as fair as possible: they’ll collaborate on the task design, pre-register their predictions on public ledgers, and if the data supports only one idea, the other acknowledges defeat.

The “outlandish” project is already raising eyebrows. While some applaud the project’s head-to-head approach, which rarely occurs in science, others question if it’s all a publicity stunt. “I don’t think [the competition] will do what it says on the tin,” said Dr. Anil Seth, a neuroscientist at the University of Sussex in Brighton, UK, explaining that the whole trial is too “philosophical.” Rather than unearthing how the brain brings outside stimuli into attention, he said, the fight focuses more on where and why consciousness emerges, an area where the number of theories grows every year.

Then there’s the religion angle. The project is sponsored by the Templeton World Charity Foundation (TWCF), a philanthropic foundation that walks the line between science and faith. Although spirituality isn’t taboo to consciousness theorists—many embrace it—TWCF is a rather unorthodox player in the neuroscientific field.

Despite immediate controversy, the two sides aren’t deterred. “Theories are very flexible. Like vampires, they’re very difficult to slay,” said Koch. Even if the project can somewhat narrow down divergent theories of consciousness, we’re on our way to cracking one of the most enigmatic properties of the human brain.

With the rise of increasingly human-like machines, and efforts to promote communication with locked-in patients, the need to understand consciousness is especially salient. Can AI ever be conscious, and should we give it rights? What about people’s awareness during and after anesthesia? How do we reliably measure consciousness in fetuses inside their mothers’ wombs—a tricky question leveraged in abortion debates—or in animals?

Even if the project doesn’t produce a definitive solution to consciousness, it’ll drive scientists loyal to different theoretical aisles to talk and collaborate—and that in itself is already a laudable achievement.

“What we hope for is a process that reduces the number of incorrect theories,” said TWCF president Andrew Serazin. “We want to reward people who are courageous in their work, and part of having courage is having the humility to change your mind.”

Meet the Contestants

How physical systems give rise to subjective experience is dubbed the “hard problem” of consciousness. Although neuroscientists can measure the crackling of electrical activity among neurons and their networks, no one understands how consciousness emerges from individual spikes. The sense of awareness and self simply can’t be reduced down to neuronal pulses, at least with our current state of understanding. What’s more, what exactly is consciousness? A broad stroke describes it as a capacity to experience something, including one’s own existence, rather than documenting it like an automaton—a vague enough picture that leaves plenty of room for theories as to how consciousness actually works.

In all, the project hopes to tackle nearly a dozen top theories of consciousness. But the first two in the boxing ring are also the most prominent: one is the Global Workspace Theory (GWT), championed by Dr. Stanislas Dehaene of the Collège de France in Paris. The other is the Integrated Information Theory (IIT), proposed by Dr. Giulio Tononi of the University of Wisconsin in Madison and backed by Koch.

The GWT describes an almost algorithmic view. Conscious behavior arises when we can integrate and segregate information from multiple input sources—for example, eyes, ears, or internal ruminations—and combine it into a piece of data in a global workspace within the brain. This mental sketchpad forms a bottleneck in conscious processing, in that only items in our attention are available to the entire brain for use—and thus for a conscious experience of it. For another to enter awareness, previous data have to leave.

In this way, the workspace itself “creates” consciousness, and acts as a sort of motivational whip to drive actions. Here’s the crux: according to Dehaene, brain imaging studies in humans suggest that the main “node” exists at the front of the brain, or the prefrontal cortex, which acts like a central processing unit in a computer. It’s algorithmic, input-output based, and—like all computers—potentially hackable.
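As a rough caricature of that bottleneck-and-broadcast idea, here is a toy sketch in Python (my illustration, not Dehaene’s model; the module names and salience scores are invented):

```python
# Toy Global Workspace: many inputs compete, a single winner occupies
# the workspace at a time, and that winner is broadcast to every module.
class GlobalWorkspace:
    def __init__(self, subscribers):
        self.content = None          # the one "conscious" item
        self.subscribers = subscribers

    def compete(self, candidates):
        # The highest-salience input wins the bottleneck...
        winner = max(candidates, key=lambda c: c["salience"])
        self.content = winner["data"]    # ...displacing whatever was there
        for module in self.subscribers:  # broadcast to the whole "brain"
            module.append(self.content)

memory, motor = [], []
gw = GlobalWorkspace([memory, motor])
gw.compete([{"data": "phone ringing", "salience": 0.9},
            {"data": "hum of the fridge", "salience": 0.2}])
print(gw.content)  # phone ringing -- the fridge hum never reaches awareness
```

The sketch captures the two claims the experiments target: only one item at a time holds the workspace, and entering it is what makes the item globally available.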

IIT, in contrast, takes a more globalist view. Consciousness arises from the measurable, intrinsic interconnectedness of brain networks. Under the right architecture and connective features, consciousness emerges. Unlike the GWT, which begins with understanding what the brain does to create consciousness, IIT begins with the awareness of experience—even if it’s just an experience of self rather than something external. When neurons connect in the “right” way under the “right” circumstances, the theory posits, consciousness naturally emerges to create the sensation of experience.

In contrast to GWT, IIT believes this emergent process happens at the back of the brain—here, neurons connect in a grid-like structure that hypothetically should be able to support this capacity. To IIT subscribers, GWT describes a feed-forward scenario that’s similar to digital computers and zombies—entities that act conscious but don’t truly possess the experience. According to Koch, consciousness is rather “a system’s ability to be acted upon by its own state in the past and to influence its own future. The more a system has cause-and-effect power, the more conscious it is.”

The Showdown

To test the ideas, six labs across the world will run experiments with over 500 participants, using three different types of brain recordings as the participants perform various consciousness-related tasks. By adopting functional MRI to spot brain metabolic activity, EEG for brain waves, and ECoG (a type of EEG with electrodes placed directly on the brain), the trial hopes to gather enough replicable data to satisfy even the most skeptical members of the opposing camps.

For example, one experiment will track the brain’s response as a participant becomes aware of an image: the GWT believes the prefrontal cortex will activate, whereas the IIT says to keep your eyes on the back of the brain.

According to Quanta Magazine, the showdown will get a top journal to commit to publishing the outcomes of the experiments, regardless of the result. In addition, the two main camps are required to publicly register specific predictions of the results, based on their theories. Neither party will collect or interpret the data, to avoid potential conflicts of interest. And ultimately, if the results come back conclusively in favor of one idea, the other will acknowledge defeat.

What the trial doesn’t answer, of course, is how neural computations lead to consciousness. A recent theory, based on thermodynamics in physics, suggests that neural networks in a healthy brain naturally organize together according to energy costs into a sufficient number of connection “microstates” that lead to consciousness. Too many or too few microstates and the brain loses its adaptability, processing powers, and sometimes the ability to keep itself online.

Despite misgivings, TWCF’s Potgieter sees the project as an open, collaborative step forward in a messy domain. It’s “the first time ever that such an audacious, adversarial collaboration has been undertaken and formalized within the field of neuroscience,” he said.

Tononi, the backer of IIT, agrees. “It forces the proponents to focus and enter some common framework. I think we all stand to gain one way or another,” he said.

Shelly Xuelai Fan is a neuroscientist-turned-science writer. She completed her PhD in neuroscience at the University of British Columbia, where she developed novel treatments for neurodegeneration. While studying biological brains, she became fascinated with AI and all things biotech. Following graduation, she moved to UCSF to study blood-based factors that rejuvenate aged brains.