Google Cofounder Sergey Brin Warns of AI’s Dark Side

Google co-founder Sergey Brin has warned that the current boom in artificial intelligence has created a “technology renaissance” that contains many potential threats. In the company’s annual Founders’ Letter, the Alphabet president struck a note of caution. “The new spring in artificial intelligence is the most significant development in computing in my lifetime,” writes Brin. “Every month, there are stunning new applications and transformative new techniques.” But, he adds, “such powerful tools also bring with them new questions and responsibilities.” From a report:

When Google was founded in 1998, Brin writes, the machine learning technique known as artificial neural networks, invented in the 1940s and loosely inspired by studies of the brain, was “a forgotten footnote in computer science.” Today the method is the engine of the recent surge in excitement and investment around artificial intelligence. The letter unspools a partial list of where Alphabet uses neural networks, for tasks such as enabling self-driving cars to recognize objects, translating languages, adding captions to YouTube videos, diagnosing eye disease, and even creating better neural networks.

Brin nods to the gains in computing power that have made this possible. He says the custom AI chip running inside some Google servers is more than a million times more powerful than the Pentium II chips in Google’s first servers. In a flash of math humor, he says that Google’s quantum computing chips might one day offer jumps in speed over existing computers that can only be described with the number that gave Google its name: a googol, or a 1 followed by 100 zeroes.

As you might expect, Brin expects Alphabet and others to find more uses for AI. But he also acknowledges that the technology brings possible downsides. “Such powerful tools also bring with them new questions and responsibilities,” he writes. AI tools might change the nature and number of jobs, or be used to manipulate people, Brin says — a line that may prompt readers to think of concerns around political manipulation on Facebook. Safety worries range from “fears of sci-fi style sentience to the more near-term questions such as validating the performance of self-driving cars,” Brin writes.

ASI: Will It Save Or Destroy Us?

Artificial Super Intelligence

Although Artificial Super Intelligence (ASI) currently remains the stuff of fiction, many experts believe this field of Artificial Intelligence (AI) – defined by Oxford professor and AI specialist Nick Bostrom as “an intellect that is much smarter than the best human brains in practically every field, including scientific creativity, general wisdom and social skills” – could become a reality within our lifetimes. As a branch of computer science, AI is commonly divided into three categories: narrow intelligence, general intelligence, and superintelligence. For many in the field, the debate is no longer whether artificial intelligence will become smarter than humans, only when.

But where exactly is ASI headed? Some of the most exciting predictions include harnessing the technology to overcome diseases like Alzheimer’s and cancer, rewrite our bug-ridden genetic code, eradicate suffering, improve climate science, and perhaps make our species immortal by the middle of the 21st century. Futurist and singularity enthusiast Ray Kurzweil, Google’s Director of Engineering, has pegged 2029 as the date when computer intelligence will reach human levels. After this, he thinks the Singularity – the moment when AI reaches a level that’s “a billion times more powerful than all human intelligence today” – will take place in 2045.

It would be an understatement to say we’re in uncharted territory, especially considering that the spectrum of AI extends much further than we currently imagine. That’s why people like Microsoft co-founder Bill Gates, the late theoretical physicist Stephen Hawking and billionaire entrepreneur Elon Musk think AI could be humanity’s downfall. In fact, in 2016 Hawking was quoted as saying that AI could be either the best or the worst invention humanity has ever made.

What will happen, or what the consequences of the inevitable AI Revolution will be, is anyone’s guess, but as Wait But Why‘s Tim Urban writes: “While most scientists I’ve come across acknowledge that ASI would have the ability to send humans to extinction, many also believe that used beneficially, ASI’s abilities could be used to bring individual humans, and the species as a whole, to…species immortality”.

In other words, success in creating ASI could be the biggest event in the history of our civilization.

Football heading could worsen cognitive functions, reveals study

The findings suggest that efforts to reduce long-term brain injuries may be focusing too narrowly on preventing accidental head collisions.


Canadian researchers create tiny ‘lab on a chip’ to test for infectious diseases

Canadian researchers have created a tiny, portable device they say can do the same work as a larger blood testing lab, in a fraction of the time and cost.

Their newly developed “lab on chip” device can quickly perform blood tests and determine whether people living in areas without access to standard laboratories are at risk for measles and rubella.

The University of Toronto researchers have cheekily named their device “Mr. Box,” or MR (measles, rubella) Box. They say it can do on-the-spot tests to check whether people have developed antibodies against measles and rubella — two diseases that kill or injure hundreds of thousands in developing countries every year.

The device relies on a technology called digital microfluidics. A pinprick drop of blood is placed onto a specially created microchip and then placed into the device, which then moves the blood drop around the chip using electrostatic pulses.

As the blood drop moves, the device looks for the presence of antibodies to measles or rubella, which would indicate whether a person has developed an immunity to the disease, either through previous infection or vaccination.

The researchers recently published test results of Mr. Box in the journal Science Translational Medicine and report that the device is not only simple and inexpensive, but also offers results much faster than a traditional lab.

The team tested the device by travelling to the Kakuma refugee camp in northwestern Kenya, which had recently undergone a measles and rubella immunization campaign.

The team tested small blood samples from children and adults to see whether they had developed immunity, and then sent the samples to Kenya’s national laboratory in Nairobi to compare the results.

The low-cost device matched the international laboratory-standard reference tests of the Kenyan Medical Research Institute.

“It was 86 per cent for measles and 84 per cent for rubella…these patients wouldn’t even have those results, because they are in a refugee camp,” said Julian Lamanna, a PhD candidate in chemistry who was part of the research team.

The accuracy of the device will continue to improve, which could allow quick blood tests to be performed in the field, say the researchers.

“Right now, in refugee camps, all tests have to be done several hundred kilometres away… if you can do the test right next to the patient, you can get the information rapidly,” Darius Rackus, a post-doctoral researcher in chemistry, told CTV News.

Rackus hopes that “miniaturizing” disease testing will one day allow testing to be done at airports or other points of entry, to provide fast and inexpensive disease surveillance.

“Just like computers used to take up entire rooms, yet everyone has one in their pocket now, maybe one day all our lab tests can be done on something that fits in your pocket,” he said.

What’s more, the technology could be adapted to test for other infections, says Lamanna.

“We have a very flexible platform,” he said. “So we are also working on tests for malaria and Zika, which are slightly different in their biochemistry. But you can imagine this flexible architecture being used for testing for a whole variety of different diseases.”

This will allow experts to rapidly mobilize to head off outbreaks before they happen, or be there with boots on the ground after the outbreak starts.

“We simply don’t have it today. This could lead us to that place,” said Aaron Wheeler of the Institute of Biomaterials and Biomedical Engineering at the University of Toronto.

The researchers are planning to make their platform design “open source,” so that others can help improve on it.

Three radical new user interfaces

Holographic videoconferencing, a smart wall, and a smart-watch projector offer new ways to interact with data and each other
April 27, 2018

Holodeck-style holograms could revolutionize videoconferencing

A “truly holographic” videoconferencing system has been developed by researchers at Queen’s University in Canada. With TeleHuman 2, objects appear as stereoscopic images, as if inside a pod (not a two-dimensional video projected on a flat piece of glass). Multiple users can walk around and view the objects from all sides simultaneously — as in Star Trek’s Holodeck.

Teleporting for distance meetings. TeleHuman 2 “teleports” people live — allowing for meetings at a distance. No headset or 3D glasses required.

The researchers presented the system in an open-access paper at CHI 2018, the ACM CHI Conference on Human Factors in Computing Systems, in Montreal on April 25.

(Left) Remote capture room with stereo 2K cameras, multiple surround microphones, and displays. (Right) TeleHuman 2 display and projector (credit: Human Media Lab)

Interactive smart wall acts as giant touch screen, senses electromagnetic activity in room

Researchers at Carnegie Mellon University and Disney Research have devised a system called Wall++ for creating interactive “smart walls” that sense human touch, gestures, and signals from appliances.

By using masking tape and nickel-based conductive paint, a user would create a pattern of capacitive-sensing electrodes on the wall of a room (or a building) and then paint it over. The electrodes would be connected to sensors.

Wall ++ (credit: Carnegie Mellon University)

Acting as a sort of huge tablet, the wall could support touch-tracking or motion-sensing uses such as dimming or turning lights on/off, controlling speaker volume, acting as a smart thermostat, playing full-body video games, or creating a huge digital white board.

A passive electromagnetic sensing mode could also allow for detecting devices that are on or off (by noise signature). And a small, signal-emitting wristband could enable user localization and identification for collaborative gaming or teaching, for example.

The researchers also presented an open-access paper at CHI 2018.

A smart-watch screen on your skin

LumiWatch, another interactive interface out of Carnegie Mellon, projects a smart-watch touch screen onto your skin. It solves the tiny-interface bottleneck with smart watches — providing more than five times the interactive surface area for common touchscreen operations, such as tapping and swiping. It was also presented in an open-access paper at CHI 2018.

Stunning Mars photo shocks the Internet

A remarkable photo has conspiracy theorists claiming it shows a carved image of an ancient warrior woman, although that is very unlikely.

We finally have proof that aliens once lived on Mars, claim conspiracy theorists, pointing to this photo as clearly depicting a carved image left by an ancient warrior race that once lived on Mars. Or it is just another oddly shaped rock, if you would rather believe any scientist you ask about it.

The latest images from NASA’s Mars Curiosity rover caught the attention of Internet sleuths, who claim to have spotted a strange-looking statue that resembles the head of a warrior woman. The “statue head” was found in Gale Crater, where the Curiosity rover is currently conducting its research.

“I have found what seems to be a small feminine looking statue head on Mars in Gale Crater in this recent Curiosity Rover image from NASA. Only a few inches in size or less. It resembles a carved depiction of a female warrior wearing a helmet similar to some found on Earth from the Middle Ages,” Joe White, a space video journalist from Bristol, UK, said according to news reports. “It has a possible emblem on the forehead and some very interesting facial features that look almost Egyptian in artistic style. This is one of hundreds of similar artefacts that I have found on Mars in recent years and may go further to show that there was an ancient and artistic civilisation on Mars in the past.”

Of course, this is not the first time conspiracy theorists claim to have seen alien design in rather mundane geological features. There are literally dozens of other images where Martian believers have extrapolated an alien cause for what can easily be explained as natural variation in rocks.

In reality, scientists do not believe that there was ever any intelligent life on Mars. They are hoping to discover evidence that some form of life once existed on the Red Planet, but it would be in basic, microbial form.

The following is an excerpt from Wikipedia on pareidolia, which describes the mind’s tendency to perceive patterns like faces where none exist.

Pareidolia is a psychological phenomenon in which the mind responds to a stimulus, usually an image or a sound, by perceiving a familiar pattern where none exists.

Common examples are perceived images of animals, faces, or objects in cloud formations, the Man in the Moon, the Moon rabbit, hidden messages within recorded music played in reverse or at higher- or lower-than-normal speeds, and hearing indistinct voices in random noise such as that produced by air conditioners or fans.

The following is an excerpt on Cydonia, a region on Mars that is a good example of pareidolia.

Cydonia is a region on the planet Mars that has attracted both scientific and popular interest. The name originally referred to the albedo feature (distinctively coloured area) that was visible from Earthbound telescopes. The area borders plains of Acidalia Planitia and the Arabia Terra highlands. The area includes the regions: “Cydonia Mensae”, an area of flat-topped mesa-like features, “Cydonia Colles”, a region of small hills or knobs, and “Cydonia Labyrinthus”, a complex of intersecting valleys. As with other albedo features on Mars, the name Cydonia was drawn from classical antiquity, in this case from Kydonia, a historic polis (or “city-state”) on the island of Crete. Cydonia contains the “Face on Mars”, located about halfway between Arandas Crater and Bamberg Crater.

The Cydonia facial pareidolia inspired individuals and organizations interested in extraterrestrial intelligence and visitations to Earth, and the images were published in this context in 1977. Some commentators, most notably Richard C. Hoagland, believe the “Face on Mars” to be evidence of a long-lost Martian civilization along with other features they believe are present, such as apparent pyramids, which they argue are part of a ruined city.

While accepting the “face” as a subject for scientific study, astronomer Carl Sagan criticized much of the speculation concerning it in the chapter “The Man in the Moon and the Face on Mars” in his book The Demon-Haunted World. The “face” is also a common topic among skeptics groups, who use it as an example of credulity. They point out that there are other faces on Mars but these do not elicit the same level of study. One example is the Galle Crater, which takes the form of a smiley, while others resemble Kermit the Frog or other celebrities. On this latter similarity, Discover magazine’s “Skeptical Eye” column ridiculed Hoagland’s claims, asking if he believed the aliens were fans of Sesame Street.

The following is a Wikipedia excerpt on the Mars Curiosity rover.

Curiosity is a car-sized rover designed to explore Gale Crater on Mars as part of NASA’s Mars Science Laboratory (MSL) mission. Curiosity was launched from Cape Canaveral on November 26, 2011, at 15:02 UTC aboard the MSL spacecraft and landed on Aeolis Palus in Gale Crater on Mars on August 6, 2012, 05:17 UTC. The Bradbury Landing site was less than 2.4 km (1.5 mi) from the center of the rover’s touchdown target after a 560 million km (350 million mi) journey. The rover’s goals include an investigation of the Martian climate and geology; assessment of whether the selected field site inside Gale Crater has ever offered environmental conditions favorable for microbial life, including investigation of the role of water; and planetary habitability studies in preparation for human exploration.

In December 2012, Curiosity’s two-year mission was extended indefinitely. On August 5, 2017, NASA celebrated the fifth anniversary of the Curiosity rover landing and related exploratory accomplishments on the planet Mars.

Curiosity’s design will serve as the basis for the planned Mars 2020 rover. As of April 29, 2018, Curiosity has been on Mars for 2037 sols (2092 total days) since landing on August 6, 2012.

Shigir Idol could be oldest piece of monumental art: study

The wooden statue once reportedly stood 16 feet tall, and was created right after the Ice Age ended

When did the notion of art begin? What was the first piece of artwork created by a human? These are questions that have long been debated among archaeologists and anthropologists. A recent study published in the journal Antiquity, however, makes the case that there are compelling reasons to believe that a statue known as the Shigir Idol—discovered in Russia in 1894—is one of the oldest examples of monumental art.
According to researchers from the Russian Academy of Sciences and the University of Göttingen, a new analysis calculated that the statue could be nearly 11,600 years old. To put that in context, the Ice Age ended about 11,700 years ago. The idol was previously on display at a Russian museum, and researchers thought it was only a few thousand years old, but the recent round of radiocarbon dating suggests otherwise. Additional examination also reportedly discovered new markings on the statue, which once stood 16 feet tall.

As the abstract of the study explains:

“Recent application of new analytical techniques has led to the discovery of new imagery on its surface, and has pushed the date of the piece back to the earliest Holocene. The results of these recent analyses are placed here in the context of local and extra-local traditions of comparable prehistoric art. This discussion highlights the unique nature of the find and its significance for appreciating the complex symbolic world of Early Holocene hunter-gatherers.”

“We have to conclude hunter-gatherers had complex ritual and expression of ideas. Ritual doesn’t start with farming, but with hunter-gatherers,” Thomas Terberger, an archaeologist at the University of Göttingen in Germany and a co-author of the study, said via Science magazine.

Science magazine writer Andrew Curry reported that the first radiocarbon analysis of the statue suggested the piece was 9,800 years old. That age caused controversy, because some scientists reportedly claimed that hunter-gatherers couldn’t have created such a piece of art. The researchers behind the latest study took samples from the piece in 2014, which yielded the new age.

“The further you go inside, the older [the date] becomes—it’s very indicative some sort of preservative or glue was used,” Olaf Jöris, an archaeologist at the Monrepos Archaeological Research Centre and Museum for Human Behavioural Evolution—who wasn’t involved with the study—told Science magazine.

Peter Vang Petersen, an archaeologist at The National Museum of Denmark in Copenhagen (who also was not involved with the study) explained to Science magazine that prehistoric art changed as Earth transitioned out of the Ice Age.

“Figurative art in the Paleolithic and naturalistic animals painted in caves and carved in rock all stop at the end of the ice age. From then on, you have very stylized patterns that are hard to interpret,” Petersen said. “They’re still hunters, but they had another view of the world.”

Coauthor and archaeologist Mikhail Zhilin of the Russian Academy of Sciences in Moscow told Science magazine the piece of art could depict local forest spirits or demons.

Little is known about the society that carved the idol, though. According to Science magazine, Zhilin has returned to the site to excavate, and he and his team have reportedly found small bone points, daggers and elk antlers with carvings of animal faces. He also commented that the makers’ knowledge of how to work wood is impressive.

“They knew how to work wood perfectly,” Zhilin said.

Terberger told the magazine, “Wood normally doesn’t last.”

“I expect there were many more of these and they’re not preserved,” he said.

It is this kind of human curiosity that leads to discovery and the advancement of humankind. At the very least, the study contributes to answering the question of how we know what we know.

Life-size holograms for 3D video-conferences developed

Scientists have developed the world’s first truly holographic video-conference system, which allows people in different locations to appear before one another in life-size 3D, as if they were in the same room.

Capturing the remote 3D image with an array of depth cameras, the team has ‘teleported’ live, 3D images of a human from one room to another – a feat that is set to revolutionise human telepresence.

Using a ring of intelligent projectors mounted above and around a retro-reflective, human-sized cylindrical pod, researchers from Queen’s University in Canada were able to project objects as light fields that can be walked around and viewed from all sides simultaneously by multiple users – much like Star Trek’s famed, fictional ‘holodeck’.

Because the display projects a light field with many images, one for every degree of angle, users need not wear a headset or 3D glasses to experience each other in augmented reality.

“Face-to-face interaction transfers an immense amount of non-verbal information,” said Roel Vertegaal, a professor at Queen’s University. “This information is lost in online tools, promoting poor online behaviours. Users miss the proxemics, gestures, facial expressions, and eye contact that bring nuance, emotional connotation and ultimately empathy to a conversation,” Vertegaal said.

“TeleHuman 2 injects these missing elements into long-distance conversations with a realism that cannot be achieved with a Skype or Facetime video chat,” he said. Vertegaal first debuted the TeleHuman technology in 2012, but at that time the device only allowed for a single viewer to see the holographic projection correctly. With TeleHuman 2, multiple participants are able to see their holographic friend or colleague, each from their individual perspective.

To test the system, Vertegaal had users judge angles at which a robotic arrow, mounted on a tripod, was pointing whilst physically present in the room, and whilst rendered on the TeleHuman 2. They did not judge the angles between the real and the virtual representation as significantly different.

“In a professional environment like a meeting, our latest edition of TeleHuman technology will do wonders for attendees looking to address colleagues with eye contact or to more effectively manage turn taking,” said Vertegaal. “But it has potential beyond professional situations. Think of a large music festival, and now imagine a performer capable of appearing simultaneously, and in true 3D, on TeleHuman 2 devices throughout the venue – bringing a whole new level of audience intimacy to a performance,” he said.

“The TeleHuman technology could even mitigate environmental impacts of business travel – enabling organisations to conduct more engaging and effective meetings from a distance, rather than having to appear in person,” he added.

Computronium universe – computation limits of computronium and limits to the universe

He discusses this happening within 200 years if wormholes or some other means allow faster-than-light travel. What would the computation limits of computronium be?

There are several physical and practical limits to the amount of computation or data storage that can be performed with a given amount of mass, volume, or energy:

* The Bekenstein bound limits the amount of information that can be stored within a spherical volume to the entropy of a black hole with the same surface area.
* Thermodynamics limits the data storage of a system based on its energy, number of particles and particle modes. In practice, this is a stronger bound than the Bekenstein bound.
* Landauer’s principle defines a lower theoretical limit for energy consumption: kT ln 2 joules consumed per irreversible state change, where k is the Boltzmann constant and T is the operating temperature of the computer. Reversible computing is not subject to this lower bound. T cannot, even in theory, be made lower than 3 kelvins, the approximate temperature of the cosmic microwave background radiation, without spending more energy on cooling than is saved in computation.
* Bremermann’s limit is the maximum computational speed of a self-contained system in the material universe, and is based on mass-energy versus quantum uncertainty constraints.
* The Margolus–Levitin theorem sets a bound on the maximum computational speed per unit of energy: 6 × 10^33 operations per second per joule. This bound, however, can be avoided if there is access to quantum memory. Computational algorithms can then be designed that require arbitrarily small amounts of energy/time per elementary computation step.
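The per-bit and per-joule figures above are concrete enough to plug in directly. Below is a minimal sketch of the Landauer and Margolus–Levitin numbers; the 300 K room temperature and 1 W power budget are illustrative assumptions, not values from the text:

```python
import math

k_B = 1.380649e-23  # Boltzmann constant, J/K
T_ROOM = 300.0      # illustrative room temperature, K
T_CMB = 2.725       # cosmic microwave background temperature, K

# Landauer's principle: minimum energy dissipated per irreversible state change
landauer_room = k_B * T_ROOM * math.log(2)  # ~2.9e-21 J per bit
landauer_cmb = k_B * T_CMB * math.log(2)    # ~2.6e-23 J per bit at the CMB floor

# Margolus-Levitin bound: at most ~6e33 operations per second per joule,
# so a computer drawing 1 W could perform at most ~6e33 ops/s at the bound
ML_OPS_PER_JOULE = 6e33
watt_budget = 1.0
max_ops_per_sec = ML_OPS_PER_JOULE * watt_budget

print(f"Landauer limit at 300 K: {landauer_room:.2e} J/bit")
print(f"Landauer limit at CMB temperature: {landauer_cmb:.2e} J/bit")
print(f"Margolus-Levitin ceiling at 1 W: {max_ops_per_sec:.1e} ops/s")
```

The hundred-fold gap between the two Landauer figures is why the list notes that cooling below the CMB temperature stops paying off: the savings per bit shrink while the refrigeration cost grows.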

It is unclear what the computational limits are for quantum computers.

In The Singularity Is Near, Ray Kurzweil cites the calculations of Seth Lloyd that a universal-scale computer is capable of 10^90 operations per second. This would likely be for the observable universe reachable at near light speed. The mass of the universe can be estimated at 3 × 10^52 kilograms. If all matter in the universe were turned into a black hole, it would have a lifetime of 2.8 × 10^139 seconds before evaporating due to Hawking radiation. During that lifetime such a universal-scale black-hole computer would perform 2.8 × 10^229 operations.
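The final figure is just the product of the two quoted estimates, which is easy to verify:

```python
# Reproducing the back-of-the-envelope arithmetic from the paragraph above,
# using its own quoted estimates (Lloyd's ops/s and the evaporation lifetime).
ops_per_second = 1e90   # universal-scale computer, operations per second
lifetime_s = 2.8e139    # quoted evaporation lifetime of a universe-mass black hole
total_ops = ops_per_second * lifetime_s
print(f"{total_ops:.1e}")  # 2.8e+229, the figure quoted in the text
```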

The universe itself is vastly bigger than the observable universe. If the speed of light is not a limit, then travel throughout the multiverse may also not be limited.

If faster-than-light travel is possible, can we go to the limits of this universe, or of the multiverse?

The limit on our observable universe is not simply the age of the universe multiplied by the speed of light, which would give 13.799 billion light-years, for two reasons.

Eyes wide, jaw agape: See the new map of the Milky Way


Thursday, April 26, 2018, 7:11 PM – Are you ready to see the most detailed map of the Milky Way, ever, tracking the motions of around 1.7 billion of our galaxy’s stars? Check out the amazing gift that the ESA’s Gaia mission just delivered to us all!

After nearly two years of scanning the immense galaxy that surrounds our tiny world, the European Space Agency’s Gaia telescope has returned its second batch of data. The first batch, which we got a look at back in September of 2016, gave us the positions and motions of around 200 million stars – quite an accomplishment.

Now, as of April 25, 2018, Gaia lets us see the detailed locations and motions of 1.7 BILLION stars!


Gaia’s all-sky view of our Milky Way Galaxy and neighbouring galaxies, based on measurements of nearly 1.7 billion stars. The map shows the total brightness and colour of stars observed by the ESA satellite in each portion of the sky between July 2014 and May 2016. Click or tap to view the full 46 million pixel version. It’s a big file (15.6MB), but it’s TOTALLY WORTH IT! Credit: ESA/Gaia/DPAC

This static image isn’t the only way we can experience this new map, though. The ESA team performed an amazing feat of stellar cartography, and put together a complete, 360-degree presentation of Gaia’s results, complete with the motions of the stars!

Watch below, and pan about in this 360 degree, three-dimensional map of the Milky Way.


If you want more time to peruse the above map, be sure to pause the video, and then pan around for as long as you want. Let it play through at some point, though, so you can see the movement of the stars as well!

Do stars move that much, though, in relation to one another? The stars in our galaxy all orbit around the core, of course. Some have been found shooting across our field of view at high-speed, likely due to being ejected from their original star system. Watching the motions in the video just below, however, most of the stars appear to travel in loops, with the closer stars going through larger loops than those farther away.

These loops aren’t the actual motion of the stars themselves. Instead, they show the “apparent” motion caused by the changing perspective of the telescope as it observes the locations of the stars while it (and Earth) travels in an elliptical orbit around the Sun. This ‘parallax motion’ is extremely useful to astronomers, since it allows them to measure the distance to a star, and now – thanks to Gaia – they have the most accurate set of parallax data ever assembled.

According to the ESA:

The new data release, which covers the period between 25 July 2014 and 23 May 2016, pins down the positions of nearly 1.7 billion stars, and with a much greater precision. For some of the brightest stars in the survey, the level of precision equates to Earth-bound observers being able to spot a Euro coin lying on the surface of the Moon.
With these accurate measurements it is possible to separate the parallax of stars – an apparent shift on the sky caused by Earth’s yearly orbit around the Sun – from their true movements through the Galaxy.
The new catalogue lists the parallax and velocity across the sky, or proper motion, for more than 1.3 billion stars. From the most accurate parallax measurements, about ten per cent of the total, astronomers can directly estimate distances to individual stars.
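Both ideas in the ESA excerpt can be checked with a few lines of arithmetic: a star’s distance in parsecs is the reciprocal of its parallax in arcseconds, and the “Euro coin on the Moon” comparison corresponds to an angle of roughly ten microarcseconds. A minimal sketch (the coin diameter and the Proxima Centauri parallax are illustrative values, not taken from the article):

```python
ARCSEC_PER_RAD = 206264.8  # arcseconds in one radian

def distance_pc(parallax_mas: float) -> float:
    """Distance in parsecs from a parallax in milliarcseconds (d = 1/p)."""
    return 1000.0 / parallax_mas

# Proxima Centauri: parallax ~768 mas -> ~1.3 pc (~4.2 light-years)
d = distance_pc(768.0)
print(f"Proxima Centauri: {d:.2f} pc = {d * 3.2616:.1f} ly")

# "Euro coin on the Moon": a ~23 mm coin at the Moon's ~384,400 km distance
coin_m, moon_m = 0.023, 3.844e8
angle_arcsec = (coin_m / moon_m) * ARCSEC_PER_RAD
print(f"Coin on the Moon subtends ~{angle_arcsec * 1e6:.0f} microarcseconds")
```

That coin angle of roughly a dozen microarcseconds gives a feel for why only the brightest stars in the catalogue reach the precision the ESA describes.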

An exciting part of this data release is how it affects other missions and surveys that are exploring our galaxy, such as the Kepler Space Telescope.

Megan Bedell (@meg_bedell): “the @NASAKepler field in action with proper motions!”

According to Bedell, who is an astronomer with the Center for Computational Astrophysics, at the Flatiron Institute, in New York City, the dark blue stars in the animation above are those closest to us, while the light green ones are farthest away.

Knowing the timing of when these stars enter and leave the Kepler field of view can help astronomers better understand the data they have from the mission, which is used to locate alien planets, as they transit across the face of their star.

Stay tuned for more from Gaia! As stated above, this data only covers up until May 23, 2016. It’s been nearly two years since then, and the mission is expected to run for at least five years total (ending in July of 2019, according to the ESA), so we’ll be seeing even better, more detailed maps in the future.

Sources: ESA | ESA (Gaia Summary) | ESA (Gaia FAQ)

How much better is Gaia’s latest view of the galaxy? JUST WATCH!