https://www.healthline.com/health-news/are-cannabis-abuse-and-accelerated-brain-aging-linked#The-study

Controversial Study Links Cannabis Use to Accelerated Brain Aging
A new study claims cannabis use may cause accelerated brain aging, but experts say the findings appear to “prioritize marketing over science.”

Can cannabis use cause accelerated brain aging? Illustration by Ruth Basagoitia
A new study has identified cannabis, alcohol, and certain mental disorders as primary drivers of brain aging.

Billed as the largest known brain imaging study, utilizing more than 60,000 SPECT scans, the research looks impressive.

It also supports an enticing prospect: being able to look at images of the brain to see whether or not it’s prematurely aging.

But experts have called the research into question over the methodology, which has a long history of criticism among members of the medical community.

The study
The research was conducted by Dr. Daniel Amen, a psychiatrist who runs Amen Clinics which specializes in SPECT (single photon emission computed tomography), as well as scientists from Google, UCLA, and Johns Hopkins University.

In total, the team analyzed 62,454 SPECT scans of more than 30,000 patients ranging in age from under 1 year to 105 years old. The scans used for the study were all drawn from patients at Amen’s clinics.

“Based on one of the largest brain imaging studies ever done, we can now track common disorders and behaviors that prematurely age the brain. Better treatment of these disorders can slow or even halt the process of brain aging,” said lead author Daniel Amen.

On the surface, the concept of the study appears straightforward: like other parts of the human body, stress and strain can accelerate the aging process.

For an athlete this could show up in shoulders or knees because of years of physical activity. For someone with a history of heavy alcohol use it could be the liver.

The idea is that, at a given age, your body and organs should look and function a certain way and the brain is no different.

The effects of different conditions on the brain — such as substance use or mental disorders — can cause brains to age prematurely, resulting in lower cognitive function, declining memory, and an increased risk of Alzheimer’s disease and dementia.

Using SPECT imaging technology to measure blood perfusion (blood flow) in the brain, the researchers established a “brain estimated age,” a measure of how old the brain appeared to be, and compared it to each patient’s actual chronological age.

Blood perfusion in the brain is known to change over time, and the researchers contend that using it as a biomarker could “powerfully predict chronological age and will vary as a function of common psychiatric brain disorders.”
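
The “brain estimated age” concept amounts to a regression problem: learn to predict chronological age from perfusion measurements, then read the gap between predicted and actual age as premature aging. The sketch below illustrates only that general idea on invented synthetic data; it is not the study’s actual model or pipeline.

```python
import numpy as np

# Illustrative sketch: fit a model that predicts chronological age from
# regional blood-perfusion features, then treat the residual (predicted
# minus actual age) as a "brain age gap". All data here is synthetic.
rng = np.random.default_rng(0)

n_patients, n_regions = 500, 10
age = rng.uniform(1, 105, n_patients)              # chronological ages
true_w = rng.normal(0, 1, n_regions)               # hidden perfusion/age relation
perfusion = np.outer(age, true_w) / 100 + rng.normal(0, 0.1, (n_patients, n_regions))

# Least-squares fit: perfusion features -> age (intercept column appended)
X = np.column_stack([perfusion, np.ones(n_patients)])
coef, *_ = np.linalg.lstsq(X, age, rcond=None)

predicted_age = X @ coef
brain_age_gap = predicted_age - age   # positive gap = brain "older" than expected

print(round(float(np.abs(brain_age_gap).mean()), 2))
```

In this framing, a condition that “ages the brain” would show up as a systematically positive gap in patients who have it.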

The conditions they studied as drivers of brain aging included dementia, ADHD, major depression, bipolar disorder, generalized anxiety disorder, traumatic brain injury, schizophrenia, alcohol use disorder, and cannabis use disorder.

The findings
Of these conditions, schizophrenia contributed the most to brain aging, with an average of four years of premature aging, followed by cannabis abuse (2.8 years), bipolar disorder (1.6 years), ADHD (1.4 years), and alcohol abuse (0.6 years).

Part of what has brought attention to the study is the simple fact that cannabis use sits so high on that list.

“People sort of think of it as an innocuous drug, but that’s not what our studies or our experience is telling us,” Amen told Healthline. “The evidence I have from the world’s largest imaging database and experience over the last nearly 40 years is that it harms the brain. It decreases blood flow to the brain, it makes your brain more toxic.”

However, that assertion, like others in the study, has come under fire.

The controversy
The scientific literature on marijuana and Alzheimer’s is by no means clear cut.

Some studies have identified both THC and CBD — two of the many chemical components found within marijuana — as having potentially beneficial effects on Alzheimer’s and dementia.

Advocates disagree with how the drug is being characterized by Amen.

“There is some exciting preliminary data, based largely on animal models, that components in cannabis can be neuroprotective and may possibly hold keys to addressing the aging process in the brain and/or age-related disorders,” Paul Armentano, the deputy director of the National Organization for the Reform of Marijuana Laws (NORML), told Healthline.

“Obviously, these findings and their implications appear to be opposite of those suggested by Dr. Amen,” Armentano said.

Amen’s findings concerning the impact of marijuana use on brain aging may have been the most visible issue stemming from the study, but it’s hardly the only bone others have had to pick with him and his work.

To say that Amen has a notorious reputation within the medical community would be an understatement.

He’s been called a fraud, a snake oil salesman, and a huckster.

One expert at UCLA’s Brain Research Institute contacted by Healthline said “I have nothing to add beyond what has already been said by others,” and suggested Amen should be investigated by the medical board.

He declined to comment further.

The issue that many take with Amen is his use of SPECT. The technology itself is nothing new — it’s been around for about three decades.

It uses radioactive tracers injected into the blood, which can be used to measure blood flow in the organs of the body or help detect and diagnose coronary artery disease or abnormalities in the brain.

Amen’s work uses SPECT to look at blood perfusion and activity in the brain to assist in identifying and diagnosing mental disorders, a contentious practice that’s frowned upon by his peers.

“Psychiatrists are the only medical doctors who virtually never look at the organ that they treat,” said Amen. He makes the intuitive argument that biomarkers and functional brain imaging should be used in psychiatry.

That theory isn’t supported by many outside of a minority of mental health practitioners.

In 2012, the American Psychiatric Association issued a consensus report on the use of neuroimaging for psychiatric disorders and stated, “currently neuroimaging is not recommended within either the U.S. or the European practice guidelines for positively defining diagnosis of any primary psychiatric disorder.”

Seth J. Gillihan, PhD, a clinical psychologist and clinical assistant professor of psychology in the Department of Psychiatry at the University of Pennsylvania, has published on the topic of Amen’s work and how it’s situated within these guidelines.

He says that using brain imaging diagnostically is plausible in principle, since some mental illnesses have identifiable biological markers, but the problems preventing its implementation are manifold.

Regarding Amen’s new study, its findings are exciting, if in fact they’re true.

“There may be multiple things that affect blood flow to the brain and we shouldn’t necessarily conclude that just because something has the same correlation with blood flow to the brain that age does, that therefore that condition is causing age-related changes in the brain,” he told Healthline.

For those looking in from the outside, Amen’s work can appear baffling and pseudoscientific.

But Amen argues that others in his field don’t share the same level of expertise that he and his team do from years and years and thousands of scans using SPECT technology.

“We have a database of 150,000 scans on patients from 120 countries. When we see a scan we really have a good sense of what it means,” he told Healthline.

For many, including Gillihan, that assertion isn’t enough to justify the claims made by Amen in his research.

It’s an attractive prospect to ascertain specific knowledge on mental health conditions, but experts say the technology just isn’t capable of doing this — yet.

As for the specific claims of this study?

“The researchers seem to have ignored or dismissed obvious alternative explanations for their findings, focusing their discussion instead on promoting the usefulness of SPECT scans for determining a person’s ‘Brain Estimated Age.’ As such, this study appears to prioritize marketing over science,” said Gillihan.

https://cleantechnica.com/2018/10/07/1-highest-grossing-car-in-usa-tesla-model-3-model-y-will-crush-tesla-killers-german-wake-up-call-cleantechnica-top-20/

#1 Highest Grossing Car In USA = Tesla Model 3 … Model Y Will Crush “Tesla Killers” … German Wake-Up Call … (#CleanTechnica Top 20 In September)

October 7th, 2018 by Zachary Shahan

Last week was crazy — Tesla’s massive quarterly sales record, our first monthly #Pravduh, the Tesla SEC lawsuit settlement, and more. As a result, I never shared the most popular CleanTechnica stories of September. So, in case you are curious, here were the 20 stories in September that ruled the odd and quirky land of CleanTechnica world, which I guess is just CleanTechnica (pageviews following titles):

Tesla Model 3 = #1 Best Selling Car In The US (In Revenue) — 247,000
4 “Tesla Killers” Ready To Be Crushed By Tesla Model Y — 127,000
Tesla, An Uncomfortable Wake-Up Call For Germany. All Hands On Deck! — 105,000
Bombshell: Tesla Announcement Implies HUGE Quarter 3 — 63,000
Look Out, Jaguar, Mercedes, Audi, & BMW — Kia Niro SUV Has Better Efficiency & Range, At Half The Price! — 62,000
Remember That Time Ford Went Private? Elon Musk & Henry Ford Both Irritated By Short-Term Thinkers — 44,000
Chart: Global Shifts In EV Battery Chemistry (+ Electric Car Sales Grow 66%) — 42,000
“Golden Sandwich” Photoelectrode Harvests 85% Of Sunlight — 37,000
Tesla Reaches Out To Model 3 Crash Victim Within Minutes — 37,000
Electric Tractors Have Advantages Over Diesels — 37,000
Nissan’s Long Strange Trip With LEAF Batteries — 35,000
Over 700,000 Diesel Vehicles Must Be Recalled By Daimler — 33,000
Production Hell For Hyundai Kona Electric? — 32,000
Tesla Model 3 Performance Is A Freakin’ Race Car — Unbeatable — 29,000
US BMW 3 & 4 Series Sales Roll Down Hill — 27,000
The New MATE X Folding E-Bike Gives Commuters An Affordable, Electrified Alternative — 26,000
Tesla In 2025 Bigger Than Toyota In 2017 — Forecast — 25,000
Can Tesla Crack The Truck Market With An Electric Pickup Truck? — 25,000
Solar Diverters Can Supply Solar Hot Water From A Rooftop Array, Without Adding Any Extra Plumbing — 24,000
“The Fixer” — Tesla’s New President — 22,000
Again, you must share every single article with friends or you are officially banished from this kingdom. (You may return in 10 minutes, though.)

https://www.zdnet.com/article/firefox-and-edge-add-support-for-googles-webp-image-format/

Firefox and Edge add support for Google’s WebP image format
WebP image format gets new life courtesy of Microsoft and Mozilla. Apple is the last major browser maker without WebP support.

Catalin Cimpanu
By Catalin Cimpanu for Zero Day | October 6, 2018

The WebP image format, developed by Google over the past eight years, has found a home this week in Microsoft’s Edge browser and will also be added to Firefox next year.

WebP is a lossy and lossless image compression format that was born as a derivative project from Google’s work on the VP8 video format. It was released in 2010, and it was advertised as a replacement for PNG, JPEG, and GIF at the same time, supporting good compression, transparency, and animations.

Early benchmarks showed that WebP cut PNG file sizes by as much as 45 percent and animated GIF sizes by up to 65 percent.

The format was initially supported only by Google Chrome, but it was later also adopted by the Opera and Pale Moon browsers.

Major Google sites, such as Gmail, Google Search, Google Play, Picasa, and others were modified to use WebP, and defaulted to existing image formats if users’ browsers didn’t support it.
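
That graceful-degradation approach relies on standard HTTP content negotiation: browsers that can render WebP advertise it in their Accept request header. A minimal, hypothetical server-side sketch of the idea (the function and file names are invented):

```python
# Hypothetical sketch of WebP fallback via content negotiation: serve WebP
# only when the browser advertises support in its HTTP Accept header,
# otherwise fall back to a legacy format.
def pick_image_format(accept_header: str) -> str:
    """Return the image file to serve for a given Accept header."""
    if "image/webp" in accept_header:
        return "photo.webp"
    return "photo.jpg"   # fallback for browsers without WebP support

# A WebP-capable browser (e.g. Chrome) sends "image/webp" in Accept:
print(pick_image_format("image/webp,image/apng,image/*,*/*;q=0.8"))
# An older browser omits it:
print(pick_image_format("image/png,image/*;q=0.8,*/*;q=0.5"))
```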

But despite its early success, WebP’s spread hit a wall in 2016, when both Apple and Mozilla showed initial interest in supporting it, but later backed down.

While Mozilla tested the format in Firefox, engineers abandoned WebP a few months later, citing inconclusive internal benchmarks that didn’t show any advantage over a new optimized JPEG library it launched at the time.

Apple similarly added WebP support in iOS 10 and MacOS Sierra, but later replaced it with HEIF, an image format based on the HEVC video compression standard (also known as H.265 and MPEG-H Part 2).

Many considered Apple and Mozilla’s rejections as nails in WebP’s coffin, as no image format would be able to make it without support from major browser and OS vendors. But two years later, in the span of a week, the image format came back to life out of nowhere.

The latest version of the Edge browser that was launched this week with the Windows 10 October 2018 Update now fully supports WebP in all its glory. This is Microsoft Edge build 17763+.

Furthermore, on Friday, ZDNet’s sister site CNET also spotted new activity in WebP’s two-year-old topic on Mozilla’s bug tracker.

“Mozilla is moving forward with implementing support for WebP,” a Mozilla spokesperson confirmed to CNET yesterday. WebP will be added to Firefox for desktop and Android, but not iOS. This is because Firefox for iOS works on Safari’s WebKit engine, and not Mozilla’s Gecko.

Apple has not announced plans to support WebP, leaving Safari as the last major browser not to support it. But with WebP supported in almost all major browsers and image editing software, Apple has little choice left.

Mozilla also said that besides WebP, Firefox would get support next year for AVIF, an even better image format based on the AV1 open-source video compression format developed by Google, Cisco, Mozilla, and other tech giants, and which has recently received rave reviews from Facebook’s engineering staff.

https://www.express.co.uk/life-style/science-technology/1027427/Google-Chrome-update-brings-automatic-sign-in-feature

Google Chrome has a handy new feature but here’s why you may want to turn it off
GOOGLE Chrome has received a handy new feature that many users are surely going to appreciate, but here is why some fans of the browser may want to turn it off.
By JOSEPH CAREY
PUBLISHED: 09:00, Sun, Oct 7, 2018

Google Chrome is the Silicon Valley giant’s incredibly popular web browser.

In fact, the client is considered to be the most popular across the globe.

Google Chrome is praised for being incredibly quick, very accessible and laudably customisable.

Chrome allows for a variety of extensions to be added to the client that add a host of useful functions for users.

Google regularly updates the internet browser – these range from less noticeable tweaks to more substantial overhauls.

Recently the tech company announced it was making some changes to how signing in to Chrome works.

Now, whenever users sign in to a Google website they will automatically be logged in to Chrome.

And to display such a change, the user’s account picture will appear in the right-hand corner of the client.

Similarly, if someone signs out of a Google website, they will be logged out of Chrome too.

Explaining the change, the tech company said: “We recently made a change to simplify the way Chrome handles sign-in. Now, when you sign into any Google website, you’re also signed into Chrome with the same account.

“You’ll see your Google Account picture right in the Chrome UI, so you can easily see your sign-in status. When you sign out, either directly from Chrome or from any Google website, you’re completely signed out of your Google Account.”

And Google emphasised that signing-in on a Google website will not turn on Chrome sync.

The firm went on: “We want to be clear that this change to sign-in does not mean Chrome sync gets turned on. Users who want data like their browsing history, passwords, and bookmarks available on other devices must take additional action, such as turning on sync.

“The new UI reminds users which Google Account is signed in. Importantly, this allows us to better help users who share a single device (for example, a family computer).

“Over the years, we’ve received feedback from users on shared devices that they were confused about Chrome’s sign-in state.

“We think these UI changes help prevent users from inadvertently performing searches or navigating to websites that could be saved to a different user’s synced account.”

While the new change makes signing-in on Chrome simpler than ever, users who may want to prevent their search history from being tied to their account after using a Google website may want to turn the feature off.

And Google has acknowledged this and made the new tool incredibly easy to disable.

The firm insisted it wanted all Chrome fans to “have more control over their experience”.

In order to disable the addition, users should head to the “privacy and security” section of Chrome settings and make sure the “allow Chrome sign-in” toggle is switched off.

But Google insisted overall it had updated its interfaces to make it clear to users when they are syncing with their Google Account.

It said: “We’re updating our UIs to better communicate a user’s sync state. We want to be clearer about your sign-in state and whether or not you’re syncing data to your Google Account.

“We’re also going to change the way we handle the clearing of auth cookies. In the current version of Chrome, we keep the Google auth cookies to allow you to stay signed in after cookies are cleared. We will change this behaviour so that all cookies are deleted and you will be signed out.”

https://next.reality.news/news/snap-ceo-memo-confirms-spectacles-are-first-step-toward-ar-smartglasses-0187939/

Snap CEO Memo Confirms Spectacles Are First Step Toward AR Smartglasses

In a leaked company memo, Snap CEO (and NR30 member) Evan Spiegel has made it clear that the future of the company lies not only in augmented reality but also in hardware that enables those AR experiences.

In other words, Snap is definitely planning on getting into the smartglasses game.

The memo, as published by Cheddar, addresses the challenges the company has faced in 2018, including the Snapchat redesign that Spiegel admits was “rushed,” while setting the stage for 2019.

Image by Spectacles/YouTube
In the memo, Spiegel emphasizes leadership in augmented reality (as well as profitability) as one of the main vectors by which the company will achieve its near-term goals.

Don’t Miss: Snap Highlights Augmented Reality’s Role in Favorable 2017 Results & Optimistic 2018 Plans
“We’re focused on three core building blocks of augmented reality: understanding the world through our Snapchat camera, providing a platform for creators to build AR experiences, and investing in future hardware to transcend the smartphone,” said Spiegel.

Snap has certainly acted on its masonry metaphor in 2018. It has introduced new AR capabilities, such as speech recognition, sound recognition, and visual search, that demonstrate Snapchat’s improved world understanding. That platform for creators, of course, is Lens Studio, which the company launched late last year and iterated upon throughout 2018 with new features and Lens Explorer in Snapchat for discoverability of AR creations.

It is critical that Snap play a central role in the next transition to computing overlaid on the world… [That] means investing in Spectacles hardware as an enabler of our augmented reality platform.

— Evan Spiegel, CEO, Snap Inc.
And, with Spectacles 2, the company has hinted at the product’s role in the transition between smartphone and smartglasses.

“Throughout computing history, huge amounts of value have been created during ‘platform transitions,’ for example, the transition from mainframe to desktop, or desktop to mobile. It is critical that Snap play a central role in the next transition to computing overlaid on the world,” said Spiegel.

“That starts with unlocking the value of our platform on camera-enabled devices. It also means investing in Spectacles hardware as an enabler of our augmented reality platform. Our investment is a big bet, it’s risky, but if we are successful, it will change the trajectory of Snap and computing as a whole.”

Image by Lens Studio/YouTube
While previous reports have pointed to Snap’s work on AR-enabled Spectacles, Spiegel’s memo essentially confirms the company’s plans to launch AR-capable Spectacles in the future. It remains to be seen, however, if that means that Spectacles 3 will be the company’s first smartglasses offering or just another iteration on the path to AR wearables.

With the Magic Leap One on the market, the second generation of Microsoft’s HoloLens on the way, and reports of forthcoming smartglasses and/or AR headsets from Apple, Google, and Facebook, the race is on among the giants of the tech industry to deliver on the promise of consumer-focused augmented reality wearables.

https://www.theregister.co.uk/2018/10/05/microquasar_black_holes/

It’s over 9,000! Boffin-baffling microquasar has power that makes the LHC look like a kid’s toy

The first detection shows beams powered to over 25 trillion electron volts
By Katyanna Quach 5 Oct 2018 at 22:13

Artist’s impression of a quasar in happier times
The first microquasar we Earthlings have detected has left astrophysicists puzzled.

Microquasars are greedy black holes that gobble up material from stars hovering nearby and shoot out powerful gamma ray beams. One particular specimen, codenamed SS 433, emits two jets that have energies measuring at least 25 trillion electron volts (25 × 10¹² eV), we learned this week.

An international group of researchers believes the source of the radiation is a group of electrons, trapped in a magnetic field, with energies exceeding hundreds of trillions of electron volts. For reference, the world’s largest and most powerful particle accelerator, CERN’s Large Hadron Collider, maxes out at 14 TeV, while its Large Electron-Positron Collider once hit 209 GeV.

“What’s amazing about this discovery is that all current particle acceleration theories have difficulties explaining the observations,” said Hui Li, coauthor of a study of SS 433 published in Nature and a researcher at the Los Alamos National Laboratory in the US. “This surely calls for new ideas on particle acceleration in microquasars and black hole systems in general.”

SS 433 is roughly 15,000 light years away, and its jets don’t point directly to Earth, making it difficult to study. The team has to monitor the particles indirectly at the High-Altitude Water Cherenkov Gamma-Ray Observatory nestled in Sierra Negra, an extinct volcano in Mexico.

The observatory houses more than 300 giant water tanks, each about 24 feet in diameter. Whenever a particle from the microquasar collides with the water, a flash of blue light is emitted. Known as Cherenkov radiation, it’s produced when a particle travels through the water faster than light itself propagates in that medium.
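
The Cherenkov condition can be made concrete with a back-of-the-envelope calculation: light propagates in water at c/n, where n ≈ 1.33 is water’s refractive index, so a particle radiates only above that speed. This threshold is standard physics rather than a figure from the paper:

```python
# Cherenkov threshold: a charged particle radiates when it moves through a
# medium faster than light propagates there, i.e. faster than c/n.
C = 299_792_458          # speed of light in vacuum, m/s
N_WATER = 1.33           # refractive index of water (approximate)

light_speed_in_water = C / N_WATER
threshold_fraction = 1 / N_WATER   # minimum particle speed as a fraction of c

print(f"light in water: {light_speed_in_water:.3e} m/s")
print(f"particle must exceed {threshold_fraction:.0%} of the vacuum speed of light")
```

So any particle moving faster than roughly three quarters of the vacuum speed of light will trigger the telltale blue flash in the tanks.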

The results show that the particles in the jets are accelerated to energies higher than what can be achieved at the Large Hadron Collider and the Large Electron-Positron Collider. It’s unclear what mechanism is driving these particles to such high energies.

“The new findings improve our understanding of particle acceleration in jets of microquasars, which also sheds light on jet physics in much larger and more powerful extragalactic jets in quasars,” said Hao Zhao, coauthor of the paper and a physicist also working at the Los Alamos National Laboratory.

https://thenextweb.com/contributors/2018/10/06/we-need-to-build-ai-systems-we-can-trust/

We need to build AI systems we can trust
by ALEKSANDRA MOJSILOVIC

Today, artificial intelligence (AI) systems are routinely being used to support human decision-making in a multitude of applications. AI can help doctors to make sense of millions of patient records; farmers to determine exactly how much water each individual plant needs; and insurance companies to assess claims faster. AI holds the promise of digesting large quantities of data to deliver invaluable insights and knowledge.

Yet broad adoption of AI systems will not come from the benefits alone. Many of the expanding applications of AI may be of great consequence to people, communities, or organizations, and it is crucial that we be able to trust their output. What will it take to earn that trust?

Making sure that we develop and deploy AI systems responsibly will require collaboration among many stakeholders, including policymakers and legislators, but instrumenting AI for trust must start with science. We as technology providers have the ability — and responsibility — to develop and apply technological tools to engineer trustworthy AI systems.

I believe researchers, like myself, need to shoulder their responsibility and direct AI down the right path. That’s why I’ve outlined below how we should approach this.

Designing for trust
To trust an AI system, we must have confidence in its decisions. We need to know that a decision is reliable and fair, that it can be accounted for, and that it will cause no harm. We need assurance that it cannot be tampered with and that the system itself is secure.

Reliability, fairness, interpretability, robustness, and safety are the underpinnings of trusted AI. Yet today, as we develop new AI systems and technologies, we mostly evaluate them using metrics such as test/train accuracy, cross validation, and cost/benefit ratio.

We monitor usage and real-time performance, but we do not design, evaluate, and monitor for trust. To do so, we must start by defining the dimensions of trusted AI as scientific objectives, and then craft tools and methodologies to integrate them into the AI solution development process.

We must learn to look beyond accuracy alone and to measure and report the performance of the system along each of these dimensions. Let’s take a closer look at four major parts of the engineering “toolkit” we have at our disposal to instrument AI for trust.

1. Fairness
The issue of bias in AI systems has received enormous attention recently, in both the technical community and the general public. If we want to encourage the adoption of AI, we must ensure that it does not take on our biases and inconsistencies, and then scale them more broadly.

The research community has made progress in understanding how bias affects AI decision-making and is creating methodologies to detect and mitigate bias across the lifecycle of an AI application: training models; checking data, algorithms, and service for bias; and handling bias if it is detected. While there is much more to be done, we can begin to incorporate bias checking and mitigation principles when we design, test, evaluate, and deploy AI solutions.
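
The kind of bias check described above can be as simple as comparing favorable-decision rates across groups, a metric often called demographic parity. A minimal sketch with invented data:

```python
# Demographic parity check (one common fairness metric): compare the rate of
# favorable model decisions across two groups. A gap near zero suggests the
# model treats the groups similarly on this metric. Data is invented.
def demographic_parity_diff(decisions, groups, group_a, group_b):
    """Difference in favorable-decision rates between two groups."""
    def rate(g):
        picked = [d for d, grp in zip(decisions, groups) if grp == g]
        return sum(picked) / len(picked)
    return rate(group_a) - rate(group_b)

decisions = [1, 0, 1, 1, 0, 1, 0, 0]        # 1 = favorable outcome
groups    = ["a", "a", "a", "a", "b", "b", "b", "b"]
gap = demographic_parity_diff(decisions, groups, "a", "b")
print(gap)   # group a favored at 0.75 vs 0.25 for group b
```

A check like this can run alongside accuracy metrics at design, test, and deployment time, which is exactly the lifecycle integration the paragraph describes.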

2. Robustness
When it comes to large datasets, neural nets are the tool of choice for AI developers and data scientists. While deep learning models can exhibit super-human classification and recognition abilities, they can be easily fooled into making embarrassing and incorrect decisions by the addition of a small amount of noise, often imperceptible to a human.

Exposing and fixing vulnerabilities in software systems is something the technical community has been dealing with for a while, and the effort carries over into the AI space.

Recently, there has been an explosion of research in this area: new attacks and defenses are continually identified, and new adversarial training methods to strengthen models against attack and new metrics to evaluate robustness are being developed. We are approaching a point where we can start integrating them into generic AI DevOps processes to protect and secure realistic, production-grade neural nets and the applications built around them.
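
The “small amount of noise” phenomenon can be illustrated on a toy linear classifier; the perturbation below steps against the sign of the model’s weights, the same intuition behind the well-known fast gradient sign method (FGSM). This is a sketch of the mechanics only, not an attack on a real deep net:

```python
import numpy as np

# Toy adversarial example: a linear classifier scores input x positive, but a
# tiny per-coordinate nudge in the direction that hurts the score most
# (the sign of the weights) flips the decision.
rng = np.random.default_rng(1)
w = rng.normal(size=100)            # classifier weights
x = np.sign(w) * 0.02               # an input the model scores positive

score = float(w @ x)                # > 0: classified "positive"
eps = 0.05                          # perturbation budget per coordinate
x_adv = x - eps * np.sign(w)        # small nudge against the weights
adv_score = float(w @ x_adv)        # score driven negative: prediction flips

print(score > 0, adv_score < 0)
```

Each coordinate moves by only 0.05, yet the classification flips, which is the core of why robustness needs to be measured and defended explicitly.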

3. Explaining algorithmic decisions
Another issue that has been on the forefront of the discussion recently is the fear that machine learning systems are “black boxes,” and that many state-of-the-art algorithms produce decisions that are difficult to explain.

A significant body of new research work has proposed techniques to provide interpretable explanations of black-box models without compromising their accuracy. These include local and global interpretability techniques of models and their predictions, the use of training techniques that yield interpretable models, visualizing information flow in neural nets, and even teaching explanations.

We must incorporate these techniques into AI model development and DevOps workflows to provide diverse explanations to developers, enterprise engineers, users, and domain experts.
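
One simple family of such explanation techniques perturbs each input feature and measures how much the model’s output moves. The sketch below uses an invented stand-in model purely to show the mechanics; real explainers (local surrogate models, attribution methods) are more sophisticated:

```python
# Perturbation-based local explanation sketch: nudge each feature by a fixed
# delta and record how much the model's output changes. The "model" here is a
# hypothetical stand-in for illustration.
def model(features):
    # Invented scoring rule: income matters most, age a little.
    return 0.8 * features["income"] + 0.1 * features["age"]

def feature_importance(model, features, delta=1.0):
    """Output change per feature when that feature is nudged by delta."""
    base = model(features)
    importances = {}
    for name in features:
        nudged = dict(features, **{name: features[name] + delta})
        importances[name] = model(nudged) - base
    return importances

imp = feature_importance(model, {"income": 5.0, "age": 3.0})
print({k: round(v, 3) for k, v in imp.items()})
```

An explanation like “income drove this score eight times more than age” is the kind of output a developer or domain expert can actually act on.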

4. Safety
Human trust in technology is based on our understanding of how it works and our assessment of its safety and reliability. We drive cars trusting the brakes will work when the pedal is pressed. We undergo eye laser surgery trusting the system to make the right decisions.

In both cases, trust comes from confidence that the system will not make a mistake, thanks to system training, exhaustive testing, experience, safety measures and standards, best practices and consumer education. Many of these principles of safety design apply to the design of AI systems; some will have to be adapted, and new ones will have to be defined.

For example, we could design AI to require human intervention if it encounters completely new situations in complex environments. And, just as we use safety labels for pharmaceuticals and foods, or safety datasheets in computer hardware, we may begin to see similar approaches for communicating the capabilities and limitations of AI services or solutions.

Evolving AI in an agile and open way
Every time a new technology is introduced, it creates new challenges, safety issues, and potential hazards. As the technology develops and matures, these issues are better understood and gradually addressed.

For example, when pharmaceuticals were first introduced, there were no safety tests, quality standards, childproof caps, or tamper-resistant packages. AI is a new technology and will undergo a similar evolution.

Recent years have brought extraordinary advances in terms of technical AI capabilities. The race to develop better, more powerful AI is underway. Yet our efforts cannot be solely directed towards making impressive AI demonstrations. We should invest in capabilities that will make AI not just smart, but also responsible.

As we move forward, I believe researchers, engineers, and designers of AI technologies should be working with users, stakeholders, and experts from a range of disciplines to understand their needs, to continually assess the impact and implications of algorithmic decision-making, to share findings, results and ideas, and address issues proactively, in an open and agile way. Together, we can create AI solutions that inspire confidence.

https://singularityhub.com/2018/10/07/ais-kicking-space-exploration-into-hyperdrive-heres-how/

AI Is Kicking Space Exploration into Hyperdrive—Here’s How
By Marc Prosser and Jovan David Rebolledo – Oct 07, 2018

Artificial intelligence in space exploration is gathering momentum. Over the coming years, new missions look likely to be turbo-charged by AI as we voyage to comets, moons, and planets and explore the possibilities of mining asteroids.

“AI is already a game-changer that has made scientific research and exploration much more efficient. We are not just talking about a doubling but about a multiple of ten,” Leopold Summerer, Head of the Advanced Concepts and Studies Office at ESA, said in an interview with Singularity Hub.

Examples Abound
The history of AI and space exploration is older than many probably think. It has already played a significant role in research into our planet, the solar system, and the universe. As computer systems and software have developed, so have AI’s potential use cases.

The Earth Observer 1 (EO-1) satellite is a good example. Since its launch in the early 2000s, its onboard AI systems have helped optimize the analysis of and response to natural events like floods and volcanic eruptions. In some cases, the AI was able to tell EO-1 to start capturing images before the ground crew was even aware that the event had taken place.

Other satellite and astronomy examples abound. The Sky Image Cataloging and Analysis Tool (SKICAT) has assisted with the classification of objects discovered during the second Palomar Sky Survey, classifying thousands of faint, low-resolution objects that a human would not have been able to process. Similar AI systems have helped astronomers identify 56 new possible gravitational lenses, which play a crucial role in research into dark matter.

AI’s ability to trawl through vast amounts of data and find correlations will become increasingly important for getting the most out of the available data. ESA’s ENVISAT produces around 400 terabytes of new data every year, but it will be dwarfed by the Square Kilometre Array, which is expected to produce roughly as much data in a single day as currently exists on the entire internet.

AI Readying For Mars
AI is also being used for trajectory and payload optimization. Both are important preliminary steps to NASA’s next rover mission to Mars, the Mars 2020 Rover, which is, slightly ironically, set to land on the red planet in early 2021.

An AI known as AEGIS is already on the red planet onboard NASA’s current rovers. The system can handle autonomous targeting of cameras and choose what to investigate. However, the next generation of AIs will be able to control vehicles, autonomously assist with study selection, and dynamically schedule and perform scientific tasks.

Throughout his career, John Leif Jørgensen from DTU Space in Denmark has designed equipment and systems that have been on board about 100 satellites—and counting. He is part of the team behind the Mars 2020 Rover’s autonomous scientific instrument PIXL, which makes extensive use of AI. Its purpose is to investigate whether there have been lifeforms like stromatolites on Mars.

“PIXL’s microscope is situated on the rover’s arm and needs to be placed 14 millimetres from what we want it to study. That happens thanks to several cameras placed on the rover. It may sound simple, but the handover process and finding out exactly where to place the arm can be likened to identifying a building from the street from a picture taken from the roof. This is something that AI is eminently suited for,” he said in an interview with Singularity Hub.

AI also helps PIXL operate autonomously throughout the night and continuously adjust as the environment changes—the temperature changes between day and night can be more than 100 degrees Celsius, meaning that the ground beneath the rover, the cameras, the robotic arm, and the rock being studied all keep changing distance.

“AI is at the core of all of this work, and helps almost double productivity,” Jørgensen said.

First Mars, Then Moons
Mars is likely far from the final destination for AI in space. Jupiter’s moons have long fascinated scientists, especially Europa, which could house a subsurface ocean buried beneath an ice crust roughly 10 km thick. It is one of the most likely candidates for finding life elsewhere in the solar system.

While that mission may be some time in the future, NASA is currently planning to launch the James Webb Space Telescope into an orbit around 1.5 million kilometers from Earth in 2020. Part of the mission will involve AI-empowered autonomous systems overseeing the full deployment of the telescope’s 705-kilogram mirror.

The distances between Earth and Europa, or Earth and the James Webb telescope, mean a delay in communications. That, in turn, makes it imperative for the craft to be able to make their own decisions. Experience from the Mars rover missions shows that communication between a rover and Earth can take 20 minutes because of the vast distance. A Europa mission would face much longer communication times.
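The delays quoted above are essentially light-travel time. A quick back-of-the-envelope check makes the figures concrete (the distances used here are approximate round numbers; the Earth-Mars separation in particular varies widely over each orbit):

```python
# One-way signal delay at light speed for several distances mentioned above.
C_KM_PER_S = 299_792.458  # speed of light in vacuum, km/s

def one_way_delay_minutes(distance_km: float) -> float:
    """Return the one-way light-travel time in minutes."""
    return distance_km / C_KM_PER_S / 60.0

# Earth-Mars at a wide separation (~400 million km): about 22 minutes,
# consistent with the ~20-minute rover figure quoted above.
print(f"Mars (wide separation): {one_way_delay_minutes(400e6):.1f} min")

# James Webb Space Telescope at ~1.5 million km: only a few seconds.
print(f"JWST orbit:             {one_way_delay_minutes(1.5e6) * 60:.1f} s")

# Jupiter/Europa at ~775 million km (rough average): over 40 minutes.
print(f"Europa (average):       {one_way_delay_minutes(775e6):.1f} min")
```

At Europa distances, a simple command-and-acknowledge exchange takes well over an hour, which is why onboard autonomy stops being a convenience and becomes a requirement.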

Both missions, to varying degrees, illustrate one of the most significant challenges currently facing the use of AI in space exploration. There tends to be a direct correlation between how well AI systems perform and how much data they have been fed. The more, the better, as it were. But we simply don’t have very much data to feed such a system about what it’s likely to encounter on a mission to a place like Europa.

Computing power presents a second challenge. A strenuous, time-consuming approval process and the risk of radiation damage mean that your computer at home is likely more powerful than anything going into space in the near future. A 200 MHz processor, 256 megabytes of RAM, and 2 gigabytes of storage sound a lot more like a Nokia 3210 (the one you could use as an ice hockey puck without it noticing) than an iPhone X, but that is actually the ‘brain’ that will be onboard the next rover.

Private Companies Taking Off
Private companies are helping to push past those limitations. CB Insights charts 57 startups in the space-space, covering areas as diverse as natural resources, consumer tourism, R&D, satellites, spacecraft design and launch, and data analytics.

David Chew works as an engineer for the Japanese satellite company Axelspace. He explained how private companies are pushing the speed of exploration and lowering costs.

“Many private space companies are taking advantage of fall-back systems and finding ways of using parts and systems that traditional companies have thought of as non-space-grade. By implementing fall-backs, and using AI, it is possible to integrate and use parts that lower costs without adding risk of failure,” he said in an interview with Singularity Hub.

Terraforming Our Future Home
Further into the future, moonshots like terraforming Mars await. Without AI, these kinds of projects to adapt other planets to Earth-like conditions would be impossible.

Autonomous crafts are already terraforming here on Earth. BioCarbon Engineering uses drones to plant up to 100,000 trees in a single day. Drones first survey and map an area, then an algorithm decides the optimal locations for the trees before a second wave of drones carry out the actual planting.
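The survey-then-plant pipeline described above can be pictured with a small sketch. The code below is purely hypothetical and is not BioCarbon Engineering's actual algorithm: it assumes the mapping pass yields a suitability score per grid cell, then greedily picks planting spots in score order while enforcing a minimum spacing between trees.

```python
# Hypothetical placement step: pick high-suitability cells with spacing.
# (Illustrative sketch only; not BioCarbon Engineering's real system.)
from math import dist

def choose_planting_spots(scores, min_spacing):
    """scores: dict mapping (x, y) grid cell -> suitability score.
    Returns chosen cells, best-scoring first, at least min_spacing apart."""
    chosen = []
    for cell, _score in sorted(scores.items(), key=lambda kv: -kv[1]):
        # Accept a cell only if it is far enough from every chosen spot.
        if all(dist(cell, c) >= min_spacing for c in chosen):
            chosen.append(cell)
    return chosen

# Toy survey: four candidate cells scored by the mapping pass.
survey = {(0, 0): 0.9, (0, 1): 0.8, (3, 0): 0.7, (5, 5): 0.6}
print(choose_planting_spots(survey, min_spacing=2.0))
# -> [(0, 0), (3, 0), (5, 5)]: (0, 1) is rejected as too close to (0, 0)
```

The second wave of drones would then fly the chosen coordinates and fire seed pods at each one.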

As is often the case with exponential technologies, there is a great potential for synergies and convergence. For example with AI and robotics, or quantum computing and machine learning. Why not send an AI-driven robot to Mars and use it as a telepresence for scientists on Earth? It could be argued that we are already in the early stages of doing just that by using VR and AR systems that take data from the Mars rovers and create a virtual landscape scientists can walk around in and make decisions on what the rovers should explore next.

One of the biggest benefits of AI in space exploration may not have that much to do with its actual functions. Chew believes that within as little as ten years, we could see the first mining of asteroids in the Kuiper Belt with the help of AI.

“I think one of the things that AI does to space exploration is that it opens up a whole range of new possible industries and services that have a more immediate effect on the lives of people on Earth,” he said. “It becomes a relatable industry that has a real effect on people’s daily lives. In a way, space exploration becomes part of people’s mindset, and the border between our planet and the solar system becomes less important.”

https://www.thestar.com/news/world/2018/10/06/the-mushroom-dream-of-a-long-haired-hippie-could-help-save-the-worlds-bees.html

The mushroom dream of a ‘long-haired hippie’ could help save the world’s bees
By Evan Bush, The Seattle Times
Sat., Oct. 6, 2018

SEATTLE—The epiphany that mushrooms could help save the world’s ailing bee colonies struck Paul Stamets while he was in bed.

“I love waking dreams,” he said. “It’s a time when you’re just coming back into consciousness.”

Paul Stamets, a renowned expert on mushrooms, nurtures fungi near his home in Shelton, Wash., in a 2010 file image. (JOHN LOK/SEATTLE TIMES / TNS)

Back in 1984, Stamets had noticed a “continuous convoy of bees” travelling between a patch of mushrooms he was growing and his beehives. The bees actually moved wood chips to access his mushroom’s mycelium, the branching fibres of fungus that look like cobwebs.

“I could see them sipping on the droplets oozing from the mycelium,” he said. They were after its sugar, he thought.

Decades later, he and a friend began a conversation about bee colony collapse that left Stamets, the owner of a mushroom mercantile, puzzling over a problem. Bees across the world have been disappearing at an alarming rate. Parasites like mites, fast-spreading viruses, agricultural chemicals and lack of forage area have stressed and threatened wild and commercial bees alike.

Waking up one morning, “I connected the dots,” he said. “Mycelium have sugars and antiviral properties,” he said. What if it wasn’t just sugar that was useful to those mushroom-suckling bees so long ago?

In research published Thursday in the journal Scientific Reports, Stamets turned intuition into reality. The paper describes how bees given a small amount of his mushroom mycelia extract exhibited remarkable reductions in the presence of viruses associated with parasitic mites that have been attacking, and infecting, bee colonies for decades.

In the late 1980s, tiny Varroa mites began to spread through bee colonies in the United States. The mites, parasites that can infect bees with viruses, proliferate easily and can cause colony collapse within just a few years.

Over time, colonies have become even more susceptible, and viruses became among the chief threats to the important pollinators for crops on which people rely.

“We think that’s because the viruses have evolved and become pathogenic and virulent,” said Dennis vanEngelsdorp, a University of Maryland professor in entomology, who was not involved in the mycelium research. “Varroa viruses kill most of the colonies in the country.”

He likened the mites to dirty hypodermic needles; the mites are able to spread viruses from bee to bee.

The only practical solution to date has been to keep the number of Varroa mites within beehives “at manageable populations.”

Stamets’ idea about bee-helping mycelium could give beekeepers a powerful new weapon.

At first, mushrooms were a hard sell.

When Stamets, whose fascination with fungi began with “magic mushrooms” when he was a “long-haired hippie” undergraduate at The Evergreen State College, began reaching out to scientists, some laughed him off.

“I don’t have time for this. You sound kind of crazy. I’m gonna go,” he recalled a California researcher telling him. “It was never good to start a conversation with scientists you don’t know saying, ‘I had a dream.’”

When Steve Sheppard, a Washington State University entomology professor, received a call in 2014 from Stamets, however, he listened.

Sheppard has heard a lot of wild ideas to save bees over the years, like harnessing static electricity to stick bees with little balls of Styrofoam coated in mite-killing chemicals. Stamets’ pitch was different: He had data to back up his claims about mycelium’s antiviral properties and his company, Fungi Perfecti, could produce it in bulk. “I had a compelling reason to look further,” Sheppard said.

Together with other researchers, the unlikely pair have produced research that opens promising and previously unknown doors in the fight to keep bee colonies from collapsing.

“This is a pretty novel approach,” vanEngelsdorp said. “There’s no scientist who believes there’s a silver bullet for bee health. There’s too many things going on. … This is a great first step.”

To test Stamets’ theory, the researchers conducted two experiments: They separated two groups of mite-exposed bees into cages, feeding one group sugar syrup with a mushroom-based additive and the other, syrup without the additive. They also field-tested the extract in small, working bee colonies near WSU.
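The caged-bee experiment above is a classic two-group comparison: one group gets syrup with the additive, the other plain syrup. As a rough illustration with entirely synthetic numbers (not the paper's data and not its actual statistical analysis), a simple permutation test can gauge whether a difference in mean virus load between two such groups is larger than chance alone would produce:

```python
# Generic two-group comparison via permutation test (synthetic data only).
import random

def permutation_test(treated, control, n_perm=10_000, seed=0):
    """Two-sided permutation p-value for the difference in group means."""
    rng = random.Random(seed)
    observed = abs(sum(treated) / len(treated) - sum(control) / len(control))
    pooled = list(treated) + list(control)
    hits = 0
    for _ in range(n_perm):
        # Shuffle labels: any split is equally likely under the null.
        rng.shuffle(pooled)
        a, b = pooled[:len(treated)], pooled[len(treated):]
        if abs(sum(a) / len(a) - sum(b) / len(b)) >= observed:
            hits += 1
    return hits / n_perm

# Hypothetical virus-load readings (arbitrary units), NOT from the study.
treated_bees = [0.2, 0.1, 0.3, 0.2, 0.1, 0.4]  # syrup + mushroom extract
control_bees = [1.8, 2.1, 1.5, 2.4, 1.9, 2.0]  # plain syrup
print(permutation_test(treated_bees, control_bees))  # small p-value
```

A separation this clean yields a p-value near zero, i.e. a difference very unlikely to arise from random assignment alone.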

For several virus strains, the extract “reduced the virus to almost nothing,” said Brandon Hopkins, a WSU assistant research professor, another author of the paper.

The promising results have opened the door to new inquiries.

Researchers are still trying to figure out how the mushroom extract works. The compound could be boosting bees’ immune systems, making them more resistant to the virus. Or, the compound could be targeting the viruses themselves.

“We don’t know what’s happening to cause the reduction. That’s sort of our next step,” Sheppard said.

Because the extract can be added to the syrups commercial beekeepers commonly use, researchers say it could be a practical solution that scales quickly.

For now, they are conducting more research. On Wednesday, Hopkins and Sheppard spent the day setting up experiments at more than 300 commercial colonies in Oregon.

Meanwhile, Stamets has designed a 3D-printable feeder that delivers mycelia extract to wild bees. He plans to launch the feeder, along with an extract-subscription service, to the public next year.

Stamets said he hopes his fungus extract can forestall the crisis of a world without many of its creatures, including bees. He is alarmed at how fast species are going extinct.

“The loss of biodiversity has ramifications that reverberate throughout the food web,” he said, likening each species to the rivets of an airplane that hold the Earth together, until they don’t.

“What rivet will we lose that we’ll have catastrophic failure? I think the rivet will be losing the bees,” he said. “More than one-third of our food supply is dependent on bees.”

https://dailygalaxy.com/2018/10/quantum-life-scientists-create-quantum-artificial-life-for-the-first-time/

“Quantum Life” –Scientists Create ‘Quantum Artificial Life’ For the First Time
Posted on Oct 6, 2018

Exciting new research may eventually help answer the question of whether the origin of life can be explained by quantum mechanics. It offers a new approach to one of the most enduring unsolved mysteries in science: how does life emerge from inert matter, such as the “primordial soup” of organic molecules that once existed on Earth?

For the first time, individual living organisms, represented at a microscopic level with superconducting qubits on a quantum computer, were made to “mate,” interact with their environment, and “die,” modelling some of the major factors that influence evolution.

“The goal of the proposed model is to reproduce the characteristic processes of Darwinian evolution, adapted to quantum algorithms and quantum computing,” reports Science Alert. To do this, the researchers used the IBM QX4, a five-qubit quantum computer accessible through the cloud. Quantum computers use qubits, whose information value can be a combination of both one and zero. This property, known as superposition, means that large-scale quantum computers will have vastly more information-processing power than classical computers.
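The superposition property mentioned above can be seen with a few lines of arithmetic. In this minimal numeric sketch (unconnected to the IBM QX4 itself), a qubit's state is a two-component amplitude vector, and applying a Hadamard gate to the definite state |0⟩ yields equal probabilities of measuring 0 or 1:

```python
# A single qubit as an amplitude pair (a, b): measurement gives 0 with
# probability |a|^2 and 1 with probability |b|^2.
import math

def hadamard(state):
    """Apply a Hadamard gate to a single-qubit state (a, b)."""
    a, b = state
    s = 1 / math.sqrt(2)
    return (s * (a + b), s * (a - b))

def probabilities(state):
    """Measurement probabilities for outcomes 0 and 1."""
    a, b = state
    return (abs(a) ** 2, abs(b) ** 2)

zero = (1.0, 0.0)            # the definite state |0>
superposed = hadamard(zero)  # (|0> + |1>) / sqrt(2)
print(tuple(round(p, 6) for p in probabilities(superposed)))
# -> (0.5, 0.5): the qubit is "both one and zero" until measured
```

It is this ability to hold and process many amplitudes at once, across many entangled qubits, that underlies the claimed processing advantage.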

The researchers, led by Enrique Solano from the University of the Basque Country in Spain, coded units of quantum life made up of two qubits (the basic units of quantum information): one to represent the genotype (the genetic code passed between generations) and one to represent the phenotype (the outward manifestation of that code, or the “body”). These units were programmed to reproduce, mutate, evolve, and die, in part using quantum entanglement, just as any real living being would.

The new research, published in Scientific Reports on Thursday, is a breakthrough that may eventually help answer the question of whether the origin of life can be explained by quantum mechanics, a theory of physics that describes the universe in terms of the interactions between subatomic particles.

This quantum algorithm simulated major biological processes, such as self-replication, mutation, interaction between individuals, and death, at the level of qubits. The end result was an accurate simulation of the evolutionary processes that play out at the microscopic level, with life, a complex macroscopic feature, emerging from inanimate matter. Individuals were represented in the model using two qubits: one represented the individual’s genotype, the genetic code behind a certain trait, and the other its phenotype, the physical expression of that trait.

To model self-replication, the algorithm copied the expectation value (the probability-weighted average of all possible measurement outcomes) of the genotype to a new qubit through entanglement, a process that correlates qubits so that the state of one cannot be described independently of the other. To account for mutations, the researchers encoded random qubit rotations into the algorithm, which were applied to the genotype qubits.

The algorithm then modeled the interaction between the individual and its environment, which represented aging and eventually death by taking the new genotype from the self-replicating action in the previous step and transferring it to another qubit via entanglement. The new qubit represented the individual’s phenotype. The lifetime of the individual depends on the information coded in this phenotype.

Finally, these individuals interacted with one another, requiring four qubits (two genotypes and two phenotypes), but the phenotypes only interacted and exchanged information if they met certain criteria as coded in their genotype qubits. The interaction produced a new individual and the process began again. In total, the researchers repeated this process more than 24,000 times.
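The replicate-express-mutate cycle described above can be caricatured classically. The sketch below is a loose analogy under stated assumptions, not the authors' actual quantum circuit: a single angle stands in for each genotype qubit, its Z-expectation (cos θ for the state cos(θ/2)|0⟩ + sin(θ/2)|1⟩) is "copied" to the phenotype at birth, a small Gaussian rotation stands in for the mutation step, and lifetime is read off the phenotype.

```python
# Classical caricature of the quantum-life cycle (hypothetical mapping).
import math
import random

rng = random.Random(42)

def z_expectation(theta):
    """<Z> for the single-qubit state cos(t/2)|0> + sin(t/2)|1>."""
    return math.cos(theta)

def replicate(genotype_theta, mutation_scale=0.1):
    """Child genotype: parent angle plus a small random rotation."""
    return genotype_theta + rng.gauss(0.0, mutation_scale)

def lifetime(phenotype_expectation, scale=10.0):
    """Map the phenotype's <Z> in [-1, 1] to a nonnegative lifetime."""
    return scale * (phenotype_expectation + 1.0) / 2.0

# Run a short lineage: express phenotype, age and die, replicate with mutation.
theta = 0.3  # founder genotype
for generation in range(5):
    phenotype = z_expectation(theta)  # "birth": copy <Z> to the phenotype
    print(generation, round(lifetime(phenotype), 2))
    theta = replicate(theta)          # mutated genotype for the next generation
```

In the real experiment these steps were implemented as entangling gates on qubit pairs, and the interaction step additionally mixed information between pairs of individuals.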

“Our quantum individuals are driven by an adaptation effort along the lines of a quantum Darwinian evolution, which effectively transfer the quantum information through generations of larger multi-qubit entangled states,” the researchers wrote.

While the computing technology needed to achieve so-called “quantum supremacy” isn’t quite there yet, the work of Solano and his colleagues could eventually lead to quantum computers that can autonomously model evolution without first being fed a human-designed algorithm.

“We leave open the question whether the origin of life is genuinely quantum mechanical,” explain the researchers.

“What we prove here is that microscopic quantum systems can efficiently encode quantum features and biological behaviors, usually associated with living systems and natural selection.”