https://www.theguardian.com/science/2018/apr/26/gene-map-for-depression-sparks-hopes-of-new-generation-of-treatments

‘Gene map for depression’ sparks hopes of new generation of treatments

A 200-strong team of researchers from across the globe have mapped the genetic variants that increase the risk of depression

Scientists hope that with the gene map complete they will now be able to understand why depression strikes some people and not others


Scientists have raised hopes for more effective treatments for depression, a condition that affects over 300 million people globally, after mapping out the genetic foundations of the mental disorder in unprecedented detail.

In the world’s largest investigation into the impact of DNA on the mental disorder, more than 200 researchers identified 44 gene variants that raise the risk of depression. Of those, 30 have never been connected to the condition before.

By tripling the number of gene regions linked to depression, scientists now hope to understand more about why the disorder strikes some but not others, even when they have similar life experiences. The work could also help in the search for drugs to treat the condition, which affects as many as one in four people over a lifetime.

“If you have a lower genetic burden of depression, perhaps you are more resistant to the stresses we all experience in life,” said Cathryn Lewis, professor of statistical genetics and a senior author on the study at King’s College London.

Previous work with twins suggests that genetics explains about 40% of depression, with the rest being driven by other biological factors and life experiences. If people are ranked according to the number of genetic risk factors for depression they carry, those in the top 10% are two-and-a-half times more likely to experience depression than those in the bottom 10%, Lewis said.

While the scientists found 44 gene variants linked to depression, these are only a small fraction of the total, because many more will have had too small an effect to be discovered in the latest study. “We know that thousands of genes are involved in depression with each having a very modest effect on a person’s risk,” said Lewis. “There is certainly no single gene for depression.”
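
The ranking Lewis describes is essentially a polygenic risk score: each variant nudges risk by a small amount, and the nudges are added up for each person. The sketch below is a toy illustration of that arithmetic only; the variant names, weights and genotypes are invented, not taken from the study.

    # Toy polygenic risk score: sum of (risk-allele count x per-variant weight).
    # Variant names, weights and genotypes are invented for illustration only.
    effect_sizes = {"variant_A": 0.04, "variant_B": 0.02, "variant_C": 0.03}

    # Genotype = number of risk alleles (0, 1 or 2) a person carries at each variant.
    people = {
        "person_1": {"variant_A": 2, "variant_B": 1, "variant_C": 0},
        "person_2": {"variant_A": 0, "variant_B": 0, "variant_C": 1},
    }

    def polygenic_score(genotype):
        """Weighted sum of risk alleles across all variants."""
        return sum(effect_sizes[v] * count for v, count in genotype.items())

    for name, genotype in people.items():
        print(name, round(polygenic_score(genotype), 3))

Ranking a population by a score like this and comparing the top and bottom 10% gives the kind of two-and-a-half-fold difference in risk that Lewis describes.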

Clinical depression is a debilitating condition whose causes are still largely unknown. According to the World Health Organization, it is the leading cause of disability worldwide, costing the global economy as much as $1tn annually, with no country immune.

Sufferers can experience a range of “losses” – of appetite, mood, sleep, concentration, love, joy, enthusiasm, energy and serenity. As many as 3% of people with major depressive disorder attempt suicide.

But there are few new treatments in the pipeline, as big pharmaceutical companies have largely withdrawn from expensive research into the next generation of antidepressants.

In the study, the researchers pooled seven separate datasets from the UK, the US, Iceland and Denmark, to glean genetic information on 135,000 people who reported having depression, and 345,000 mentally healthy individuals. The scientists then compared DNA across the groups to find gene variants that were more common in those with depression.
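
Conceptually, that comparison is a case-control association test run separately at each variant: count how often the variant turns up in the depression group versus the healthy group, and ask whether the difference is bigger than chance would allow. Below is a minimal sketch for a single hypothetical variant; the counts are invented and are not the study's data.

    # Case-control allele-count comparison for one hypothetical variant.
    # Rows: cases (reported depression) and controls; columns: risk allele vs other allele.
    from scipy.stats import chi2_contingency

    counts = [
        [41_000, 229_000],   # cases: risk-allele count, other-allele count (invented)
        [95_000, 595_000],   # controls: risk-allele count, other-allele count (invented)
    ]

    chi2, p_value, dof, expected = chi2_contingency(counts)
    print(f"chi-squared = {chi2:.1f}, p = {p_value:.3g}")

Genome-wide studies repeat a test of this kind at millions of variants, which is why they demand an extremely strict significance threshold (commonly p < 5e-8) before calling a hit.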

The work, published in Nature Genetics, revealed a substantial overlap in the genetics that underpins depression and other mental disorders such as anxiety, schizophrenia and bipolar disorder, but also body mass index, where DNA that predisposes people to obesity also raises the risk of depression.

As expected, many of the genes reported in the study have a role in how neurons grow, operate and send signals around the brain, where two regions known as the prefrontal cortex and the anterior cingulate cortex are the most important for depression.

Gerome Breen, a co-author on the paper, said that some of the gene variants they found are linked to neurotransmitters such as serotonin, which existing antidepressants work on. But other gene variants point to new biological mechanisms that the next generation of drugs might target.

“What we’ve had in recent decades is a shortage of new mechanisms that underlie depression and psychiatric disorders,” he said. “The hope is that in new data we identify new processes that can be targeted by newly developed types of drugs, which have different mechanisms of action to existing medications.”

It will take more research to confirm that the gene variants found in the study are really linked to depression. Many of the participants involved in the research self-reported depression, which is far less reliable than a clinical diagnosis. This means that some of the gene variants the scientists link to depression could turn out not to be involved in the disorder.

Jonathan Flint, who studies the genetics of depression at the University of California, Los Angeles, said: “Our current treatments for depression are relatively ineffective – roughly speaking, only about half of patients improve – so we really need better therapies. To discover new treatments and to deliver the ones we have more effectively, we need a better understanding of what causes depression. Finding genetic risk variants is a way to do just that – the risk variants point to genes that are involved in the disease, and thus provide clues to how depression arises.”


http://fortune.com/2018/04/26/itunes-microsoft-store/

iTunes Finally Debuts in the Microsoft Store

By LISA MARIE SEGARRA

6:27 PM EDT

Apple and Microsoft have put aside their rivalry by introducing an iTunes app for certain PCs in the Microsoft Store.

The new app is available through Microsoft’s app store to download on PCs running Windows 10.

Last year, Microsoft announced that an iTunes app would be available in the Microsoft Store starting at the end of that year. It is now making good on that promise—albeit delayed—with an app that gives users the ability to buy music, TV shows, and films, and to listen to the Apple Music streaming service.

The new app is similar to what has long been available to most users of both Macs and PCs (PC users previously had to go to Apple’s website to download it). The move is most significant for users of Windows 10 S, a more streamlined version of Windows 10 meant for low-cost PCs and laptops used in classrooms; those users could not run iTunes because their operating system only lets them install apps available in the Microsoft Store. Windows 10 S was released in May 2017, around the same time that plans for the iTunes app for the Microsoft Store were first announced.

Previously, iTunes was the most searched-for app in the Microsoft Store that wasn’t actually available through the service, according to The Verge.

https://news.ubc.ca/2018/04/26/mary-ellen-turpel-lafond-joins-ubc/

Mary Ellen Turpel-Lafond joins UBC as head of residential school centre and professor of law

UNIVERSITY NEWS

Mary Ellen Turpel-Lafond, a renowned Indigenous Canadian judge, lawyer and advocate for children and Indigenous restorative justice, has joined UBC as the inaugural director of the Indian Residential School History and Dialogue Centre (IRSHDC) and as a professor with the Peter A. Allard School of Law.

Turpel-Lafond, or Aki-kwe, is Cree and Scottish with kinship ties in First Nations in both Saskatchewan and Manitoba. She is recognized internationally for her pioneering work as British Columbia’s first Representative for Children and Youth. During her decade of advocacy for children and youth, she worked with First Nations and Métis families and communities across British Columbia, and has deep connections with Elders, individuals and community leaders working to address the legacy of residential schools for children and youth today by reforming child welfare, language revitalization and criminal justice innovation.

As a lawyer and provincial judge, Turpel-Lafond has been involved in projects relating to improving supports for Indigenous peoples, especially in addressing the unique circumstances and needs of children and youth involved in the justice system. More recently, Turpel-Lafond has represented Canada in work with the United Nations to advance child welfare change and issues of concern to Indigenous peoples.


Mary Ellen Turpel-Lafond. Credit: Office of the Representative for Children and Youth

“Mary Ellen has been a tireless advocate for vulnerable children and for Indigenous rights in the legal system, making her the ideal candidate to lead the IRSHDC and an exemplary addition to the Allard School of Law,” said UBC President Santa J. Ono. “While the province’s Representative for Children and Youth, she was the voice for young people who were not able to speak up for themselves. As the director of IRSHDC, Mary Ellen will ensure that the voices and the experiences of people who suffered at Indian residential schools in their childhood for a century are articulated and understood.”

As director of the Indian Residential School History and Dialogue Centre, which officially opened on April 9, 2018, Turpel-Lafond will ensure the centre provides residential school survivors access to the records gathered by the Truth and Reconciliation Commission of Canada (TRC). She will coordinate programming for residential school survivors and initiatives to inform UBC faculty, staff, students and the public about the history and lasting effects of the Indian residential school system, and work with individuals, families and communities on addressing the continuing legacy of the schools. Her role will also ensure that the centre fulfills its promise to serve as a leading location for the many forms of dialogue required to fully respond to the TRC’s Calls to Action.

“It is crucial that the Indian Residential School History and Dialogue Centre be responsive to the needs, and reflect in a respectful way the lived experiences and trauma, of residential school survivors. I am confident that Mary Ellen will honour survivors and ensure the centre contributes in a meaningful way to greater understanding of the effects Indian residential schools have had on Indigenous communities and Canada as a whole,” said Linc Kesler, strategic advisor to the president on aboriginal affairs and director of the First Nations House of Learning at UBC.

Turpel-Lafond will also join UBC as a professor with the Allard School of Law. Turpel-Lafond is a member of the Indigenous bar as well as the Law Societies of British Columbia, Nova Scotia and Saskatchewan. She was a Saskatchewan Provincial Court judge for 20 years (1998-2018). She has served as a mediator and negotiator on land claims, Indigenous and human rights matters, and worked in public law litigation. She is the author of more than 50 published works and reports.

“Mary Ellen’s experience and vision will ensure that this Centre lives up to its promises to residential school survivors. All of us at Allard School of Law are also very excited to welcome Mary Ellen as a mentor and leader in our school,” said Catherine Dauvergne, Dean of the Allard School of Law.

Turpel-Lafond’s two roles at UBC mark a return to academic life. Early in her career, she was a tenured law professor at the Schulich School of Law at Dalhousie. She has also instructed in a number of law schools across Canada and the United States. Turpel-Lafond holds a doctorate in law from Harvard Law School, a master’s in international law from Cambridge University, a JD from Osgoode Hall at York University and a bachelor of arts degree from Carleton University.

“This opportunity to deepen the dialogue and respond to the legacy of residential schools is historic,” said Turpel-Lafond. “What happened with these schools, and the policies that permitted them to flourish, must never be forgotten or set aside.

“The university community at UBC, the learning community nationally, and civil society, must continue to engage with survivors, families and communities on their experiences, thus ensuring the legacy is critically examined and intergenerational consequences are understood and addressed.  With understanding will come dialogue on necessary actions for recognizing and respecting Indigenous peoples’ human rights and the revitalization of Indigenous languages, education systems, laws, cultures and self-determination. The child welfare issues alone are a matter recognized recently by the Government of Canada as a humanitarian crisis experienced today but connected to the legacy of these schools.”

Turpel-Lafond has received numerous accolades for her work, including the International Society of Adoptable Children’s Lifetime Achievement Award in 2017 for her work in promoting adoption and kinship placement in BC. That same year, she was also awarded the President’s Award from the BC Government Employees’ Union for her leadership around children’s services and human rights.

Turpel-Lafond started her role as a professor in the Allard School of Law on March 23 and assumes her director role on June 1.

https://phys.org/news/2018-04-arctic-messageclimate-big.html

Melting Arctic sends a message—climate change is here in a big way

April 26, 2018 by Mark Serreze, The Conversation
Scientists on Arctic sea ice in the Chukchi Sea, surrounded by melt ponds, July 4, 2010. Credit: NASA/Kathryn Hansen

Scientists have known for a long time that as climate change started to heat up the Earth, its effects would be most pronounced in the Arctic. This has many reasons, but climate feedbacks are key. As the Arctic warms, snow and ice melt, and the surface absorbs more of the sun’s energy instead of reflecting it back into space. This makes it even warmer, which causes more melting, and so on.
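
The amplification logic can be made concrete with a deliberately crude loop: an initial push of warming removes some reflective ice, the darker surface absorbs more energy, and that extra absorption feeds a further, smaller push. The sketch below only illustrates that compounding; the numbers are arbitrary and it is not a climate model.

    # Toy ice-albedo feedback: each increment of warming returns a fraction of itself
    # as additional warming via lost reflectivity. Arbitrary numbers, not a climate model.
    initial_push = 1.0       # initial temperature rise, degrees C
    feedback = 0.3           # fraction of each degree returned as extra warming (invented)

    total, increment = 0.0, initial_push
    for step in range(1, 11):
        total += increment
        increment *= feedback
        print(f"after round {step}: {total:.2f} C")

    # The series converges to initial_push / (1 - feedback), about 1.43 C here: the
    # feedback does not run away, but the final warming exceeds the initial push.

This compounding, strongest where snow and ice are being lost, is why the Arctic warms faster than the planet as a whole.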

This expectation has become a reality that I describe in my new book “Brave New Arctic.” It’s a visually compelling story: The effects of climate change are evident in shrinking ice caps and glaciers and in Alaskan roads buckling as permafrost beneath them thaws.

But for many people the Arctic seems like a faraway place, and stories of what is happening there seem irrelevant to their lives. It can also be hard to accept that the globe is warming up while you are shoveling out from the latest snowstorm.

Since I have spent more than 35 years studying snow, ice and cold places, people often are surprised when I tell them I once was skeptical that human activities were playing a role in climate change. My book traces my own career as a climate scientist and the evolving views of many scientists I have worked with. When I first started working in the Arctic, scientists understood it as a region defined by its snow and ice, with a varying but generally constant climate. In the 1990s, we realized that it was changing, but it took us years to figure out why. Now scientists are trying to understand what the Arctic’s ongoing transformation means for the rest of the planet, and whether the Arctic of old will ever be seen again.

Arctic sea ice has not only been shrinking in surface area in recent years – it’s becoming younger and thinner as well.

Evidence piles up

Evidence that the Arctic is warming rapidly extends far beyond shrinking ice caps and buckling roads. It also includes a melting Greenland ice sheet; a rapid decline in the extent of the Arctic’s floating sea ice cover in summer; warming and thawing of permafrost; shrubs taking over areas of tundra that formerly were dominated by sedges, grasses, mosses and lichens; and a rise in temperature twice as large as that for the globe as a whole. This outsized warming even has a name: Arctic amplification.

The Arctic began to stir in the early 1990s. The first signs of change were a slight warming of the ocean and an apparent decline in sea ice. By the end of the decade, it was abundantly clear that something was afoot. But to me, it looked like natural climate variability. As I saw it, shifts in wind patterns could explain a lot of the warming, as well as loss of sea ice. There didn’t seem to be much need to invoke the specter of rising greenhouse gas levels.

In 2000 I teamed up with a number of leading researchers in different fields of Arctic science to undertake a comprehensive analysis of all evidence of change that we had seen and how to interpret it. We concluded that while some changes, such as loss of sea ice, were consistent with what climate models were predicting, others were not.

Collapsed block of ice-rich permafrost along Drew Point, Alaska, at the edge of the Beaufort Sea. Coastal bluffs in this region can erode 20 meters a year (around 65 feet). Credit: USGS

To be clear, we were not asking whether the impacts of rising greenhouse gas concentrations would appear first in the Arctic, as we expected. The science supporting this projection was solid. The issue was whether those impacts had yet emerged. Eventually they did – and in a big way. Sometime around 2003, I accepted the overwhelming evidence of human-induced warming, and started warning the public about what the Arctic was telling us.

Seeing is believing

Climate change really hit home for me when I found out that two little ice caps in the Canadian Arctic I had studied back in 1982 and 1983 as a young graduate student had essentially disappeared.

Bruce Raup, a colleague at the National Snow and Ice Data Center, has been using high-resolution satellite data to map all of the world’s glaciers and ice caps. It’s a moving target, because most of them are melting and shrinking – which contributes to sea level rise.

Hidden Creek Glacier, Alaska, photographed in 1916 and 2004, with noticeable ice loss. Credit: S.R. Capps, USGS (top), NPS (bottom)

One day in 2016, as I walked past Bruce’s office and saw him hunched over his computer monitor, I asked if we could check out those two ice caps. When I worked on them in the early 1980s, the larger one was perhaps a mile and a half across. Over the course of two summers of field work, I had gotten to know pretty much every square inch of them.

When Bruce found the ice caps and zoomed in, we were aghast to see that they had shrunk to the size of a few football fields. They are even smaller today—just patches of ice that are sure to disappear within a few years.

Today it seems increasingly likely that what is happening in the Arctic will reverberate around the globe. Arctic warming may already be influencing weather patterns in the middle latitudes. Meltdown of the Greenland ice sheet is having an increasing impact on sea level rise. As permafrost thaws, it may start to release carbon dioxide and methane to the atmosphere, further warming the climate.

I often find myself wondering whether the remains of those two little ice caps I studied back in the early 1980s will survive another summer. Scientists are trained to be skeptics, but for those of us who study the Arctic, it is clear that a radical transformation is underway. My two ice caps are just a small part of that story. Indeed, the question is no longer whether the Arctic is warming, but how drastically it will change – and what those changes mean for the planet.

 

https://www.thecipherbrief.com/article/tech/artificial-intelligence-welcome-age-disruptive-surprise

Artificial Intelligence: Welcome to the Age of Disruptive Surprise

With the last few years of progress in artificial intelligence, it is hard to look forward without excitement…and apprehension. If you are paying attention at all, you are wondering what lies ahead. Will it be great? Will it be disastrous for many? Surely because we are inventing it, we have a good sense of what is being spawned? But no, this technology feels different. It almost feels like it is being discovered, rather than being invented. It will be big, it will impact our lives and our society. It will be disruptive…and we don’t know how.

I spent a career in intelligence learning the business of forecasting and warning, and I teach those things today. I learned that warning is easier than forecasting—usually you warn of vulnerabilities and possibilities, but you forecast likelihoods. Likelihoods are much harder to determine. On forecasting, I learned the hard way to be very humble. I learned that the word “probably” is overused…and the words “almost certainly” are rarely deserved when we are talking about anything over the horizon.

The difference between warning and forecasting plagues discussion about artificial intelligence. When such visionaries as Stephen Hawking and Elon Musk warn of what could happen…the world-changing hazards that might come with advanced artificial intelligence—“superintelligence,” to use Nick Bostrom’s term—it is worth paying serious attention. But when it comes to forecasting what will happen, it is easy to feel helpless in choosing between such credentialed observers as Ray Kurzweil and Rodney Brooks. Kurzweil, Google’s in-house futurist and director of engineering, calculates we will see a computer pass the Turing test – convincingly mimicking human intelligence – by 2029, and nanobots in the human brain will connect it to the cloud by the late 2030s. Brooks, former director of MIT’s Computer Science and AI Laboratory, says, “Mistaken predictions lead to fears of things that are not going to happen, whether it’s the wide-scale destruction of jobs…or the advent of AI that has values different from ours and might try to destroy us.”

I agree that there are many reasons to warn about where we may be headed with Artificial Intelligence. I believe this is the most important leap in technology since man discovered how to harness fire, and we are still struggling with fire’s demon side. Nearly every aspect of our lives is being touched by digital technologies, and everything digital will be affected by artificial intelligence. Our leaders’ decision-making in economics, law enforcement, and warfare will be accelerated by artificial intelligence, eventually being accelerated to a point where humans often will stand aside.

Waiting for human orders—waiting for our own plodding ability to perceive, grasp, and react—could mean losing a life-and-death competition. Our society, commerce, governance, and statecraft were built for the analog/industrial world. I see no sign they will catch up to the realities of a digital world, much less an AI-accelerated world, until after disasters wake us.

But the forecaster in me yearns to move beyond such warnings. When considering our likely future in an AI-accelerated world, what can I offer? I’m not even tempted to forecast the progress of the technology. Rather, from a career in supporting national security decision-making, my mind goes to that arena of our AI-accelerated future. When weighing the current pace of advances in autonomous systems, machine cognition, artificial intelligence, and machine-human partnership, here is the very short list of things I can forecast with confidence:

  • We will be surprised…strategically and repeatedly. It won’t be because we lack imagination—science fiction writers are doing important work framing the possibilities and making them relatable. It will be because we will have difficulty sorting the likely from the merely possible.
  • Not all the surprises will be negative. Indeed, most are likely to be positive. In science and technology, we tend to call positive surprises “breakthroughs.” But the bigger the breakthrough, the bigger the disruption. The nearest example might be the challenge for our economy to absorb more than 3 million truckers as driverless trucks become a reality. Imagine the jolt to our current economy with breakthroughs in clean, renewable energy, in curing cancer, or when we finally get flying cars.
  • Our nature to invest reactively rather than preemptively will continue to dominate our decision-making, so we will lurch from disruption to disruption. (Our current struggle with massive hacking events is instructive here.) Preempting the most dire possibilities will seem too expensive, too divisive or too different from today’s norms.
  • The disruptions will make us all wish we could slow the pace of change in this arena, but that will not feel like a viable option. We will always have our eye on competitors who are less sensitive to the human cost of AI-driven disruption.

As the visionaries and practitioners argue about what AI will and won’t be able to do, no controversy is more important than whether AI will pass the level of human cognition. Will it acquire the ability to understand…the ability to reason? This is the AI that “wakes up,” becomes self-aware, and I see no physical law prohibiting that milestone. We will see AI have enormous impact even short of that threshold, but the greatest dangers and opportunities lie on the other side of its awakening.

But forecasting that breakthrough—whether or when it will happen—seems out of reach for now. An artificial intelligence that is conscious is a good subject for warning, but a poor subject for forecasting, at least until we have a better notion of what “consciousness” is. If it happens, it will be a pivot-point in human history.

As I learned in intelligence, when forecasting is impossible but the stakes are high, it is often worthwhile to at least identify key indicators to watch for. As I watch the development of AI, there are three particular breakthroughs that I am watching for…three milestones in cognition that may foreshadow human-level reasoning:

  • When does a machine first appreciate the very special thing that is a fact? In logic, a single fact can trump a mountain of source material and pages of exquisite reasoning. Currently, no machine I know of can discern a fact from an assertion. My thermostat determines a fact—that the temperature in my den is 70° — but it doesn’t recognize it as one. If it had conflicting sources of temperature information, it would be flummoxed. The ability to sort fact from assertion is especially important because most scenarios of a super-intelligent AI breaking out of control have it first connecting to the internet as its source of unbounded knowledge and control. Unless anchored in facts, it would find the sheer volume of Elvis sightings convincing.
  • When does a machine make something of “the dog that didn’t bark?” In reasoning—in testing hypotheses—the absence of something that should be there can be meaningful to analysts. Right now, machine cognition is straining to make something of the information that it is fed. It doesn’t spend cycles noticing that something should be there but isn’t.
  • When does a machine deal with the question, “Why?” I don’t mean here that machines will continue to be unable to explain why they produced a particular answer. (We are likely to be able to teach them to show us how they developed the answer, and that will be a big deal in helping us to partner with AI.) But “Why?” is bigger than that more mechanical question of “How?” The human mind is “wired” to ask “Why?” It is essential to understanding cause and effect. The ape might understand that fire burns without understanding why it burns. But “Why?” is the question that allows the caveman to harness and create fire.

Warning, forecasting, and tracking indicators will be vital in harnessing this very disruptive technology. They may help us to react more productively to the disruptions when they come, perhaps restraining our human tendency to over-react to bad surprises. I also agree with MIT’s Max Tegmark, head of the Future of Life Institute, that the best way to avoid the most catastrophic possibilities of super-intelligent computers is to start shaping that future now. We have enough indicators already of AI’s awesome potential to start framing choices.

Bruce E. Pease is a former analyst and leader of analysis, with 30 years in the US intelligence community, including leading analysis on strategic weapons and emerging technologies. He served as Director of Intelligence Programs on the National Security Council in the Clinton Administration. He is also an experienced teacher of leadership, analysis, and ethics, and author of the forthcoming book, Leading Analysis. The views expressed here are his own and do not reflect the official views of CIA or any agency.

https://globenewswire.com/news-release/2018/04/25/1487062/0/en/Ray-Kurzweil-Head-of-Engineering-at-Google-LLC-uses-ARHT-Media-s-Holographic-Telepresence-Technology.html

Ray Kurzweil, Head of Engineering at Google LLC, uses ARHT Media’s Holographic Telepresence Technology

TORONTO, April 25, 2018 (GLOBE NEWSWIRE) — ARHT Media Inc. (“ARHT” or “the Company”) (TSXV:ART), a global leader in the development, production and distribution of high quality hologram content through its patented Augmented Reality Holographic Telepresence technology, is pleased to announce that on April 21, 2018, Mr. Ray Kurzweil, Head of Engineering at Google LLC, was successfully holoported from Boston, MA to the HIGH-TECH NATION forum in Minsk, Belarus.

Mr. Kurzweil, a keynote speaker at the forum, was holoported from Boston, USA to Minsk, Belarus using ARHT’s Holographic Telepresence technology for a 90-minute, real-time presentation in front of more than 3,000 attendees, including an interactive Q&A with the audience.

“We are extremely excited to have had the opportunity to work with Ray and showcase the power of our technology which enabled him to present at a venue which he couldn’t physically attend,” commented ARHT CEO, Larry O’Reilly. “Not only did we enable him to keep his commitment as keynote speaker, we also provided the audience the opportunity to enjoy a live Q&A session with Ray as if he was physically present.”

About ARHT Media

ARHT’s patented Augmented Reality Holographic Telepresence technology is the world’s first complete end-to-end solution for the creation, transmission, and delivery of lifelike digital human holograms, known as HumaGrams™. The Company’s technology is protected by U.S. Patent No. 9,581,962.

Connect with ARHT Media

Twitter:  http://www.twitter.com/ARHTmedia
Facebook:  http://www.facebook.com/ARHTmediainc
LinkedIn: http://www.linkedin.com/company/arht-media-inc-

For more information, please visit http://www.arhtmedia.com/ or contact the investor relations group at info@arhtmedia.com.

ARHT Media trades under the symbol “ART” on the TSX Venture Exchange.

ARHT Media Investor Contact:

Ali Mahdavi
am@spinnakercmi.com

ARHT Media Press Contact

Salman Amin
samin@arhtmedia.com

This press release contains “forward-looking information” within the meaning of applicable Canadian securities legislation. Forward-looking information includes, but is not limited to, the Company’s technology; the potential uses for the Company’s technology; the future planned events using the Company’s technology; the future success of the Company; the ability of the Company to monetize the HumaGram™ technology; the development of the Company’s technology; use of the Company’s technology by Ray Kurzweil; and interest from parties in ARHT’s products. Generally, forward looking information can be identified by the use of forward-looking terminology such as “plans”, “expects” or “does not expect”, “is expected”, “budget”, “scheduled”, “estimates”, “forecasts”, “intends”, “anticipates” or “does not anticipate”, or “believes”, or variations of such words and phrases or state that certain actions, events or results “may”, “could”, “would”, “might” or “will be taken”, “occur” or “be achieved”. Forward-looking information is subject to known and unknown risks, uncertainties and other factors that may cause the actual results, level of activity, performance or achievements of the Company to be materially different from those expressed or implied by such forward-looking information, including but not limited to: general business, economic and competitive uncertainties; regulatory risks; risks inherent in technology operations; and other risks of the technology industry. Although the Company has attempted to identify important factors that could cause actual results to differ materially from those contained in forward-looking information, there may be other factors that cause results not to be as anticipated, estimated or intended. There can be no assurance that such information will prove to be accurate, as actual results and future events could differ materially from those anticipated in such statements. Accordingly, readers should not place undue reliance on forward-looking information. The Company does not undertake to update any forward-looking information, except in accordance with applicable securities laws.

https://9to5mac.com/2018/04/26/photos-claim-to-show-redesigned-iphone-se-2-with-glass-back-for-wireless-charging-headphone-jack-remains/

Photos claim to show redesigned iPhone SE 2 with glass back for wireless charging, headphone jack remains

The iPhone SE 2 rumor train has been a wild one. There are many conflicting reports about the device, ranging from rumors of a major chassis change to only a minor hardware revision.

These new photos purport to show a fully built iPhone SE 2, with resemblances to some leaked shells first seen in March. They depict a device somewhere in between the two extremes of the rumored specs: a new glass back for wireless charging, but with the 3.5mm headphone jack remaining.

As ever, these images come from Chinese social media, so their provenance is sketchy. However, we have seen strong indicators that an iPhone SE 2 hardware update is indeed imminent.

As a reminder, the current iPhone SE has an aluminum back design with small glass panels at the top and bottom only.

These photos show a full glass back device, which would presumably enable wireless charging. The rear markings reflect the latest generation of US iPhones, which only feature the word ‘iPhone’ and lack regulatory FCC logos.

Disagreeing with a report from last week, the pictured chassis clearly retains a 3.5mm headphone jack. Apple dropped the port in late 2016 with the iPhone 7 and iPhone 7 Plus, but the rumor mill has not reached a consensus on whether the upgraded SE will keep it or lose it.

While the pictures indicate that the iPhone SE 2 will include some design tweaks on the rear side, the front of the phone looks almost identical to the current iPhone SE. Same Touch ID sensor, same display, same front camera layout and same bezels.

In regard to internals, various reports point to an upgraded processor (probably moving from an A9 to an A10 chip) and other component tweaks.

We are expecting some kind of new iPhone announcement sometime in May, or perhaps at WWDC in June, probably via a website press release and online store update. Apple’s flagship 2018 iPhones — such as the iPhone X Plus — are scheduled for a fall debut.



https://www.windowscentral.com/how-build-raspberry-pi-powered-nas

How to build a Raspberry Pi-powered NAS

Forget spending hundreds on a pre-assembled NAS solution. Do it yourself with Raspberry Pi!


There are some awesome Network Attached Storage (NAS) solutions out there that can be bought, set up and accessed within an hour. The downside to these devices is the cost, which can be upwards of $1,000 depending on what you need from connected storage. Luckily, if you haven’t quite got enough cash to spare, or simply wish to build one yourself, it’s easy to do with a Raspberry Pi.

We’re big fans of the Raspberry Pi. They’re excellent pieces of kit that offer everyone the opportunity to purchase a ready-to-go micro PC that can do almost anything — within reason. One such use of the newest Raspberry Pi 3 (which can be purchased for around $40) is to run a home or office-based NAS.

Pros and cons of Raspberry Pi NAS


There’s no true “best” option for everyone when it comes to a NAS. There are many factors to consider, including price, storage capacity, features, noise, and power consumption. Solutions by Synology and other companies are sound choices if you wish to simply plug and play but they come with a few limitations. A Raspberry Pi is vastly more affordable, allows you to fine-tune how it’s all set up, is ideal for those who wish to not rely on company support, and it lets you master a NAS OS.

But there are some downsides. The Ethernet port is rather slow on older models, though the latest 3 Model B+ sports Gigabit Ethernet and can therefore match pricier NAS units. There’s also the lack of space within enclosures: when you install the small PC into a case, you don’t have any room for a hard-disk drive (HDD), which means you’ll need an external enclosure. Finally, there’s the USB 2.0 bottleneck and a learning curve.

What you’ll need


Here’s everything you will need to get your Raspberry Pi-powered NAS up and running:

Getting started


While you can choose any OS that can work with a Raspberry Pi, we’re going with OpenMediaVault. Here’s how:

  1. Download OMV for Raspberry Pi onto a PC.
  2. Create a bootable USB drive with the ISO image.
  3. Connect the external hard drives to the Raspberry Pi.
  4. Plug the drive into the Raspberry Pi and switch it on.
  5. Select Install from the menu.
  6. Carefully follow the install wizard (each step is well explained).
  7. After updates have been installed, the server will reboot.
  8. Wait for OMV to finish booting.
  9. Log in to the NAS using the command line (type “root” and hit enter):
    • User: root.
    • Password: What you set during install.
  10. Run the command ifconfig to view the set IP address.
  11. Access the web interface by entering the IP address in a browser on a PC (a quick reachability check is sketched just after this list).
  12. Log in to the web interface:
    • User: admin
    • Password: openmediavault
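
If you would like to confirm from another PC that the NAS is answering before you open a browser, a short Python check against the address from step 10 works. The IP below is an example only; substitute the one ifconfig reported.

    # Quick reachability check for the OMV box, run from any machine on the same network.
    # Replace NAS_IP with the address reported by ifconfig in step 10.
    import socket
    from urllib.request import urlopen

    NAS_IP = "192.168.1.50"   # example address only

    # Port 80 is the OMV web interface by default; 22 (SSH) and 445 (SMB) will only
    # answer once you have enabled those services in the web UI.
    for port in (80, 22, 445):
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
            s.settimeout(2)
            state = "open" if s.connect_ex((NAS_IP, port)) == 0 else "closed/filtered"
            print(f"port {port}: {state}")

    # Fetch the login page to confirm the web interface is actually serving.
    with urlopen(f"http://{NAS_IP}", timeout=5) as response:
        print("web interface HTTP status:", response.status)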

You will now have full access to OMV on the Raspberry Pi and can configure it how you see fit, much like any NAS unit you can purchase.


There are also some things you should do after that:

  • Change the web password.
  • Activate various protocols, including SSH, SMB, FTP.
  • Create file systems for each drive or partition you wish to use.
  • Add users for friends and family.
  • Create shared folders.
  • Install plug-ins (like the excellent Plex Media Server).

https://gizmodo.com/ai-is-getting-pretty-good-at-studying-distant-galaxies-1825513242

AI Is Getting Pretty Good at Studying Distant Galaxies

Three simulated galaxies grouped by a neural network on top, followed by three real galaxies in the corresponding buckets on the bottom
Image: Top row: Greg Snyder, Space Telescope Science Institute, and Marc Huertas-Company, Paris Observatory. Bottom Row: Cosmic Assembly Near-infrared Deep Extragalactic Legacy Survey (CANDELS).

With all the warped images and funny names, it can be easy to forget that machine learning can have important uses in science—specifically, when it comes to categorizing things. Scientists have lately been putting a neural network to good use identifying distant galaxies.

An international team of researchers point out that there are a buttload of space pictures out there, from both the nearby and distant universe. But more surveys are around the corner that will have tons more data—more than humans can effectively sift through. It can be tough to synthesize this data, and connect the dots between young and old galaxies. That’s where neural networks come in.

“Once we’ve trained a computer on many thousands of images from our simulations, the computers can see things that we just can’t,” Joel Primack, distinguished professor of physics emeritus from the University of California, Santa Cruz, told Gizmodo. “That’s very helpful.”

The researchers started with a powerful simulation to create 35 model galaxies, then used further software to create around 10,000 images, both clear and fuzzed up. They trained a neural network on the images in order to identify their similarities. The researchers then fed the trained network real data—images of distant galaxies from the CANDELS survey. It successfully lumped the galaxies into three categories based on their shape. These categories correspond to three phases in galactic evolution, which they call the pre-blue nugget phase, the blue nugget phase, and the post-blue nugget phase.
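
The pipeline described here (label simulated images, train a network on them, then apply it to real survey cutouts) maps onto a standard convolutional image classifier. The sketch below is a generic, scaled-down stand-in, not the authors' code: the image size, network shape, class handling and random stand-in data are all assumptions made purely to show the shape of the approach.

    # Minimal 3-class image classifier in the spirit of the galaxy-phase study.
    # Not the authors' network: architecture, image size and data are placeholders.
    import torch
    import torch.nn as nn

    PHASES = ["pre-blue-nugget", "blue-nugget", "post-blue-nugget"]

    class GalaxyNet(nn.Module):
        def __init__(self, n_classes=3):
            super().__init__()
            self.features = nn.Sequential(
                nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
                nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            )
            self.classifier = nn.Linear(32 * 16 * 16, n_classes)  # assumes 64x64 inputs

        def forward(self, x):
            x = self.features(x)
            return self.classifier(x.flatten(1))

    # Training-loop sketch: `simulated_images` (N, 1, 64, 64) and integer `labels` would
    # come from the mock-observation pipeline; here they are random stand-ins.
    simulated_images = torch.randn(128, 1, 64, 64)
    labels = torch.randint(0, 3, (128,))

    model = GalaxyNet()
    optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
    loss_fn = nn.CrossEntropyLoss()

    for epoch in range(3):
        optimizer.zero_grad()
        loss = loss_fn(model(simulated_images), labels)
        loss.backward()
        optimizer.step()

    # Once trained on simulations, the same model is applied to a real survey cutout.
    real_cutout = torch.randn(1, 1, 64, 64)
    predicted_phase = PHASES[model(real_cutout).argmax(dim=1).item()]
    print("predicted phase:", predicted_phase)

In the actual study, the training images come from mock observations of the 35 simulated galaxies and the real inputs are CANDELS cutouts; the stand-ins above simply mark where that data would go.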

Basically, after feeding the neural network images of simulated galaxies, the researchers were able to get useful information about real galaxies. That’s pretty cool.

A neural network would obviously be super helpful for large-scale surveys. The Wide Field Infrared Survey Telescope, slated to launch in the 2020s, could capture millions of galaxies at Hubble’s resolution in single images. The Large Synoptic Survey Telescope will image a huge chunk of the southern sky every night from Earth, and could record 15 terabytes of data each day. A neural network could quickly identify the things that stand out and might be of most interest to astronomers, or point out things a human eye might miss.

Others are excited. “There’s a hope among some researchers that if artificial intelligence can sort through astronomical data, classify it, and tell us about interesting things that it finds, then the human capacity for learning about the universe is expanded beyond what we imagine we’re capable of alone,” said Michael Oman-Reagan, PhD candidate at Memorial University in Newfoundland and Labrador, Canada, who researches exploration beyond the solar system and the potential for extraterrestrial life.

Perhaps machine learning could help humans in the search for extraterrestrial intelligence, he thought.

This is exciting stuff, but Primack warned me to be cautious—this is just a proof of concept. These training sets can take many hours’ worth of processing time to generate, so they’re not a realistic way to categorize the data just yet. On top of that, the simulations might still be too limited to fully capture the diversity of galaxies, according to the paper slated for publication in The Astrophysical Journal.

But things are progressing, and Primack’s isn’t the only team working on this. Others are also training computers on large galactic datasets, and Primack offered a shoutout to the Feedback in Realistic Environments collaboration, which is making realistic galactic models.

Ultimately, it shouldn’t be AI’s job to replace scientists, but instead help them manage the incredible amounts of data recorded by the newest observatories.

https://www.nature.com/articles/d41586-018-04979-4

Billion-star map of Milky Way set to transform astronomy

European Gaia spacecraft’s first major data dump — the most detailed 3D chart yet of our Galaxy — will keep researchers busy for decades.
ESA’s Gaia mission has produced the richest star catalogue to date.

The Milky Way galaxy has been charted by the Gaia mission in unprecedented detail. Credit: ESA/Gaia/DPAC

After a feverish wait, astronomers around the world have an ocean of new information to throw themselves into. This afternoon in Europe, the European Space Agency’s (ESA) Gaia mission published its first fully 3D map of the Milky Way.

The data haul includes the positions of nearly 1.7 billion stars, and the distance, colours, velocities and directions of motion of about 1.3 billion of them. Together they form an unprecedented live movie of the sky, covering a volume of space 1,000 times larger than previous surveys have (see ‘Gaia’s gold’). “In my professional opinion, this is crazy awesome,” says Megan Bedell of the Center for Computational Astrophysics in New York, one of the many astronomers who will conduct studies based on the data set. “I think the whole community is eager to dive in.”

Within hours of the catalogue going online, 3,000 users from around the world had already started downloading the data, ESA said in a tweet.

“We’re very curious to see what the community will do with it,” says Anthony Brown, an astronomer at the Leiden Observatory in the Netherlands who chairs Gaia’s data-processing collaboration.

At an event to present the Gaia catalogue at the Royal Astronomical Society in London, astronomer Gerry Gilmore of the University of Cambridge, UK, presented a striking video that extrapolated on the Gaia data to simulate the future motion of millions of stars. “Everything moves,” he said.


The 2-tonne, €1-billion Gaia mission launched in late 2013 and began collecting scientific data in July 2014. From a vantage point beyond our planet, Gaia tracks Earth in its orbit around the Sun. It makes repeated measurements to estimate the distances of stars — and other celestial objects — using a technique called parallax (see ‘The parallax effect’).
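
The geometry behind parallax reduces to one relation: a star's distance in parsecs is the reciprocal of its parallax angle in arcseconds, so a one-milliarcsecond parallax corresponds to a thousand parsecs. A short worked example, using the well-measured parallax of Proxima Centauri (about 768 milliarcseconds):

    # Distance from parallax: d [parsecs] = 1 / p [arcseconds].
    def distance_parsecs(parallax_mas):
        """Convert a parallax in milliarcseconds to a distance in parsecs."""
        return 1000.0 / parallax_mas

    print(distance_parsecs(768.0))   # Proxima Centauri: about 1.30 pc (roughly 4.2 light years)
    print(distance_parsecs(1.0))     # a 1 mas parallax puts a star at 1,000 pc

In practice Gaia's pipeline has to treat measurement noise far more carefully than this naive inversion, especially for the most distant stars, but the underlying geometry is the same.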

As well as making its 551-gigabyte database available today, the Gaia team also released a number of scientific papers. The main goal of these was to describe quality checks the researchers did on the data and demonstrate how it can be used: the mission’s policy is to make the catalogue immediately available to the broader community, rather than reserve it for the team’s own science studies first.

Still, the Gaia papers also described a wealth of original findings, said Floor van Leeuwen, another senior Gaia scientist from Cambridge who was at the press briefing. For example, he showed how Gaia proved for the first time that certain star clusters puff up at the same time as large stars sink to their centres. “We weren’t allowed to make discoveries, but we couldn’t avoid making them,” he said.


One of those findings has implications far beyond the Milky Way. Some astronomers are especially eager to see Gaia’s measurements of certain types of variable star that are used as ‘standard candles’ of cosmology. Knowing the precise distances of stars of this type in the Milky Way makes them useful as yardsticks for measuring distances to galaxies much farther away. In particular, astronomers use standard candles to estimate how fast the Universe is expanding, but in recent years their measurement has been in apparent contradiction — or “tension”, as scientists say — with predictions based on maps of the cosmic microwave background, the afterglow of the Big Bang. A preliminary look at the Gaia data shows that the measurements of the standard candles are now more precise, Gilmore said at the press briefing. But, he added, “at face value, the tension is still there”.

Dozens of preprints are likely to appear in the next few days, Gilmore says, as teams around the world download Gaia data and run them through algorithms they have been honing for years in preparation. For example, researchers will be able to test models for how the Galaxy formed through mergers of smaller galaxies; measure the distribution of dark matter; and refine their theories for how stars evolve as they burn through their reserves of nuclear fuel.

Denis Erkal, an astronomer at the University of Surrey in Guildford, UK, and his collaborators plan to use Gaia data to weigh the Large Magellanic Cloud, the largest of the dwarf galaxies orbiting the Milky Way. They will do so by measuring tidal motions in our Galaxy’s stars that are caused by the dwarf galaxy — a bit like weighing the Moon by measuring its effects on Earth’s oceans.

Gaia had released a preliminary catalogue in 2016, but at that time it did not yet have enough data to directly measure the distances of many stars. Further data releases will contain more and more information and will enable entirely new kinds of studies (the next one is due in 2020). Some researchers expect to discover tens of thousands of exoplanets by watching stars wobble under their planets’ gravitational pull — but the probe must collect several years’ more data for these motions to become apparent. Others will inspect similar wobbles in search of evidence of the passage of gravitational waves. In addition to tracking stars, the probe has monitored asteroids and will help scientists to monitor bodies in the Solar System that might be on a collision course with Earth.

A technical glitch in February temporarily sent Gaia into ‘safe mode’, but the probe is in overall good health, says project scientist Timo Prusti at ESA’s European Space Research and Technology Centre in Noordwijk, the Netherlands. If nothing breaks down and if ESA continues extending the mission, Gaia has enough fuel to keep operating until 2024, for a total of ten years, he says.