http://www.netnewsledger.com/2017/10/20/much-screen-time-not-enough-sleep/

Too Much Screen Time – Not Enough Sleep

A new study of adolescents suggests that obtaining an insufficient amount of sleep increases variability in sadness, anger, energy, and feelings of sleepiness.

THUNDER BAY – TECH – If you’re a young person who can’t seem to get enough sleep, you’re not alone: A new study led by San Diego State University Professor of Psychology Jean Twenge finds that adolescents today are sleeping fewer hours per night than older generations. One possible reason? Young people are trading their sleep for smartphone time.

Most sleep experts agree that adolescents need 9 hours of sleep each night to be engaged and productive students; less than 7 hours is considered to be insufficient sleep. A peek into any bleary-eyed classroom in the country will tell you that many youths are sleep-deprived, but it’s unclear whether young people today are in fact sleeping less.

To find out, Twenge, along with psychologist Zlatan Krizan and graduate student Garrett Hisler (both at Iowa State University in Ames), examined data from two long-running, nationally representative, government-funded surveys of more than 360,000 teenagers. The Monitoring the Future survey asked U.S. students in the 8th, 10th, and 12th grades how frequently they got at least 7 hours of sleep, while the Youth Risk Behavior Surveillance System survey asked 9th- to 12th-grade students how many hours of sleep they got on an average school night.

Combining and analyzing data from both surveys, the researchers found that about 40% of adolescents in 2015 slept less than 7 hours a night, which is 58% more than in 1991 and 17% more than in 2009.
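Those relative figures imply approximate rates for the earlier survey years. As a quick back-of-the-envelope check using only the percentages stated above:

```python
# Implied share of adolescents sleeping < 7 hours a night, derived from the
# article's stated relative increases (a rough check, not the survey data itself).
rate_2015 = 40.0                  # % sleeping less than 7 hours in 2015
rate_1991 = rate_2015 / 1.58      # 2015 is reported as 58% higher than 1991
rate_2009 = rate_2015 / 1.17      # 2015 is reported as 17% higher than 2009

print(round(rate_1991, 1), round(rate_2009, 1))  # → 25.3 34.2
```

So roughly a quarter of adolescents were short on sleep in 1991, versus about a third in 2009 and two in five by 2015.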

Delving further into the data, the researchers learned that the more time young people reported spending online, the less sleep they got.

Teens who spent 5 hours a day online were 50% more likely to not sleep enough than their peers who only spent an hour online each day.

Beginning around 2009, smartphone use skyrocketed, which Twenge believes might be responsible for the 17% bump between 2009 and 2015 in the number of students sleeping less than 7 hours. Not only might teens be using their phones when they would otherwise be sleeping, the authors note, but previous research suggests the light wavelengths emitted by smartphones and tablets can interfere with the body’s natural sleep-wake rhythm. The researchers reported their findings in the journal Sleep Medicine.

“Teens’ sleep began to shorten just as the majority started using smartphones,” said Twenge, author of iGen: Why Today’s Super-Connected Kids Are Growing Up Less Rebellious, More Tolerant, Less Happy–And Completely Unprepared for Adulthood. “It’s a very suspicious pattern.”

Students might compensate for that lack of sleep by dozing off during daytime hours, adds Krizan.

“Our body is going to try to meet its sleep needs, which means sleep is going to interfere or shove its nose in other spheres of our lives,” he said. “Teens may catch up with naps on the weekend or they may start falling asleep at school.”

For many, smartphones and tablets are an indispensable part of everyday life, so the key is moderation, Twenge stresses. Limiting usage to 2 hours a day should leave enough time for proper sleep, she says. And that’s valuable advice for young and old alike.

“Given the importance of sleep for both physical and mental health, both teens and adults should consider whether their smartphone use is interfering with their sleep,” she says. “It’s particularly important not to use screen devices right before bed, as they might interfere with falling asleep.”

https://motherboard.vice.com/en_us/article/kz7jem/silicon-valley-digitalism-machine-religion-artificial-intelligence-christianity-singularity-google-facebook-cult

Silicon Valley’s Radical Machine Cult

Wolfram Klinger

From afterlife to machine transcendence, Digitalism offers a new promise of paradise.

http://www.independent.co.uk/life-style/gadgets-and-tech/news/alphago-zero-go-deepmind-ai-artificial-intelligence-google-machine-learning-human-knowledge-a8009801.html

Chinese Go player Ke Jie reacts during his second match against Google’s artificial intelligence program AlphaGo at the Future of Go Summit in Wuzhen, Zhejiang province, China May 25, 2017 / REUTERS/Stringer

It became one of the greatest Go players of all time in a matter of days, though humans have been playing the game for thousands of years

Google has developed a computer program that teaches itself.

The company’s AI division, DeepMind, has unveiled AlphaGo Zero, an extremely advanced system that managed to accumulate thousands of years of human knowledge within days.

DeepMind says it’s the most powerful program it has created, because it isn’t “constrained by the limits of human knowledge”.

AlphaGo Zero is the latest evolution of AlphaGo, the first computer program to ever defeat a world champion at the ancient Chinese game of Go.

Unlike previous versions of AlphaGo, however, Zero was only provided with the rules of the game. It had to learn how to play all by itself, whereas the others were trained using data from thousands of games played by humans.

After just three days, AlphaGo Zero took on the version of AlphaGo that defeated 18-time world champion Lee Sedol. Zero won comprehensively, by 100 games to nil.

After 40 days, it competed against AlphaGo Master, the version that conquered Ke Jie, the world’s top player. Zero won by 89 games to 11.

“Previous versions of AlphaGo initially trained on thousands of human amateur and professional games to learn how to play Go,” wrote DeepMind in a blog post.

“AlphaGo Zero skips this step and learns to play simply by playing games against itself, starting from completely random play.”
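The principle of learning a game purely from self-play that starts out random can be illustrated with a toy far simpler than DeepMind's actual system (which pairs deep neural networks with tree search). The sketch below, with invented parameters, learns tabular values for the take-away game Nim entirely by playing against itself:

```python
import random

# Toy self-play learner (NOT DeepMind's method): Nim with a pile of N stones,
# players alternately take 1-3 stones, and whoever takes the last stone wins.
# Play starts completely random and improves as Q-values are learned.
N = 10
Q = {(s, a): 0.0 for s in range(1, N + 1) for a in (1, 2, 3) if a <= s}

def actions(s):
    return [a for a in (1, 2, 3) if a <= s]

def best(s):
    # current best-known move when s stones remain
    return max(actions(s), key=lambda a: Q[(s, a)])

random.seed(0)
for _ in range(2000):                 # self-play episodes
    s = N
    while s > 0:
        # explore randomly half the time, otherwise play the learned move
        a = random.choice(actions(s)) if random.random() < 0.5 else best(s)
        if a == s:
            Q[(s, a)] = 1.0           # taking the last stone wins
            break
        # the opponent moves next from s - a, so our value is minus theirs
        Q[(s, a)] = -max(Q[(s - a, b)] for b in actions(s - a))
        s -= a

# the learned policy leaves the opponent a multiple of 4 whenever possible
print([best(s) for s in (5, 6, 7)])  # → [1, 2, 3]
```

Despite beginning with random moves and no human examples, the learned policy converges on the classic winning strategy of always leaving the opponent a multiple of four stones.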

Go has been played by humans for thousands of years, yet within a matter of days, a computer program was able to learn the game from scratch and become, quite possibly, the greatest player of the game the world has ever seen.

Though it started off like a human beginner, Zero quickly improved and even developed “unconventional” strategies and new moves that humans had never thought of.

“Artificial intelligence research has made rapid progress in a wide variety of domains from speech recognition and image classification to genomics and drug discovery. In many cases, these are specialist systems that leverage enormous amounts of human expertise and data,” said DeepMind.

“However, for some problems this human knowledge may be too expensive, too unreliable or simply unavailable. As a result, a long-standing ambition of AI research is to bypass this step, creating algorithms that achieve superhuman performance in the most challenging domains with no human input.”

AlphaGo Zero is seen as a significant step towards that goal, and its creators are confident that they’ll be able to use similar techniques to tackle major real-world problems in the future.

“If similar techniques can be applied to other structured problems, such as protein folding, reducing energy consumption or searching for revolutionary new materials, the resulting breakthroughs have the potential to positively impact society,” said DeepMind.

However, this will present far more difficult challenges than creating AlphaGo Zero. Go is governed by a finite set of rules, making it much easier to understand and master than more abstract issues. Still, it’s an enormous step.

“By not using human data, by not using human expertise in any fashion, we’ve actually removed the constraints of human knowledge,” said AlphaGo Zero’s lead programmer, David Silver, as reported by The Verge.

“It’s therefore able to create knowledge itself from first principles; from a blank slate.”

The research paper has been published in the science journal Nature.

https://www.inverse.com/article/37531-google-smart-reply-turing

I Used Gmail’s ‘Smart Reply’ for Every Email for a Week

Google wants to automate your email writing, and it looks like it has a decent shot at doing so. After using its new Smart Reply automated responses — which landed on Gmail for iOS and Android in May — for a week, I came away from it convinced artificial intelligence can replicate my voice. Whether that’s actually kind of creepy is a whole other question, though.

“Smart Reply utilizes machine learning to give you better responses the more you use it,” Greg Bullock, software engineer at Gmail, said at the time of the feature’s arrival on iOS. “So if you’re more of a ‘thanks!’ than a ‘thanks.’ person, we’ll suggest the response that’s, well, more you!”

So I decided to try it out for myself. For one week, I would use the iOS app to put Smart Reply through its paces. The rules were simple: use one of the three suggested responses as often as possible. If necessary, add more information to those replies before hitting “send.” Only switch to responding from scratch in the rarest of circumstances.

Depressingly, it worked very well.

The app captured my tone, picking up on British turns of phrase like “nice one” and “brilliant” and offering them back as responses. If an email offered a choice, Google would suggest replies based around those options. Am I really that predictable? Yes, and so are all of you.

Google’s system is built on hierarchical models of language, which break down sentences into discrete components that give clues about an appropriate response. For example, saying that somebody gave you a “glance” can be positive or negative, and the other components are analyzed to determine the context for the word. The system also looks at whether words appear in the subject or the message body, shaping the response depending on the current conversation:

A side-by-side comparison of how responses can vary.

To be clear, there is some advanced tech under the hood. The system uses machine learning algorithms to craft replies. When the feature launched in its first form back in 2015, senior research scientist Greg Corrado explained why this is necessary:

A naive attempt to build a response generation system might depend on hand-crafted rules for common reply scenarios. But in practice, any engineer’s ability to invent “rules” would be quickly outstripped by the tremendous diversity with which real people communicate. A machine-learned system, by contrast, implicitly captures diverse situations, writing styles, and tones. These systems generalize better, and handle completely new inputs more gracefully than brittle, rule-based systems ever could.

Google claims that around 12 percent of replies on the mobile app use Smart Reply, and it’s easy to see why. Suddenly, agreeing to events or meetups became as simple as a press of a button. Forcing myself to reply meant it was easier to say yes than no.

Sometimes, it didn’t quite offer up what I expected. I received a lovely email from a PR agent I’d worked with before, who wanted to know if I was interested in learning more about a product launch. The story didn’t seem right for Inverse, but the only non-affirmative response Google offered was ‘Nope, thanks!’ Now, perhaps the app is onto something and I could do with being more forceful in my rejections, but it struck me as incredibly dismissive and not something I would ever say.

Other times, Google would push me to get sassy with my contacts. One set of suggested responses to a press release were ‘Very cool!’, ‘Thanks!’ and ‘Congratulations!’, the latter of which probably would have come off as slightly sarcastic. These moments were few and far between, though, and on the whole the options felt like Google was really trying to mimic me.

But as I grew accustomed to depending on the responses, it started to feel a bit unsettling. I felt a lot like FBI agent Dale Cooper in Twin Peaks: The Return, who’s able to make his way through life by giving short simple answers, often merely repeating back a phrase the other person just uttered.

Kyle MacLachlan as Dale Cooper.

Much like how David Lynch’s show gives the viewer a rather depressing insight into how people will follow along with very simple interactions, I felt like I was almost cheating at life. When fleshed-out responses came back with more information as recipients failed to clock that a computer had just responded, it felt like I was lying to people. Don’t you see? I wanted to scream. I’m not putting any effort into this!

“I didn’t even realize, because you say ‘brilliant’ all the time,” one friend told me, after I managed to sort out weekend plans through the app.

After my experiment ended, I chose not to continue using Google’s iOS app, as the built-in Apple version makes it easy to manage email from multiple accounts in a single inbox. But I do miss my automated replies; using them encouraged me to fire off responses more often, no matter how low-effort they were.

And perhaps that’s the lesson here. Artificial intelligence can improve our daily routines by encouraging good behaviors. To the people who received suggested responses, I’d taken the time to reply, and they appreciated that. Google might have taken my voice and played it back to people without anyone noticing, but if it means people speak to each other more often, maybe that’s not such a bad thing.

https://www.wareable.com/running/studio-treadmill-running-apple-watch-5254

Studio wants to help you train on the treadmill with an Apple Watch

Gamify your tedious runs from the wrist
Studio steps up Apple Watch treadmill runs

A new running app is aiming to help treadmill users by tracking progress and pulling in biometric data from an Apple Watch.

Studio, which is looking to emphasise the social element of running through in-app classes, allows users to see the progress and rankings of likeminded runners. The smartwatch, meanwhile, will provide info on heart rate and speed in order to add more depth to the leaderboards.

Read this: Best Apple Watch running apps

The process is also gamified through the concept of Fitcoins, the app’s currency. These are awarded depending on your mixture of distance, time elapsed and heart rate, the premise being to reward the work you put into a run. So, for example, if you had a high average heart rate and ran for a long distance, you’d receive more Fitcoins to personalise the Studio app.
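Studio hasn't published its actual formula, so the following is purely a hypothetical sketch of how such a reward might combine distance, elapsed time and heart-rate effort (every name, weight and threshold here is invented):

```python
def fitcoins(distance_km, minutes, avg_hr, resting_hr=60, max_hr=190):
    """Hypothetical Fitcoin reward: scales with distance, duration, and
    heart-rate effort. Not Studio's real formula, which is not public."""
    # normalise average heart rate into a 0..1 effort score
    effort = max(0.0, (avg_hr - resting_hr) / (max_hr - resting_hr))
    return round(distance_km * 10 + minutes * 0.5 + effort * 50)

# a harder run at the same distance and duration earns more coins
print(fitcoins(5.0, 30, 155))  # → 102
print(fitcoins(5.0, 30, 120))  # → 88
```

The key design point, as described above, is that two runners covering the same distance can earn different rewards depending on how hard their bodies worked.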

For those who own a treadmill and want to take part in the classes, there are several levels of difficulty to choose from — beginner, intermediate and advanced — as well as varying durations, with users able to run for 20, 30 or 45 minutes. You can take part in more focused sessions, too, such as a zero-to-5K course, and decide whether the classes should emphasise a certain genre of music.

An Apple Watch isn’t required for the Studio app, but it certainly appears to help users get the most out of the social aspects involved. It’s also one of the clearer examples we’ve seen of an app looking specifically to home in on the smarts of the Watch.

Whether this expands into deeper areas remains to be seen, but Studio certainly won’t be the last to try to take advantage of the device’s capabilities – even though, as is often the case, tracking distance from the wrist can be a tricky task.

Those looking to get involved can sign up for a fortnight trial of the app, though this does rise to $15 a month or $100 for an entire year if you want to continue tapping into the app.

http://www.zdnet.com/article/scientists-built-this-raspberry-pi-powered-3d-printed-robot-lab-to-study-flies/

Scientists built this Raspberry Pi-powered, 3D-printed robot-lab to study flies

Researchers have released designs and software for neuroscientists to 3D print their own Raspberry Pi-powered fly lab.


Researchers have created a Raspberry Pi-powered robotic lab that detects and profiles the behaviour of thousands of fruit flies in real time.

The researchers, from Imperial College London, built the mini Pi-powered robotics lab to help scale up analyses of fruit flies, which have become a popular proxy for scientists to study human genes and the wiring of the brain. The researchers call the lab an ethoscope, an open-source hardware and software platform for “ethomics”, which uses machine vision to study animal behaviour.

And while computer-assisted analysis promises to revolutionize research techniques for Drosophila (fruit fly) neuroscientists, the researchers argue its potential is constrained by custom hardware, which adds cost and often isn’t scalable.

The Raspberry Pi-based ethoscope offers scientists a modular design that can be built with 3D-printed components or even LEGO bricks at a cost of €100 per ethoscope.

“One of the most successful features of DAMs [Drosophila Activity Monitors] that we aimed to imitate is the ability to run dozens of experiments simultaneously, gathering data in real-time from thousands of flies at once, using a device that follows a ‘plug-and-play’ approach,” they write.

Besides the Raspberry Pi, the ethoscope requires an HD camera and a 3D-printed box-type frame with various compartments. They’ve also provided designs and instructions for creating the frame with LEGO or folded cardboard.

The Raspberry Pi and camera are located at the top for observing and recording a “behavioral arena” deck at the bottom, which is illuminated by an infrared LED light. They offer eight behavioral arena 3D print designs for different types of studies, such as sleep analysis, decision making, and feeding.

Each ethoscope is powered via USB and can be controlled from a PC through a web interface. The Raspberry Pi offloads data to PCs over wifi for analysis, which also ensures experiments aren’t constrained by storage capacity on the Raspberry Pi.

On the software side, they used a supervised machine learning algorithm to develop a tracking module and a real-time behavioral annotator that automatically labels different states of activity, such as walking, making micro-movements like eating or laying an egg, and not moving.
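The paper's annotator is a trained machine-learning system; as a heavily simplified stand-in (the thresholds, units and track data below are all invented for illustration), the three activity states it distinguishes can be pictured as classifying the frame-to-frame displacement of a tracked fly:

```python
# Simplified stand-in for the ethoscope's behavioural annotator: the real
# system uses supervised machine learning, whereas here fixed displacement
# thresholds (in mm, invented) classify each frame-to-frame movement.
def annotate(track, micro=0.2, walk=1.0):
    labels = []
    for (x0, y0), (x1, y1) in zip(track, track[1:]):
        d = ((x1 - x0) ** 2 + (y1 - y0) ** 2) ** 0.5
        if d >= walk:
            labels.append("walking")
        elif d >= micro:
            labels.append("micro-movement")   # e.g. eating or laying an egg
        else:
            labels.append("immobile")
    return labels

# hypothetical (x, y) positions of one fly across five video frames
track = [(0, 0), (0.05, 0), (0.4, 0), (2.0, 0), (2.0, 0.01)]
print(annotate(track))  # → ['immobile', 'micro-movement', 'walking', 'immobile']
```

The real annotator learns its decision boundaries from labelled examples rather than hand-set thresholds, which is what lets it separate subtle states like feeding from genuine immobility.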

“It may appear surprising, but fruit flies are smart animals and they can do pretty much everything humans do: flies know how to look for food, shelter and mating partners; they learn to avoid predators and aggressive mates; they communicate, court and engage in social lives,” said Quentin Geissmann, the PhD student at Imperial’s Department of Life Sciences who led the study.

https://thenextweb.com/artificial-intelligence/2017/10/20/googles-deepmind-achieves-machine-learning-breakthroughs-at-a-terrifying-pace/

Google’s DeepMind achieves machine learning breakthroughs at a terrifying pace

It’s time to add “AI research” to the list of things that machines can do better than humans. Google’s AlphaGo, the computer that beat the world’s greatest human Go player, just lost to a version of itself that’s never had a single human lesson.

Google is making progress in the field of machine learning at a startling rate. The company’s AutoML recently dropped jaws with its ability to self-replicate, and DeepMind is now able to teach itself better than the humans who created it can.

DeepMind is the lab behind both versions of AlphaGo, with the latest evolution dubbed AlphaGo Zero — which sounds like the prequel to a manga.

The original AlphaGo is a monster of technology with 40 AI processors and the data from thousands of Go matches built into it. From the ground up, it was “born” with a pretty decent understanding of the game. Over time, and under the direction of humans, it began to learn the game and its nuanced strategies.

Eventually, AlphaGo became so advanced that it was able to defeat the world’s top human player and establish AI’s supremacy in a game so difficult it makes chess look like checkers.

In short, AlphaGo is pretty legit.

The brilliant minds at Google decided being the best wasn’t good enough; they “evolved” AlphaGo into AlphaGo Zero, which was able to defeat AlphaGo at its own game only 40 days later.

Credit: Google

Let that sink in.

Now here’s the shocking part: AlphaGo Zero has only four AI processors, and the only data it was given was the rules of the game. Nobody taught it how to play or fed it thousands of matches to study.

According to Google’s blog:

This technique is more powerful than previous versions of AlphaGo because it is no longer constrained by the limits of human knowledge. Instead, it is able to learn tabula rasa from the strongest player in the world: AlphaGo itself.

The AI plays Go against itself, improving with every match. After millions of matches its strategy is, as far as humans are concerned, infallible. Both versions of the machine play the game at a level that’s considered superhuman.

The speed with which Google’s AutoML and DeepMind have taken “self learning” to the next level is wonderful and terrifying at the same time.

In order for AI to fulfill its promise to humanity, it has to ease our burdens and free our minds to solve uniquely human problems. A version of DeepMind that, in a little over a month, can teach itself to outperform a previous iteration is the realization of that ideal.

It’s time we took Sundar Pichai’s assertion that Google is an AI company seriously.

https://www.medicalnewstoday.com/articles/319817.php

Is gene editing ethical?

Gene editing illustration
Will gene editing become a part of everyday medicine?
If you bring up the subject of gene editing, the debate is sure to become heated. But are we slowly warming to the idea of using gene editing to cure genetic diseases, or even create “designer babies?”

Gene editing holds the key to preventing or treating debilitating genetic diseases, giving hope to millions of people around the world. Yet the same technology could unlock the path to designing our future children, enhancing their genome by selecting desirable traits such as height, eye color, and intelligence.

While gene editing has been used in laboratory experiments on individual cells and in animal studies for decades, 2015 saw the first report of modified human embryos.

The number of published studies now stands at eight, with the latest research having investigated how a certain gene affects development in the early embryo and how to fix a genetic defect that causes a blood disorder.

The fact that gene editing is possible in human embryos has opened a Pandora’s box of ethical issues.

So, who is in favor of gene editing? Do geneticists feel differently about this issue? And are we likely to see the technology in mainstream medicine any time soon?

What is gene editing?

Gene editing is the modification of DNA sequences in living cells. What that means in reality is that researchers can either add mutations or substitute genes in cells or organisms.

While this concept is not new, a real breakthrough came 5 years ago when several scientists saw the potential of a system called CRISPR/Cas9 to edit the human genome.

CRISPR/Cas9 allows us to target specific locations in the genome with much more precision than previous techniques. This process allows a faulty gene to be replaced with a non-faulty copy, making this technology attractive to those looking to cure genetic diseases.

The technology is not foolproof, however. Scientists have been modifying genes for decades, but there are always trade-offs. We have yet to develop a technique that works 100 percent of the time and doesn’t lead to unwanted and uncontrollable mutations elsewhere in the genome.

In a laboratory experiment, these so-called off-target effects are not the end of the world. But when it comes to gene editing in humans, this is a major stumbling block.
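As a loose computational analogy only (Cas9 targeting is molecular chemistry, not string matching, and the sequences below are invented), the on-target versus off-target distinction can be pictured as scanning a genome for a guide sequence and also flagging near-identical sites elsewhere:

```python
# Loose analogy for CRISPR targeting: a guide sequence finds its intended site
# by sequence matching, while near-identical sites elsewhere are potential
# off-target edits. The toy "genome" and guide below are invented.
def scan(genome, guide):
    exact, near = [], []
    for i in range(len(genome) - len(guide) + 1):
        window = genome[i:i + len(guide)]
        mismatches = sum(a != b for a, b in zip(window, guide))
        if mismatches == 0:
            exact.append(i)          # the intended on-target site
        elif mismatches == 1:
            near.append(i)           # potential off-target site
    return exact, near

genome = "ATGGCCATTGCAATGGGCCATTACAT"
guide = "GCCATTGCA"
print(scan(genome, guide))  # → ([3], [16])
```

The single-mismatch site at position 16 is the analogue of an off-target effect: close enough to the intended target that an imperfect editing system might cut there too.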

Here, the ethical debate around gene editing really gets off the ground.

When gene editing is used in embryos — or earlier, on the sperm or egg of carriers of genetic mutations — it is called germline gene editing. The big issue here is that it affects both the individual receiving the treatment and their future children.

This is a potential game-changer as it implies that we may be able to change the genetic makeup of entire generations on a permanent basis.

Who is in favor of gene editing?

Dietram Scheufele — a professor of science communication at the University of Wisconsin-Madison — and colleagues surveyed 1,600 members of the general public about their attitudes toward gene editing. The results revealed that 65 percent of respondents thought that germline editing was acceptable for therapeutic purposes.

When it came to enhancement, only 26 percent said that it was acceptable and 51 percent said that it was unacceptable. Interestingly, attitudes were linked to religious beliefs and the person’s level of knowledge of gene editing.

“Among those reporting low religious guidance,” explains Prof. Scheufele, “a large majority (75 percent) express at least some support for treatment applications, and a substantial proportion (45 percent) do so for enhancement applications.”

He adds, “By contrast, for those reporting a relatively high level of religious guidance in their daily lives, corresponding levels of support are markedly lower (50 percent express support for treatment; 28 percent express support for enhancement).”

Among individuals with high levels of technical understanding of the process of gene editing, 76 percent showed at least some support of therapeutic gene editing, while 41 percent showed support for enhancement.

But how do the views of the general public align with those of genetics professionals? Well, Alyssa Armsby and professor of genetics Kelly E. Ormond — both of whom are from Stanford University in California — surveyed 500 members of 10 genetics societies across the globe to find out.

What do professionals think?

Armsby says that “there is a need for an ongoing international conversation about genome editing, but very little data on how people trained in genetics view the technology. As the ones who do the research and work with patients and families, they’re an important group of stakeholders.”

The results were presented yesterday at the American Society for Human Genetics (ASHG) annual conference, held in Orlando, FL.

In total, 31.9 percent of respondents were in favor of research into germline editing using viable embryos. This sentiment was particularly pronounced among respondents under the age of 40, those with fewer than 10 years’ experience, and those who classed themselves as less religious.

The survey results also revealed that 77.8 percent of respondents supported the hypothetical use of germline gene editing for therapeutic purposes. For conditions arising during childhood or adolescence, 73.5 percent were in favor of using the technology, while 78.2 percent said that they supported germline editing in cases where a disease would be fatal in childhood.

On the subject of using gene editing for the purpose of enhancement, just 8.6 percent of genetics professionals spoke out in favor.

“I was most surprised, personally,” Prof. Ormond told Medical News Today, “by the fact that nearly [a third] of our study respondents were supportive of starting clinical research on germline genome editing already (doing the research and attempting a pregnancy without intent to move forward to a liveborn baby).”

This finding is in stark contrast to a policy statement that the ASHG published earlier this year, she added.

Professional organizations urge caution

According to the statement — of which Prof. Ormond is one of the lead authors — germline gene editing throws up a list of ethical issues that need to be considered.

The possibility of introducing unwanted mutations or DNA damage is a definite risk, and unwanted side effects cannot be predicted or controlled at the moment.

The authors further explain:

“Eugenics refers to both the selection of positive traits (positive eugenics) and the removal of diseases or traits viewed negatively (negative eugenics). Eugenics in either form is concerning because it could be used to reinforce prejudice and narrow definitions of normalcy in our societies.”

“This is particularly true when there is the potential for ‘enhancement’ that goes beyond the treatment of medical disorders,” they add.

While prenatal testing already allows parents to choose to abort fetuses carrying certain disease traits in many places across the globe, gene editing could create an expectation that parents should actively select the best traits for their children.

The authors take it even further by speculating how this may affect society as a whole. “Unequal access and cultural differences affecting uptake,” they say, “could create large differences in the relative incidence of a given condition by region, ethnic group, or socioeconomic status.”

“Genetic disease, once a universal common denominator, could instead become an artefact of class, geographic location, and culture,” they caution.

Therefore, the ASHG conclude that at present, it is unethical to perform germline gene editing that would lead to the birth of an individual. But research into the safety and efficacy of gene editing techniques, as well as into the effects of gene editing, should continue, providing such research adheres to local laws and policies.

In Europe, this is echoed by a panel of experts who urge the formation of a European Steering Committee to “assess the potential benefits and drawbacks of genome editing.”

They stress the need “to be proactive to prevent this technology from being hijacked by those with extremist views and to avoid misleading public expectation with overinflated promises.”

But is the public’s perception really so different from that of researchers on the frontline of scientific discovery?

Working together to safeguard the future

Prof. Ormond told MNT that “a lot of things are similar — both groups feel that some forms of gene editing are acceptable, and they seem to differentiate based on treating medical conditions as compared to treatments that would be ‘enhancements,’ as well as based on medical severity.”

“I do think there are some gaps […],” she continued, “but clearly knowledge and levels of religiosity impact the public’s views. We need to educate both professionals and the public so that they have a realistic sense of what gene editing can and cannot do. Measuring attitudes is difficult to do when people don’t understand a technology.”

While advances such as CRISPR/Cas9 may have brought the possibility of gene editing one step closer, many diseases and traits are underpinned by complex genetic interactions. Even a seemingly simple trait such as eye color is governed by a collection of different genes.

To decide what role gene editing will play in our future, scientific and medical professionals must work hand-in-hand with members of the general public. As the authors of the ASHG position statement conclude:

“Ultimately, these debates and engagements will inform the frameworks to enable ethical uses of the technology while prohibiting unethical ones.”

http://gadgets.ndtv.com/mobiles/news/samsung-linux-on-galaxy-dex-mobile-to-pc-support-1765071

Samsung’s DeX Mobile-to-PC Transition Tool Can Soon Run Linux Desktops

HIGHLIGHTS

  • Linux on Galaxy still in the works
  • Developers can sign up to get early access
  • DeX Mobile-to-PC transition tool was announced earlier this year

Samsung has announced that it is working on an app that will let developers run a Linux-based desktop using their Galaxy smartphones, thanks to the Samsung DeX platform. The move is clearly targeted at developers who want to run different Linux-based distributions but have so far been limited to doing so on a PC.

“Installed as an app, Linux on Galaxy will give smartphones the capability to run multiple operating systems, enabling developers to work with their preferred Linux-based distributions on their mobile devices,” explains Samsung. One of the biggest advantages of such a setup is that developers can simply switch to the app to run any program they need in a Linux OS, something generally not available on a regular smartphone OS.

Samsung confirms that Linux on Galaxy is still “a work in progress”, but developers and interested users can sign up at seap.samsung.com/linux-on-galaxy to get an early notification of availability.

“Linux on Galaxy is made even more powerful because it is DeX-enabled, giving developers the ability to create content on a large screen, powered only by their mobile device. This represents a significant step forward for software developers, who can now set up a fully functional development environment with all the advantages of a desktop setting that is accessible anytime, anywhere,” adds Samsung.

Unveiled earlier this year, the Samsung DeX tool is currently compatible with the company’s top-of-the-line smartphones, including the Galaxy S8, Galaxy S8+, and Galaxy Note 8, which offer users an Android-based, desktop-like experience. To get started, users plug the Samsung smartphone into the DeX Station, which connects it to an HDMI-compatible monitor as well as to any Bluetooth-enabled, USB or RF-type keyboard and mouse. With the new tool, Galaxy smartphone users can access apps, browse the internet, send messages, and more, directly from the phone on a larger display. Samsung DeX supports keyboard and mouse gestures. The service has so far been limited to Windows 10, but the new app tries to bring Linux as well.