https://www.raspberrypi.org/blog/pi-hole-raspberry-pi/

BLOCK ADS AT HOME USING PI-HOLE AND A RASPBERRY PI

Today’s blog post comes from Jacob Salmela, creator of Pi-hole, a network-wide ad blocker used by Raspberry Pi enthusiasts to block advertisements on all devices connected to their home network.

What is Pi-hole?

Pi-hole is a network-wide ad blocker. Instead of installing ad blockers on every device and in every browser, you can install Pi-hole once on your network, and it will protect all of your devices. Because it works differently from a browser-based ad blocker, Pi-hole also blocks ads in non-traditional places, such as in games and on smart TVs.

I originally made Pi-hole as a replacement for the AdTrap device. I have a background in networking, so I figured I could make something better with some inexpensive hardware like the Raspberry Pi. I spent two summers working on the project and made the code open source. Four years later, we have several developers working on Pi-hole, and we have grown into a very large project with a vibrant community.

How does it work?

Pi-hole functions as an internal, private DNS server for your network. For many home users, this service is already running on your router, but the router doesn’t know which domains serve advertisements. Pi-hole does: it intercepts any queries for known ad-serving domains and denies them, so ads are never downloaded.
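The core idea can be sketched in a few lines of Python. This is a toy illustration, not Pi-hole’s actual code: the blocklist entries, the IP addresses, and the upstream lookup below are all invented for the example.

```python
# Toy sketch of Pi-hole's core behavior: answer DNS queries for known
# ad-serving domains with an unroutable address so the ad content is
# never fetched, and forward everything else to a normal upstream
# resolver. Domains and IPs below are invented.

BLOCKLIST = {"ads.example.com", "tracker.example.net"}

def resolve(domain, upstream):
    """Return an IP for `domain`, sinkholing blocklisted names."""
    if domain.lower().rstrip(".") in BLOCKLIST:
        return "0.0.0.0"          # deny the query; the ad never downloads
    return upstream(domain)       # normal lookup for everything else

# stand-in for a real upstream DNS lookup
fake_upstream = lambda d: "198.51.100.7"

print(resolve("ads.example.com", fake_upstream))   # 0.0.0.0
print(resolve("www.example.com", fake_upstream))   # 198.51.100.7
```

Because every device on the network is pointed at this one resolver, the blocking happens once, centrally, with no per-device software.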

Using Pi-hole on a Raspberry Pi

Users configure their router’s DHCP options to force clients to use Pi-hole as their DNS server.

This means websites will load normally but without advertisements; since ads are never downloaded, sites will load faster. Pi-hole also caches DNS responses, so commonly visited websites feel noticeably more responsive.

Pi-hole and Raspberry Pi

The Pi-hole software has very low resource requirements and can even run on a Raspberry Pi Zero W. And despite its name, you can also install Pi-hole on several other Linux distributions. Many users install it on a VM or in a container and let it provide services that way. But since Pi-hole’s resource requirements are so low, many users have found it to be a good use of their older, lower-powered model Raspberry Pis. Simply install Pi-hole, connect the Pi to your router, and begin blocking ads everywhere.

Using Pi-hole on a Raspberry Pi

The Pi-hole web interface allows users to monitor ad-blocking data, to access the query log, and more.

You can also pair Pi-hole with a VPN to get ad blocking via a cellular connection. This will help you with bandwidth limits and data costs, because your phone won’t need to download advertising videos and images.

Install Pi-hole

Pi-hole can be downloaded to your Raspberry Pi via a one-step automated install — just open a terminal window and run the following command:

curl -sSL https://install.pi-hole.net | bash

You can find more information about setting up Pi-hole on your Raspberry Pi on the Pi-hole GitHub repository here.

If you need support with using Pi-hole or want to chat with the Pi-hole community, you can visit their forum here.

If you’d like to support Jacob and the Pi-hole team as they continue to develop their ad blocker, you can sign up as a patron on Patreon, donate directly, or purchase swag, including the Pi-hole case from Pi Supply.


https://singularityhub.com/2018/07/24/dna-computing-gets-a-boost-with-this-machine-learning-hack/

DNA Computing Gets a Boost With This Machine Learning Hack

As the master code of life, DNA can do a lot of things. Inheritance. Gene therapy. Wiping out an entire species. Solving logic problems. Recognizing your sloppy handwriting.

Wait, What?

In a brilliant study published in Nature, a team from Caltech cleverly hacked the properties of DNA, essentially turning it into a molecular artificial neural network.

When challenged with a classic machine learning task—recognizing hand-written numbers—the DNA computer can deftly identify digits from one through nine. The mechanism relies on a particular type of DNA molecule dubbed the “annihilator” (awesome name!), which selects the winning response from a soup of biochemical reactions.

But it’s not just sloppy handwriting. The study represents a quantum leap in the nascent field of DNA computing, allowing the system to recognize far more complex patterns using the same number of molecules. With more tweaking, the molecular neural network could form “memories” of past learning, allowing it to perform different tasks—for example, in medical diagnosis.

“Common medical diagnostics detect the presence of a few biomolecules, for example cholesterol or blood glucose,” said study author Kevin Cherry. “Using more sophisticated biomolecular circuits like ours, diagnostic testing could one day include hundreds of biomolecules, with the analysis and response conducted directly in the molecular environment.”

Neural Networks

To senior author Dr. Lulu Qian, the idea of transforming DNA into biocomputers came from nature.

“Before neuron-based brains evolved, complex biomolecular circuits provided individual cells with the ‘intelligent behavior’ required for survival,” Qian wrote in a 2011 academic paper describing the first DNA-based artificial neural network (ANN) system. By building on the richness of DNA computing, she said, it’s possible to program these molecules with autonomous brain-like behaviors.

Qian was particularly intrigued by the idea of reconstituting ANNs in molecular form. The basis of deep learning, a type of machine learning that’s taken the world by storm, ANNs are learning algorithms inspired by the human brain. Similar to their biological inspirations, ANNs contain layers of “neurons” connected to each other through various strengths—what scientists call “weights.” The weights depend on whether a certain feature is likely present in the pattern it’s trying to recognize.

ANNs are valuable because they are particularly flexible: in number recognition, for example, a trained ANN can generalize the gist of a hand-written digit—say, seven—and use that memory to identify new potential “sevens,” even if the handwriting is incredibly crappy.

Qian’s first attempt at making DNA-based ANNs was a cautious success. It could only recognize very simple patterns, but the system had the potential for scaling up to perform more complex computer vision-like tasks.

This is what Qian and Cherry set out to do in the new study.

Test-Tube Computers

Because DNA computers literally live inside test tubes, the first problem is how to transform images into molecules.

The team started by converting each hand-written digit into a 10-by-10 pixel grid. Each pixel is then represented using a short DNA sequence with roughly 30 letters of A, T, C, and G. In this way, each single digit has its own “mix” of DNA molecules, which can be added together into a test tube.

Because the strands float freely within the test tube, rather than aligning nicely into a grid, the team cleverly used the concentration of each strand to signify its location on the original image grid.
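The encoding described above has a straightforward software analogue. In this hedged sketch, each pixel of the 10-by-10 grid maps to one strand identifier, and a relative concentration stands in for whether that pixel is “on.” The identifiers are invented placeholders for the real ~30-letter A/T/C/G sequences.

```python
# Each pixel of a 10x10 digit image maps to one DNA strand, and the
# strand's relative concentration in the tube encodes whether the
# pixel is on. Strand IDs here are invented stand-ins for sequences.

def image_to_strand_mix(pixels):
    """pixels: 10x10 nested list of 0/1 -> {strand_id: concentration}."""
    mix = {}
    for y, row in enumerate(pixels):
        for x, value in enumerate(row):
            strand_id = f"pixel_{x}_{y}"            # one strand per pixel
            mix[strand_id] = 1.0 if value else 0.0  # relative concentration
    return mix

digit = [[0] * 10 for _ in range(10)]
digit[3][5] = 1                     # turn on the pixel at x=5, y=3
mix = image_to_strand_mix(digit)
print(mix["pixel_5_3"])             # 1.0: this strand is present in the tube
```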

Similar to the input image, the computer itself is also made out of DNA. The team first used a standard computer to train an ANN on the task, resulting in a bunch of “weight matrices,” each representing a particular number.

“We come up with some pattern we want the network to recognize, and find an appropriate set of weights” using a normal computer, explained Cherry.

These sets of weights are then translated into specific mixes of DNA strands to build a network for recognizing a specific number.

“The DNA [weight] molecules store the pattern we want to recognize,” essentially acting as the memory of the computer, explained Cherry. For example, one set of molecules could detect the number seven, whereas another set hunts down six.

To begin the calculations, the team adds the DNA molecules representing an input image together with the DNA computer in the same test tube. Then the magic happens: the image molecules react with the “weight” molecules—As binding to Ts, Cs pairing with Gs—eventually producing a third batch of output DNA strands that fluoresce.

Why does this DNA reaction work as a type of calculation? Remember: if a pixel occurs often within a number, then the DNA strand representing this pixel will be at a higher concentration in the test tube. If the same pixel is also present in the input pattern, then the result is tons of reactions, and correspondingly, lots more glow-in-the-dark output molecules.

In contrast, if the pixel has a low chance of occurring, then the chemical reactions would also languish.

In this way, by tallying up the reactions of every pixel (i.e. every type of DNA strand), the team gets an idea of how similar the input image is to the stored number within the DNA computer. The trick here is a strategy called “winner-takes-all.” A DNA “annihilator” molecule is set free into the test tube, where it tags onto different output molecules and, in the process, turns them into unreactive blobs unable to fluoresce.

“The annihilator quickly eats up all of the competitor molecules until only a single competitor species remains,” explained Cherry. The winner then glows brightly, indicating the neural network’s decision: for example, the input was a six, not a seven.
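The whole readout can be mimicked in ordinary code: each stored pattern scores the input with a weighted sum (the binding reactions), and a winner-takes-all step keeps only the top scorer (the annihilator). The labels and weight vectors below are toy values, not the paper’s trained matrices.

```python
# Software mimic of the DNA network's winner-takes-all readout:
# score each stored pattern by a weighted sum, then keep only the
# strongest output, as the annihilator does chemically.

def winner_takes_all(input_pixels, weight_sets):
    """weight_sets: {label: weight list}; returns the winning label."""
    scores = {
        label: sum(p * w for p, w in zip(input_pixels, weights))
        for label, weights in weight_sets.items()
    }
    # the annihilator consumes the weaker outputs; only one survives
    return max(scores, key=scores.get)

weights = {
    "six":   [1.0, 0.0, 1.0, 1.0],   # toy weights for "six"
    "seven": [0.0, 1.0, 0.0, 1.0],   # toy weights for "seven"
}
print(winner_takes_all([1, 0, 1, 1], weights))   # six
```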

In a first test, the team chucked roughly 36 handwritten digits—recoded into DNA—with two DNA computers into the same test tube. One computer represented the number six, the other seven. The molecular computer correctly identified all of them. In a digital simulation, the team further showed that the system could classify over 14,000 hand-written sixes and sevens with 98 percent accuracy—even when those digits looked significantly different than the memory of that number.

“Some of the patterns that are visually more challenging to recognize are not necessarily more difficult for DNA circuits,” the team said.

Next, they further souped up the system by giving each number a different output color combination: green and yellow for five, or green and red for nine. This allowed the “smart soup” to simultaneously detect two numbers in a single reaction.

Forging Ahead

The team’s DNA neural net is hardly the first DNA computer, but it’s certainly the most intricate.

Until now, the most popular method has been constructing logic gates—AND, OR, NOT, and so on—using a technique called DNA strand displacement. Here, the input DNA binds to a DNA logic gate, pushing off another strand of DNA that can be read as an output.
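In software terms, a strand-displacement gate behaves like a simple conditional: the output strand is released only when the required input strands are present. A toy sketch, with invented strand names:

```python
# Toy model of a strand-displacement AND gate: the output strand is
# displaced (released) only when every required input strand is in
# the tube. Strand names are invented for illustration.

def and_gate(inputs, required=("input_A", "input_B"), output="output_C"):
    """Release `output` only if all required input strands are present."""
    return output if all(r in inputs for r in required) else None

print(and_gate({"input_A", "input_B"}))   # output_C
print(and_gate({"input_A"}))              # None
```

Building a full classifier this way means chaining many such gates, which is why the article calls the approach tedious compared with the network of weighted reactions.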

Although many such gates can be combined into complex circuits, it’s a tedious and inefficient way of building a molecular computer. To match the performance of the team’s DNA ANN, for example, a logic gate-based DNA computer would need more than 23 different gates, making the chemical reactions more prone to errors.

What’s more, the same set of molecular ANNs can be used to compute different image recognition tasks, whereas logic-gate-based DNA computers need a special matrix of molecules for each problem.

Looking ahead, Cherry hopes to one day create a DNA computer that learns on its own, ditching the need for a normal computer to generate the weight matrix. “It’s something that I’m working on,” he said.

“Our system opens up immediate possibilities for embedding learning within autonomous molecular systems, allowing them to ‘adapt their functions on the basis of environmental signals during autonomous operations,’” the team said.

https://wccftech.com/qualcomm-adreno-turbo-takes-on-huawei-gpu-turbo/

Qualcomm Reportedly Preparing Its Adreno Turbo Technology to Take on Huawei’s GPU Turbo

Huawei revealed its new GPU Turbo technology last month during an event in China. The technology boosts smartphone gaming performance while lowering power consumption, resulting in higher frame rates. It will arrive as a software update for many devices, though some, like the Honor 10 GT, already support it out of the box. According to a new report, Qualcomm is now gearing up to launch a similar technology, to be called Adreno Turbo.

Adreno Turbo Slated for an August 2 Announcement to Take on GPU Turbo

Qualcomm China has hinted at an announcement related to mobile gaming on its Weibo page. It teases a new Adreno Turbo technology that could provide a better gaming experience on handsets powered by select Snapdragon processors. The chipset manufacturer might also announce a new Snapdragon SoC at an event in China tomorrow, but that is unconfirmed at this point.


According to MySmartPrice, the Adreno Turbo technology would be a direct competitor to Huawei’s GPU Turbo. Huawei claims that GPU Turbo improves graphics processing efficiency by nearly 60 percent and reduces power consumption by 30 percent during gaming, and that it enables AR and VR applications to run smoothly. Currently, GPU Turbo is not compatible with all smartphone games.

When the GPU Turbo Technology was announced, Huawei stated that it will support games like PUBG Mobile and Mobile Legends: Bang Bang. The company will expand support for more games in the future. In addition to boosting frame rates, the GPU Turbo technology enhances the gaming experience by bringing HDR picture quality as well, and it is highly possible that Adreno Turbo mimics some of the features of GPU Turbo.

If Qualcomm does indeed announce it on August 2, Android OEMs may issue software updates to enable the feature on compatible handsets. After the official announcement regarding the rollout of Qualcomm’s Adreno Turbo technology, we may see an Android OEM announce the first smartphone to ship with the new technology out of the box. Our guess is that companies like Razer, Xiaomi and ASUS will be among the first to release such an update, since they are the only companies to have launched gaming smartphones so far.

https://www.livescience.com/63153-brain-color-distortion-maps.html

How a Map of Your Brain Can Trick Your Brain

All three of these images were generated using the same data. But they don’t tell the same story.

Credit: Chris Holdgraf

Color maps in scientific papers are too colorful, according to data scientists. These figures, they say, can be so vivid that they trick people’s brains into thinking scientific results are more dramatic than they really are.

The colorful figures, illustrations meant to visually communicate data, might be the most compelling thing to look at in a paper full of dense text and tables of data. These images — maps of blood flow in the brain, humidity levels in Great Britain or an ant’s favorite place to munch leaves — just pop out.

That’s a problem.

Here’s one example of a color map of the human brain provided by Chris Holdgraf, a data scientist at the University of California, Berkeley:

Images like this are attractive, Holdgraf told Live Science. But they’re also a problem, because they can trick your brain.

The idea behind a color map is simple. Sometimes, you have multiple kinds of data that you’re trying to represent in a single figure. When you have just two kinds of data, that problem is easy to solve. Just create an x-axis and a y-axis, like so:

If you plot one of the two kinds of data (let’s call it “time”) along the x-axis and the other kind of data (let’s call it “height of rocket”) along the y-axis, you can just put a lot of points on the graph to easily, clearly represent the information. As the rocket climbs over time, the points move higher up the graph.

But sometimes, you have three kinds of information to convey in a graph. A brain scan, for example, might give you a map of a slice of the brain — that’s both your x-axis for horizontal position and y-axis for vertical position — with information about how much blood is flowing through each point in that slice. There’s no room for a 3D z-axis on a flat piece of paper, so researchers typically use color to represent that third type of data. Red might mean “lots of blood flow,” and blue might mean “less blood flow.” It’s a fairly easy kind of visualization to make using standard scientific software.
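The construction is easy to mimic in code without any plotting library: build a 2D grid of values and map each value to a color. The sketch below uses synthetic data and a naive blue-to-red blend purely for illustration; real tools provide carefully designed palettes instead.

```python
# Two spatial axes plus a third quantity encoded as color: the data
# value at each (x, y) position picks a point on a blue-to-red blend.
import numpy as np

x = np.linspace(-2, 2, 201)
y = np.linspace(-2, 2, 201)
xx, yy = np.meshgrid(x, y)
flow = np.exp(-(xx**2 + yy**2))     # synthetic stand-in for "blood flow"

blue = np.array([0, 0, 255])
red = np.array([255, 0, 0])
rgb = (blue + flow[..., None] * (red - blue)).astype(np.uint8)

print(rgb.shape)        # (201, 201, 3): an image whose color is the data
print(rgb[100, 100])    # center pixel, flow = 1.0, renders pure red
```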

A typical figure from a neuroscience paper uses color to represent blood flow changes to different parts of the brain in different circumstances.

Credit: NIMH, Public Domain

The problem, Holdgraf said, is that human brains don’t perceive color as effectively as they perceive positions in space. In a 2015 talk, UC Berkeley data scientists Nathaniel Smith and Stéfan van der Walt explained the problem in detail: If two dots are an inch apart, our brains are usually pretty good at accurately perceiving the distance between the two, no matter where they are in a visualization. So, figures like that climbing rocket graph are pretty easy to read. But color is more complicated. In a rainbow, a shade of orange might be as far from red as it is from yellow, but our brains might perceive the hue as much redder or much more yellow than it really is.

“Your brain perceives color in nonlinear — kind of wacky — ways,” Holdgraf said. “If you’re not careful about the color you choose, then a step from 0 to 0.5 might actually be perceived as a step to 0.3. And then that second step from 0.5 to 1 might actually be perceived as more like 0.8.”
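Holdgraf’s point can be illustrated with a toy “perception” function. The power law below is made up for the example, not a real model of human vision: equal steps in the data come out as unequal perceived steps.

```python
# Toy illustration of nonlinear perception: the data takes two equal
# 0.5-sized steps, but a hypothetical nonlinear response makes the
# first step look much smaller than the second.

def perceived(value, gamma=1.7):
    """Hypothetical perceptual response for a value in [0, 1]."""
    return value ** gamma

data_steps = [0.0, 0.5, 1.0]                         # two equal 0.5 steps
looks_like = [round(perceived(v), 2) for v in data_steps]
print(looks_like)   # [0.0, 0.31, 1.0]: first step looks much smaller
```

A perceptually uniform palette is one whose perceived response is close to a straight line, so equal data steps look equal.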

That’s a problem, Holdgraf said, when you’re using color to represent relationships among precisely collected scientific data points. A visualization might make a discovery look more dramatic than it really is or make small effects look very large.

“I don’t think this is something anyone has done with any kind of bad intent,” he said.

For the most part, he said, people are just using default color sets that come along with scientific software.

But Holdgraf, along with Smith and van der Walt, said that scientists need to shift to color palettes carefully selected to avoid tripping any “perceptual deltas” in the human brain — places where visual science says our color perception is uneven. Such color palettes, he said, are less dramatic-looking. They don’t “pop.” But for most people, they’ll convey a more accurate picture of what data really says.

Chris Holdgraf (@choldgraf) tweeted:

`makeitpop` takes these distortions and (roughly) applies them *to the data itself*. This means that you can visualize what Jet is doing to your perception, but with a nice linear colormap like viridis

To illustrate the point, Holdgraf wrote a short bit of software called “makeitpop” that can reveal how much perceptual deltas distort data visualizations. In the tweet above, the image on the left turns data into color using viridis, a color palette that avoids perceptual deltas. The one in the middle is made using Jet, a common color palette that, due to perceptual deltas, can make data look more dramatic than they really are. The image on the right is the result of using makeitpop on the viridis image, highlighting areas that would get warped using Jet.

He said he hopes the example will help get the word out to scientists about perceptual deltas and how to avoid them. However, he added that it will never be possible to do this perfectly, because not everyone perceives color in exactly the same way.

Holdgraf also said that while he does think this sort of distorted color map is a serious problem, he doesn’t think it’s leading scientists to false conclusions — because no one bases their interpretation of a paper purely on a color map.

“It’s the icing on the cake [of a paper],” he said.

Still, he said, it’s an issue of trying to be as honest and straightforward as possible in scientific research. If scientists want to be as precise and accurate as possible, he said, they shouldn’t be using visualizations that can distort reality.

Originally published on Live Science.

https://www.wired.com/story/google-glass-is-backnow-with-artificial-intelligence/

GOOGLE GLASS IS BACK–NOW WITH ARTIFICIAL INTELLIGENCE

Google stopped selling the consumer version of Glass, shown here, last year, amid privacy concerns.
CHRIS WILLSON/ALAMY

Google Glass lives—and it’s getting smarter.

On Tuesday, Israeli software company Plataine demonstrated a new app for the face-mounted gadget that understands spoken language and offers spoken responses. Plataine’s app is aimed at manufacturing workers. Think of an Amazon Alexa for the factory floor.

The app points to a future where Glass is enhanced with artificial intelligence, making it more functional and easy to use. Plataine, whose clients include GE, Boeing, and Airbus, is also working to add image-recognition capabilities to its app.

The Israeli company showed off its Glass app at a Google conference in San Francisco promoting the company’s cloud computing business. It was built using AI services provided by Google’s cloud division, and with support from the company. Google is betting that charging other companies to access AI technology developed for its own use can help its cloud business draw customers away from rivals Amazon and Microsoft.

Jennifer Bennett, technical director to Google Cloud’s CTO, said adding Google’s cloud services to Glass could help make it a revolutionary tool for workers in situations where a laptop or smartphone would be too awkward.

“Many of you probably remember Google Glass from the consumer days—it’s baaack,” Bennett said, earning warm laughter, before introducing Plataine’s project. “Glass has become a really interesting technology for the enterprise.”

The session came roughly one year after Google abandoned its attempt to sell consumers on Glass and its eye-level camera and display, which proved controversial due to privacy concerns. Instead, Google relaunched the gadget as a tool for businesses called Google Glass Enterprise Edition. Pilot projects have involved Boeing workers using Glass on helicopter production lines, and doctors wearing it in the examining room.

Anat Karni, product lead at Plataine, slid on a black version of Glass Tuesday to demonstrate the app. She showed how the app could tell a worker clocking in for the day about production issues that require urgent attention, and show useful information for resolving problems via Glass’s display.

A worker can also talk to Plataine’s app to get help. Karni showed how a worker walking into a store room could say “Help me select materials.” The app would respond, orally and on the device’s display, with which materials are needed and where they could be found. A worker’s actions could be instantly visible to factory bosses, synced into the software Plataine already provides customers such as Airbus to track production operations.

Plataine built its app by plugging Google’s voice-interface service, Dialogflow, into a chatbot-like assistant it had already built. It got support from Google, and also from software contractor and Google partner Nagarro. Karni credits Google’s technology with an impressive ability to understand how variations in phrasing, or terms such as “yesterday” that can trip up chatbots, translate into a worker’s tasks and needs. “It’s so natural,” she says.

Karni told WIRED that her team is now working with Google Cloud’s AutoML service to add image-recognition capabilities to the app, so it can read barcodes and recognize tools, for example. AutoML, which emerged from Google’s AI research lab, automates some of the work of training a machine learning model, and has become a flagship of Google’s cloud strategy. The company hopes corporate cloud services will become a major source of revenue, arguing that Google’s expertise in machine learning and computing infrastructure can help other businesses. Diane Greene, the division’s leader, said last summer that she hoped to catch up with Amazon, far and away the market leader, by 2022.

Gillian Hayes, a professor who works on human-computer interaction at the University of California, Irvine, says the Plataine project and plugging Google’s AI services into Glass play to the strengths of the controversial hardware. She previously tested the consumer version of Glass as a way to help autistic people navigate social situations.

“Spaces like manufacturing floors, where there’s no social norm saying it’s not OK to use this, are the spaces I think it will do really well,” Hayes says. Improvements to voice interfaces and image recognition since Glass first appeared—and disappeared—could help give the device a second wind. “Image and voice recognition technology getting better will make wearable devices more functional,” Hayes says.

https://www.extremetech.com/extreme/274110-study-suggests-crispr-gene-editing-could-have-unanticipated-side-effects

CRISPR Gene Editing May Have Unanticipated Side Effects

The CRISPR/Cas9 system, typically abbreviated to just CRISPR, has been used to edit genomes with increasing frequency over the past few years. The name refers to specialized stretches of DNA that we can cut using an enzyme (Cas9) that functions like a molecular pair of scissors. The enzyme can be positioned to cut fairly precisely, but the authors of a recent study were concerned that Cas9-driven cutting and repair could be causing damage that previous measurement efforts weren’t detecting. They’ve found some evidence that it can.

The new research shows that CRISPR/Cas9 edits can introduce changes in DNA much farther from the target location than was previously known, and that standard DNA tests do not normally detect this damage. Previous efforts to locate damage from CRISPR edits were conducted relatively close to the original edit and did not find any signs of harm. In some cases, the changes introduced were fairly large, with both deletions and insertions, possibly leading to DNA being switched on or off at inappropriate times as a result of the edits.

And these aren’t the only warning signs being raised about CRISPR: Two studies published earlier this summer also found that editing cells with CRISPR/Cas9 could increase the chance that the cells being altered to treat disease could become cancerous or trigger the development of cancer in other cells.


CRISPR-Cas9 as a Molecular Tool Introduces Targeted Double-Strand DNA Breaks. Image credit: Wikipedia

The abstract of this new study notes:

Here we report significant on-target mutagenesis, such as large deletions and more complex genomic rearrangements at the targeted sites in mouse embryonic stem cells, mouse hematopoietic progenitors and a human differentiated cell line. Using long-read sequencing and long-range PCR genotyping, we show that DNA breaks introduced by single-guide RNA/Cas9 frequently resolved into deletions extending over many kilobases. Furthermore, lesions distal to the cut site and crossover events were identified. The observed genomic damage in mitotically active cells caused by CRISPR–Cas9 editing may have pathogenic consequences.

This might sound like a serious challenge to CRISPR and even a fatal blow to the technique, but there are some caveats to keep in mind. First, CRISPR can be used to make a variety of edits to DNA and it can accomplish those edits using multiple techniques. Even if it turns out that CRISPR/Cas9 has a problem, a different nuclease, Cpf1, has been found, tested, and is already known to have some advantages over Cas9.

There are also meaningful differences in the types of DNA edits being performed with CRISPR. It’s entirely possible we’ll eventually discover certain techniques that rely on particular enzymes work better than others for various types of DNA editing. In fact, we see precisely this type of progression in other contexts in medicine. While this obviously depends on both the medications and conditions in question, many of our disease and disorder treatments are better today because they avoid side effects and outcomes common to earlier protocols. Even surgical techniques have evolved similarly, with various forms of laparoscopic surgery now available where more invasive surgery was once required to treat the same condition.

In short: There’s some additional evidence that certain types of CRISPR editing can create side effects in places we hadn’t previously detected them. We have existing techniques for noting when this type of change occurs (so we don’t have to invent new tests to locate the damage). The techniques and approaches that have produced this damage are not the only ones used for CRISPR-style genome editing. These findings are a good reason for scientists to pay attention to their data, but they don’t invalidate the overall approach.

 

https://electrek.co/2018/07/25/nissan-all-electric-camper-e-nv200-van/

We are starting to see a few companies looking to enter the camper and motorhome segments with all-electric options.

Nissan is now testing the waters by unveiling an all-electric camper based on the e-NV200 van to be released in Spain.

Motorhomes and campers are often associated with freedom. The idea that you can take your entire home on the road and explore the world is extremely appealing to many.

Electric campers would have the same appeal, but they could also push it to a whole new level.

Those vehicles are currently gas-guzzlers and all-electric versions could significantly reduce the cost of operation and the pollution from the segment.

To a certain degree, it could even elevate the level of freedom. With enough range, you could technically drive most of the day and park at a camping spot with service to charge overnight and then get back on the road the next day.

Several companies are starting to tentatively enter the space.

A German company recently unveiled a full-size electric motorhome prototype and Winnebago launched an all-electric RV platform. But neither currently has a good enough powertrain to make those vehicles viable electric motorhomes yet.

Tesla fans are dreaming about a Tesla electric motorhome concept based on the Tesla Semi, but Nissan is now making the idea real with a modified version of the new e-NV200.

The vehicle was unveiled in May, but it flew under the radar because it was only released in Spain:

Nissan unveiled two versions, one based on the NV300 van and another based on the e-NV200.

The e-NV200 recently got a significant upgrade with a new 40 kWh battery pack for more range, but we are still talking about a range of only 124 miles (~200 km) between charges based on the WLTP combined cycle, which is going to limit the use of the camper.

Nonetheless, it could still make sense depending on how people want to use it.

Francesc Corberó, communication director of Nissan Iberia, commented on the launch:

“The new Nissan Camper range will allow the most adventurous to have a balcony with views of the most incredible places in the world and enjoy the essence of traveling with family or friends.”

Customers can configure and order the electric camper with custom features at their local Nissan dealership in Spain.

Depending on the success of the program, Nissan could potentially expand it to other markets.

What do you think? Let us know in the comment section below.

https://www.newscientist.com/article/2174978-we-might-only-see-time-because-we-cant-think-in-quantum-physics/

We might only see time because we can’t think in quantum physics

A glass is much easier to smash than to unsmash

Sohl/Getty

Predicting the future is easy, if you are a physicist. Break a glass, and you can boldly assert that it will fall into a number of shards, assuming you know the initial conditions. Knowing the past is more difficult – you need to store much more information to piece a pile of broken glass back together.

This “causal asymmetry” makes it easier to determine cause and effect and thus place events in order. But it doesn’t exist in the quantum world, say Mile Gu at Nanyang …

https://www.engadget.com/2018/07/25/waymo-grocery-pickup-phoenix/

Waymo partners with Walmart for grocery pick-up in Phoenix

The autonomous cars will give rides to and from Avis locations, too.
JasonDoiy via Getty Images

Walmart’s latest move into tech is a partnership with Waymo. In Phoenix later this week, the pair will begin a pilot program where customers can order groceries on the retailer’s website, get a ride to and from the store in a Waymo car and then snag a discount on their groceries. More than that, Waymo is teaming up with Avis Budget Group to pick up and drop off customers when they need a rental car.

There are a few other deals in place as well. Local chain AutoNation will begin offering Waymo vehicles as customer loaner cars, too. Google’s self-driving wing will also serve as the house car for the Element Hotel in nearby Chandler, dropping “select guests” off at the office and bringing them back to the hotel during their stays.

Waymo says that in its tests thus far, it’s found that customers use its cars mostly for quick errands: trips to the grocery store, a ride to dinner or to the service station. Since autonomous cars are likely going to be prohibitively expensive for a while, this could be the way that Google builds mainstream acceptance.

After all, we’ve gone from being told not to get into cars with strangers as kids, to relying on Lyft and Uber to get around in relatively no time flat. Ride-hailing is too convenient and there’s definite utility in not having to rely on a friend or a taxi that may or may not show up when ordered. The same could happen with autonomous vehicles once people have good experiences with them, repeatedly.