Find Waldo Faster

February 27, 2015 — Vitaliy Kaurov, Technical Communication & Strategy

Martin Handford can spend weeks creating a single Where’s Waldo puzzle, hiding a tiny red-and-white-striped character wearing Lennon glasses and a bobble hat in an ocean of cartoon figures immersed in amusing activities. Finding Waldo is the puzzle’s objective, so hiding him well, perhaps, is even more challenging. Martin once said, “As I work my way through a picture, I add Wally when I come to what I feel is a good place to hide him.” Aware of this, Ben Blatt from Slate magazine wondered if it’s possible “to master Where’s Waldo by mapping Handford’s patterns?” Ben devised a simple trick to speed up a Waldo search. In a sense, it’s the same observation that allowed Jon McLoone to write an algorithm that can beat a human at Rock-Paper-Scissors. As Jon puts it, “we can rely on the fact that humans are not very good at being random.”

Readers with a sharp eye can see that the set of points in the two images below is the same, so the images are just different visualizations of the same data. The data shows the 68 spots where Waldo hides in the special edition of seven classic Waldo books. Big thanks go to Ben Blatt, who sat for three hours in a Barnes & Noble bookstore with a measuring tape, painstakingly constructing this fabulous dataset. Ben’s study had a follow-up by Randal Olson from Michigan State University, who suggested a different approach based on the density of Waldo spots and a shortest-path algorithm. In this post, I will go over both of these approaches and suggest alternatives—these are the final results:

Waldo plot path

Getting the Data

I start by getting coordinates for Waldo’s hideouts from all seven classic puzzles. Ben Blatt indicated where they are on a single image. After importing the data, you will see a schematic of a double-page spread of an open puzzle book with all hideouts marked with red spots. Each spot has text inside labeling the book and the page of the puzzle it belongs to. The text is not important to us and, moreover, needs to be removed in order to obtain the coordinates.

i = Import["https://wolfr.am/3hI4OSh8"]

Waldo's hideout in two-page layouts

With Wolfram Language image processing tools, it’s easy to find the coordinates of the red spots. Some spots overlap, so we need robust segmentation to account for each spot separately. There are different approaches, and the code below is just one of them. I’ve arranged it so you can see the effects, step by step, of applying the image processing functions. Binarize is used to reduce complex RGB image data to a simple matrix of 0s and 1s (a binary image). Then ColorNegate followed by FillingTransform heals the small cracks inside the spots left by the text labels. Erosion with DiskMatrix diminishes and separates overlapping spots. Finally, SelectComponents picks out only the spots, filtering away the other white pixels by specifying the rough number of pixels in a spot.

Waldo input2

Waldo SelectComponents
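For reference, here is a minimal sketch of such a pipeline, assuming the imported image i from above; the erosion radius and component-size bounds are illustrative guesses rather than the post’s original values. In the original notebook the intermediate steps were collected in a list, which is why the next input refers to Last[%]:

steps = FoldList[#2[#1] &, i,
  {Binarize,          (* RGB image -> 0/1 matrix *)
   ColorNegate,       (* spots become white on black *)
   FillingTransform,  (* heal cracks left by the text labels *)
   Function[im, Erosion[im, DiskMatrix[4]]],                       (* separate overlapping spots *)
   Function[im, SelectComponents[im, "Count", 50 < # < 1000 &]]}]  (* keep only spot-sized blobs *)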

Getting spots’ centers in the image coordinate system is done with ComponentMeasurements. We also verify that we have exactly 68 coordinate pairs:

spots = ComponentMeasurements[Last[%], "Centroid"][[All, 2]]; Short[spots]

The precision of our routine is illustrated by placing blue circles at the coordinates we found—they all encompass the hideouts pretty accurately, even the overlapping ones:

Show[i, Graphics[{Blue, Thickness[.005], Circle[#, 15] & /@ spots}]]

The hideout locations are spot on

Note the sizable textbox in the top-left corner, which is also present in the original puzzles, deforming the data a bit. I will now transition from the image coordinate system to the actual book coordinates. A single Waldo puzzle takes a double-page spread measuring 20 by 12.5 inches:

book = spots 20/First[ImageDimensions[i]]

Ben Blatt’s Approach

Quoting Ben: “53 percent of the time Waldo is hiding within one of two 1.5-inch tall bands, one starting three inches from the bottom of the page and another one starting seven inches from the bottom, stretching across the spread…. The probability of any two 1.5-inch bands containing at least 50 percent of all Waldo’s is remarkably slim, less than 0.3 percent. In other words, these findings aren’t a coincidence.” This is a keen observation because it is not obvious from just looking at the scattered dataset. I can illustrate by defining a geometric region equal to a single 1.5-inch strip (note the handy new ImplicitRegion function):

strip[a_] := ImplicitRegion[a < y <= a + 1.5 && 0 < x < 20, {x, y}]

To see which bands contain the most Waldos, I scan the double-page spread vertically in small steps, counting the percentage of points contained in the band. The percentage is shown in the left plot, while the corresponding band position is shown in the right one. Clearly, there are sharp peaks at 3 and 7 inches, which would amount to almost 50% if two identical bands were placed at the maxima.

scan = Table[{n, 100 Count[RegionMember[strip[n], book], True]/68.}, {n, 0, 11, .2}];

Animation of band scanning
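Setting the animation aside, the left-hand scan plot is easy to reproduce statically; here is a minimal sketch using the scan table just computed (the labels are mine):

ListLinePlot[scan, PlotRange -> All,
 AxesLabel -> {"band position (inches)", "% of hideouts"}]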

Widening the bands or adding more of them would raise the percentage, but it would also add more area to cover during the search, and so increase the search time. The total percentage from two bands at the maxima amounts to about 47%, while Ben claimed 53%, a discrepancy I attribute to errors in the quite approximate data acquisition:

Total[100 Count[RegionMember[strip[#], book], True]/68. & /@ {3, 7}]

Musing on Randy Olson’s Approach

Randy had a deeper statistical and algorithmic take on the Waldo data. First he visualized the density of points with a smooth function. The higher the density, the greater the probability is of finding Waldo in the surrounding area. The density plot has a quite irregular pattern, very different from Ben’s two simple bands. The irregular pattern is more precise as a description, but harder to remember as a strategy. Then, Randy “decided to approach this problem as a traveling salesman problem: We need to check every possible location that Waldo could be at while taking as little time as possible. That means we need to cover as much ground as possible without any backtracking.”

I can get a similar plot of the density of points right away with SmoothDensityHistogram. To that, I’ll add two more things. The first is a shortest tour through all the points (i.e., a traveling salesman problem solution) found with FindShortestTour:

tour = FindShortestTour[book]

It lists the order of consecutive locations to visit so the length of the tour is minimal (the first number is the tour’s length). This “tour” is a closed loop and is different from Randy’s open “path” from A to B. If we could remember the tour exactly, then we could always follow it and find Waldo quickly. But it’s too complex, so to better comprehend its shape, we can smooth it out with a spline curve:

order = book[[Last[tour]]];

Shortest Waldo-finding tour smoothed with a spline curve
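A minimal sketch of this visualization, assuming the order variable just computed: BSplineCurve does the smoothing, SplineClosed keeps the loop closed, and the styling is merely illustrative.

Graphics[{PointSize[Medium], Point[book], Red, Thick,
  BSplineCurve[order, SplineClosed -> True]}]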

If you compare the visualization above with the original plain scattered plot of points, you may get the feeling that there are more points in the scattered plot. This is an optical illusion, which, I suspect, is related to the different dimensionality of these two visualizations. Points along a curve are perceived as a one-dimensional object covering much less area than a scatter plot of points perceived as two-dimensional and covering almost all the area.

Such visualizations help to give a first impression of the data, the overall distribution, and patterns:

  1. Background density is a visual guide to remembering where finding Waldo is more probable.
  2. The line through the points is a guide to the optimal trajectory when you scan the puzzle with your eyes.

But these two must be related somehow, right? And indeed, if we make the probability density background “more precise” or “local and sharp,” it will sort of wrap around our tour (see image below). SmoothDensityHistogram is built on SmoothKernelDistribution, which is pretty easy to grasp: it is based on the formula

$\hat{f}(x) = \frac{1}{n h} \sum_{i=1}^{n} k\!\left(\frac{x - x_i}{h}\right)$

where the kernel k(x) can be a function of our choice and h defines the effective “sharpness” or “locality” of the resulting density function. You can see below that by making our distribution sharper by decreasing h, we localize the density more around the shortest tour. I have also chosen a different kernel, the “Cosine” function, and changed several minor visual settings (for the tour, spline, and density) to show flexible options for tuning the overall appearance to anyone’s liking.

SmoothDensityHistogram[book, {1.5, "Cosine"}, ...

Out[14]
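To see the effect of h directly, one can render the same data at several bandwidths; a hedged sketch with illustrative values:

Row[SmoothDensityHistogram[book, {#, "Cosine"},
    PlotLabel -> Row[{"h = ", #}], ImageSize -> 200] & /@ {3, 1.5, .7}]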

By getting the spline from the tour and widening it, we could get closer to the pattern of the density (the regions of thick-enough spline would overlap where the density is higher). And vice versa, by localizing the density kernel, we can get the density to envelope the spline and the tour. It is quite marvelous to see one mathematical structure mimic another!

Well, while nice looking, this image is not terribly useful unless you have a good visual memory. Let’s think of potential improvements. Currently I have a closed loop for the shortest tour, which means going once from the left page to the right and then returning. Could I find a one-way shortest path that traverses the spread once, from left to right? This is what Randy Olson did with a genetic algorithm, and what I will do with a modified FindShortestTour that artificially sets the distance between the start and end points to zero. FindShortestTour is thus forced to draw a link between the start and end points and then traverse all the rest. Finally, the fake zero-length start-end link is removed, and I get an open start-to-end path.

thruway[p_List][k_Integer, m_Integer] := ...
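The definition is abbreviated above; below is a minimal sketch of the trick as just described, not necessarily the post’s exact code. A custom DistanceFunction declares the distance between points k and m to be zero, and the resulting cyclic tour is then cut open at that fake edge:

thruwaySketch[p_List][k_Integer, m_Integer] :=
 Module[{d, ord, pos, path},
  (* zero distance between the chosen start and end points *)
  d[u_, v_] := If[Sort[{u, v}] === Sort[{p[[k]], p[[m]]}], 0.,
    EuclideanDistance[u, v]];
  ord = Last[FindShortestTour[p, DistanceFunction -> d]];
  (* locate the fake zero-length edge among the cyclic pairs *)
  pos = Position[Partition[ord, 2, 1, 1], {k, m} | {m, k}][[1, 1]];
  path = RotateLeft[ord, pos];  (* open path with the fake edge removed *)
  {Total[EuclideanDistance @@@ Partition[p[[path]], 2, 1]], path}]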

I will also get rid of the single outlier point in the top-left corner and sort the points in order of increasing horizontal coordinate. The outlier does not influence the result much, and even when it is the solution, it can be checked easily because the area above the textbox is small.

bookS = Rest[Sort[book]]; tourS = thruway[bookS][1, 67]; orderS = bookS[[Last[tourS]]];

Next I will find a shortest path from the bottom-left to bottom-right corners. And again, I will choose different visual options to provide an alternative appearance.

SmoothDensityHistogram[bookS, {1.5, "Cosine"},  Mesh ...

Out[19]

Randy used a similar but different path to devise a less-detailed schematic trajectory and instructions for eye-scanning the puzzle.

The Grand Unification

When you have a dual-page book spread, isn’t it natural to think in terms of pages? People habitually examine pages separately. Looking at the very first density plot I made, one can see that there is a big density maximum on the left and another one stretched toward the right. Perhaps it’s worth running the same analysis for each page independently. Let’s split our points into two sets belonging to different pages:

pages = GatherBy[bookS, First[#] ...

Out[21]
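The call is truncated above; presumably it splits the points at the middle of the 20-inch spread, along the lines of this minimal reconstruction:

pages = GatherBy[bookS, First[#] > 10 &]  (* x <= 10: left page; x > 10: right page *)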

We walk through the same steps, slightly changing visual details:

porder = #[[thruway[#][1, Length[#]][[2]]]] & /@ pages;

Out[23]

And indeed, a simple pattern emerges. Can you see the following two characters on the left and right pages, respectively: a slanted capital L or math angle “∠” and the Cyrillic “И”?

Row[Style[#, #2, Bold, FontFamily ...

I would suggest the following strategy:

  1. Avoid the very top and very bottom of both pages.
  2. Avoid the top-left corner of the left page.
  3. Start from the left page and walk a capital slanted “L” with your eyes, starting with its bottom and ending up crossing diagonally from the bottom left to top right.
  4. Continue on the right page and walk the capital Cyrillic “И”, starting at the top left and ending at the bottom right via the И-zigzag.
  5. If Waldo is not found, then visit the areas avoided in steps 1 and 2.

These two approaches, density and shortest path, come full circle and lead me back to the original, simpler band idea. Those who prefer something simpler might notice that in the density plot above there are two maxima on each page, along diagonals mirroring each other across the vertical page split. There are quite a few points along the diagonals, and we could attempt to build two bands of the following geometry:

W = 2; S = .85; R = 2; diag = ImplicitRegion[(S x - W + R < y || S (20 - x) - W + R < y) && y <= S x + W + R && y <= S (20 - x) + W + R && 0 < x < 20 && 1.5 < y < 11, {x, y}];

I compare this region to the original Slate design using RegionMeasure. We see that my pattern is 6% larger in area, but it also covers 7.4% more of Waldo’s positions than Blatt’s region, and it provides a simpler and shorter one-way scan of the puzzle instead of walking from left to right twice along two bands.

band = ImplicitRegion[(3 ...

Out[28]
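The definition of band is truncated above; here is a hedged reconstruction of the comparison, rebuilding Blatt’s two 1.5-inch bands from his description:

band = ImplicitRegion[(3 < y <= 4.5 || 7 < y <= 8.5) && 0 < x < 20, {x, y}];
RegionMeasure /@ {band, diag}                          (* compare areas *)
Count[RegionMember[#, book], True] & /@ {band, diag}   (* compare hideouts covered *)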

So, concerning the strategy: imagine two squares inscribed in each page and walk their diagonals, scanning bands about 2.8 inches thick. This strategy not only tells you where to look for Waldo first, it tells you where to look if you fail to find him: along the other two diagonals. This can be seen from our split-page density plot above and from my final visualization below, showing diagonal bands covering all four maxima:

sdhg[d_] := SmoothDensityHistogram[d, MeshShading ...

Out[30]

For the Die-Hard Shortest-Path Buffs

While FindShortestTour can work with large datasets, the way we adapted FindShortestTour (which by default returns a closed “loop” tour) to find a shortest open path between two given points is a bit dodgy. The traveling salesman problem is NP-complete; thus, it is possible that the worst-case running time of any algorithm solving it grows superpolynomially (even exponentially) with the number of points. This is why FindShortestTour uses shortcuts to find an approximate solution. For a sufficiently large number of points, it is possible for FindShortestTour to not actually give the shortest tour. Because the algorithm takes shortcuts, it is not even guaranteed to find the zero-distance edge that I artificially constructed. Hence, the algorithm could return an approximate shortest path not passing through the zero-distance edge, and thus not going from the given start point to the given end point. Our dataset is small, so it worked fine. But what should we do in general, for a larger set of points? We can force FindShortestTour to always work properly by considering all Waldo positions as vertices of a Graph.

In general, FindShortestTour on a set of points in n-dimensional space essentially finds the shortest tour in a CompleteGraph. Now if I want such a path with the additional constraint of given start and end points, I can augment the graph with an extra vertex joined only to the start and end points in question. The distance to the extra vertex does not really matter. FindShortestTour will be forced to pass through the start-extra-end section. This works with any connected graph, not necessarily a complete graph, as shown below:

Augmented FindShortestTour graph

After finding the tour on the augmented graph, I remove the extra vertex and its edges, and so the shortest tour disconnects and goes from the start to end points.

I will build a symmetric WeightedAdjacencyMatrix by computing the EuclideanDistance between every two points. Then I’ll simply build a WeightedAdjacencyGraph based on the matrix and add an extra vertex with fake unit distances. Before I start, here is a side note to show how patterns in data can appear even in such an abstract object as a WeightedAdjacencyMatrix. The old and new (sorted) orders of points look drastically different, while the matrix in both cases has the same properties and would generate the same graph.

ArrayPlot[Outer[EuclideanDistance, #, #, 1], ColorFunction...

Out[31]

On the left, points are unsorted, and on the right, points are sorted according to the shortest tour. The brighter colors correspond to pairs of the most remote points. Now let’s define the whole function that will return and visualize a shortest tour based on a network approach:

shortestTourPath[pts_, s_, t_] := ...
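The definition is abbreviated above; here is a minimal sketch of the graph construction it describes, without the post’s visualization, and assuming vertices are numbered by their position in pts:

shortestTourPathSketch[pts_, s_, t_] :=
 Module[{n = Length[pts], w, ord, cut},
  w = Outer[EuclideanDistance, pts, pts, 1];    (* pairwise distances *)
  w = ReplacePart[w, {i_, i_} :> Infinity];     (* Infinity = absent edge; no self-loops *)
  w = ArrayPad[w, {{0, 1}, {0, 1}}, Infinity];  (* extra vertex n + 1, initially isolated *)
  w[[n + 1, s]] = w[[s, n + 1]] = 1;            (* fake unit-distance links *)
  w[[n + 1, t]] = w[[t, n + 1]] = 1;
  ord = Last[FindShortestTour[WeightedAdjacencyGraph[w]]];
  cut = Position[ord, n + 1][[1, 1]];
  DeleteCases[RotateLeft[ord, cut], n + 1]]     (* open path between s and t *)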

Applying this to the Waldo dataset, we get a beautiful short path visual through a complete graph:

shortestTourPath[orderS, 1, 67]

Short path visual through a complete graph

Out[33]

By the way, the latest Wolfram Language release has a state-of-the-art FindShortestTour algorithm. To illustrate, I’ll remind you of a famous art form in which a drawing is made of a single uninterrupted line, or a sculpture from a single piece of wire. The oldest examples I know are simultaneously the largest ones—the mysterious Nazca Lines of Peru, which can measure hundreds of feet in span. Here is, for instance, an aerial image of a 310-foot-long hummingbird (the Nazca Lines cannot be perceived from the ground—they are that large). Adjacent to it is a one-line drawing by Pablo Picasso.

Nazca Lines: aerial image of a 310-foot-hummingbird drawingA single-line drawing by Picasso

Picasso’s art is more complex, but it’s easy to guess that humans have a harder and harder time drawing a subject as the line gets longer and more intricate. As it turns out, this is exactly where FindShortestTour excels. Here is a portrait computed by FindShortestTour, spanning a single line through more than 30,000 points to redraw the Mona Lisa. The idea is quite simple: an image is reduced to only two colors with ColorQuantize, and then FindShortestTour draws a single line through the coordinates of the dark pixels only.

So why isn’t it simply a chaos of lines rather than a recognizable image? Because in a shortest tour, connecting closer neighbors is often less costly. Thus, in the Mona Lisa one-liner, there are no segments spanning across large white spaces.
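For the curious, here is a minimal sketch of that recipe, assuming a hypothetical input file portrait.png; Binarize stands in for the two-color ColorQuantize step, and the image is shrunk so that FindShortestTour stays fast:

img = ImageResize[Import["portrait.png"], 150];  (* hypothetical portrait image *)
pts = PixelValuePositions[Binarize[img], 0];     (* coordinates of the dark pixels *)
Graphics[Line[pts[[Last[FindShortestTour[pts]]]]]]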

Concluding Remarks

My gratitude goes to Martin Handford, Ben Blatt, and Randy Olson, who inspired me to explore the wonderful Waldo patterns, and also to the many creative people at Wolfram Research who helped me along the way. Both Ben and Randy claimed to verify the efficiency of their strategies by actually solving the puzzles and measuring the time. I do not have the books, but I hope to do this in the future. I also believe we would need many people for consistent benchmarks, especially accounting for the different ways people perceive visually and work through riddles. And most of all, I am certain we should not take this too seriously.

We did confirm that humans are bad at being random. But that is not a bad thing. One who is good at exhibiting patterns is likely apt at detecting them. I would speculate further and say that perhaps a sense of deviation from randomness is at the root of our intelligence and perception of beauty.


A ‘breakthrough’ in rechargeable batteries for electronic devices and electric vehicles

February 26, 2015

Researchers from Singapore’s Institute of Bioengineering and Nanotechnology (IBN) of A*STAR and Quebec’s IREQ (Hydro-Québec’s research institute) have synthesized a new material that they say could more than double the energy capacity of lithium-ion batteries, allowing for longer-lasting rechargeable batteries for electric vehicles and mobile devices.

The new material for battery cathodes (the positive battery pole) is based on a “lithium orthosilicate-related” compound, Li2MnSiO4, combining lithium, manganese, silicon, and oxygen, which the researchers found superior to conventional phosphate-based cathodes. They report a high initial charging capacity of 335 mAh/g (milliampere-hours per gram) in the journal Nano Energy.

“IBN researchers have successfully achieved simultaneous control of the phase purity and nanostructure of Li2MnSiO4 for the first time,” said Professor Jackie Y. Ying, IBN Executive Director. “This novel synthetic approach would allow us to move closer to attaining the ultrahigh theoretical capacity of silicate-based cathodes for battery applications.”

The researchers plan to further enhance their new cathode materials to create high-capacity lithium-ion batteries for commercialization.

Meanwhile, there’s Tesla Motors, which plans to develop the “Tesla home battery, or consumer battery, that will be for use in people’s houses or businesses” — apparently to store excess energy generated by solar cells — Elon Musk revealed in a March 11 investors conference call. “We have the design done and it should start going into production probably in about six months or so.”

Tesla Motors is currently constructing the “Tesla Gigafactory,” the first massive lithium-ion battery manufacturing and reprocessing facility, in Nevada. It could churn out a total of 35 gigawatt-hours of lithium-ion battery packs per year, Transport Evolved reports.

 

http://www.kurzweilai.net/a-breakthrough-in-rechargeable-batteries-for-electronic-devices-and-electric-vehicles

Meet the new MagPi

Some of you may have sniffed this in the wind: there have been some changes at The MagPi, the community Raspberry Pi magazine. The MagPi has been run by volunteers, with no input from the Foundation, for the last three years. Ash Stone, Will Bell, Ian MacAlpine and Aaron Shaw, who formed the core editorial team, approached us a few months ago to ask if we could help with what had become a massive monthly task; especially given that half the team has recently changed jobs or moved overseas.

We had a series of discussions, which have resulted in the relaunch of the MagPi you see today. Over the last few months we’ve been working on moving the magazine in-house here at the Foundation. There’s a lot that’s not changing: The MagPi is still your community magazine; it’s still (and always) going to be available as a free PDF download (CC BY-NC-SA 3.0); it’s still going to be full of content written by you, the community.

We don’t make any money out of doing this. Even if in the future we make physical copies available in shops, we don’t expect to break even on the magazine; but we think that offline resources like this are incredibly important for the community and aid learning, so we’re very happy to be investing in it.

Russell Barnes, who has ten years of experience editing technology magazines, has joined us as Managing Editor, and is heading up the magazine. He’s done an incredible job over the last couple of months, and I’m loving working with him. Russell says:

I’m really excited to be part of The MagPi magazine.

Like all great Raspberry Pi projects, The MagPi was created by a band of enthusiasts that met on the Raspberry Pi forum. They wanted to make a magazine for fellow geeks, and they well and truly succeeded. 

It might look a bit different, but the new MagPi is still very much a magazine for and by the Raspberry Pi community. It’s also still freely available under a Creative Commons license, so you can download the PDF edition free every issue to share and remix.

The MagPi is now a whopping 70 pages and includes a mix of news, reviews, features and tutorials for enthusiasts of all ages. Issue 31 is just a taste of what we’ve got in store. Over the coming months we’ll be showing you how the Raspberry Pi can power robots, fly to the edge of space and even cross the Atlantic!

The biggest thanks, of course, has to go to Ash, Will, Ian, Aaron and everybody else – there are dozens of you – who has worked on The MagPi since the beginning. You’ve made something absolutely remarkable, and we promise to look after The MagPi just as well as you have done.

So – want to see the new issue? Here it is! Click to find a download page.


 

http://www.raspberrypi.org/all-change-meet-the-new-magpi/

 

The Attention Machine

A new brain-scanning technique could change the way scientists think about human focus.

Human attention isn’t stable, ever, and it costs us: lives lost when drivers space out, billions of dollars wasted on inefficient work, and mental disorders that hijack focus. Much of the time, people don’t realize they’ve stopped paying attention until it’s too late. This “flight of the mind,” as Virginia Woolf called it, is often beyond conscious control.

So researchers at Princeton set out to build a tool that could show people what their brains are doing in real time, and signal the moments when their minds begin to wander. And they’ve largely succeeded, a paper published today in the journal Nature Neuroscience reports. The scientists who invented this attention machine, led by professor Nick Turk-Browne, are calling it a “mind booster.” It could, they say, change the way we think about paying attention—and even introduce new ways of treating illnesses like depression.

Here’s how the brain decoder works: You lie down in a functional magnetic resonance imaging machine (fMRI)—similar to the MRI machines used to diagnose diseases—which lets scientists track brain activity. Once you’re in the scanner, you watch a series of pictures and press a button when you see certain targets. The task is like a video game—the dullest video game in the world, really, which is the point. You see a face, overlaid atop an image of a landscape. Your job is to press a button if the face is female, as it is 90 percent of the time, but not if it’s male. And ignore the landscape. (There’s also a reverse task, in which you’re asked to judge whether the scene is outside or inside, and ignore the faces.)


To gauge attention from the brain, the researchers used a learning algorithm like the one Facebook uses to recognize friends’ photos. The algorithm can discern “Your Brain On Faces” versus “Your Brain On Scenes.” Whenever you start spacing out, it detects more “scene” than “face” in your brain signal, and tells the program to make the faces you are watching grow dimmer. In turn, you have to focus harder to figure out what you’re seeing, and to succeed at the “game.” In the Princeton face-scene game, college students made errors 30 percent of the time.

If this were a test, they would have gotten a D.


“Internal states are kind of ineffable,” says Turk-Browne, an associate professor of psychology at the Princeton Neuroscience Institute. “You may not know when you’re in a good or bad state. We wanted to see: If we give people feedback before they make mistakes, can they learn to be more sensitive to their own internal states?”

It turns out they can, Turk-Browne says. The key is that, for some subjects, the pictures were controlled not by their own brains, but by someone else’s: meaningless jitter. Of the 16 subjects who got their own brain feedback, 11 said they felt they were making the pictures clearer by focusing, as opposed to four of 16 who watched the placebo feedback. What the scientists found is that only people whose own brains drove the images’ dimming improved their ability to focus. Paying attention, in other words, is like learning basketball or French: Good old-fashioned practice matters.

“I think what’s exciting about this finding,” explains Turk-Browne, “is the idea that certain aspects of cognition like attention are only partly consciously accessible. So, if we can directly access people’s mental states with real time fMRI, we can give them more information than they could get from their own mind.”

* * *

Neuroscientists have been reading brain patterns with computer programs like this for just over a decade. Machine-learning algorithms, like the ones Google and Facebook use to recognize everything online, can hack the brain’s code, too: essentially software for reading brain scans. Given samples of neural patterns—your brain imagining faces, say, versus your brain picturing places—a decoder is trained to tell whether you are remembering a face (Jennifer Aniston, President Obama) or a location (the Hollywood sign, the White House). A prior study by researchers at the memory lab of professor Ken Norman, a co-developer of the attention tool, read out these categories from people’s brains as they freely recalled pictures they had studied earlier. Similar work has “decoded” what people see, attend to, learn, remember falsely, and dream. What’s new and remarkable now is how fast neural decoding is happening. Machines today can harness brain activity to drive what a person sees in real time.

“The idea that we could tell anything about a person’s thoughts from a single brain snapshot was such a rush,” Norman recalls of the early days, over a decade ago. “Certainly the kinds of decoding we are doing now can be done much faster.”

Here is how Princeton’s current scanner sees a human brain: First, it divides a brain image into around 40,000 cubes, called voxels, or 3-D pixels. This basic unit of fMRI is a 3 millimeter by 3 millimeter cube of brain. So, the neural pattern representing any mental state—from how you feel when you smell your wife’s perfume to suicidal despair—is represented by this matrix. The same neural code for, say, Scarlett Johansson, will represent her in your memory, or as you talk to her on the phone, or in your dreams. The decoding approach, first pioneered in 2001 by the neuroscientist James Haxby and colleagues at Princeton, is known technically as “multi-voxel pattern analysis,” or MVPA. This “decoding” is distinct from the more common, less sophisticated form of fMRI analysis that gets a lot of attention in the media, the kind that shows what parts of the brain “light up” when a person does a task, relative to a control. “Though fMRI is not very cheap to use, there may be a certain advantage of neurofeedback training, compared to pure behavioral training,” suggests Kazuhisa Shibata, an assistant professor at Brown University, “if this work is shown to generalize to other tasks or domains.”

That is a big if. One caveat to the neurofeedback trend is that many “brain-training” tasks, including popular corporate games like Lumosity, which promise to improve brain function, are roundly criticized by neuroscientists: People trained on them often only improve at the games themselves. They don’t actually get better at paying attention, remembering things, or controlling mood more generally. As Johns Hopkins neuroscientist and memory expert David Linden points out in his recent book, Touch, physical exercise is one of the few interventions shown to improve general cognition reliably, far better than most “brain games.” So neurofeedback has a high bar to clear. That said, Shibata’s work on vision, one of few other successful examples of real-time fMRI, showed visual learning can be driven by brain feedback.

Other experts note the Princeton team’s technical advance, but with some skepticism. “The setup for monitoring attentional states is impressive,” says Yukiyasu Kamitani, a pioneer of neural decoding at ATR Computational Neuroscience Labs and professor at Nara Institute of Science and Technology, “although the behavioral effects of neurofeedback they found are marginal.”

Let’s not get carried away just yet, in other words. But as the neurofeedback technique improves, it is likely to become widely used. When effective, the potential to link brain patterns directly to behavior is unprecedented for human neuroscience.

Neurofeedback training could work the brain almost as muscles are worked in physical therapy, as Shibata and his Kyoto colleagues published in a 2011 Science study. The process, which the authors called “inception” in homage to the 2010 film about dreams implanted in people’s brains, made a big splash when it came out. The only instruction inside the fMRI? “Somehow regulate activity in the posterior part of your brain to make the solid green disc… as large as possible.” Without conscious knowledge of what they were learning, subjects managed to make the green disc grow. Shibata trained a decoder to work like the “faces” versus “landscapes” experiment, only he used three different orientations of line gratings for images. Then, while people were watching the green disc, he “rewarded” them by making the disc grow when their brains responded to one of the three patterns of lines. In turn, they became better at seeing the patterns that they associated with the green disc growing.

As Turk-Browne points out, this sort of learning is often unconscious. Which is why some scientists believe tools like the attention machine at Princeton may soon help not only to better understand when the brain goes wrong, but even to treat mental illness.

If you’ve ever known someone with ADHD or depression, you know how these disorders affect attention, holding hostage the senses, focusing them relentlessly on gloomy perspectives. Depression is especially pernicious: My boss frowned at me; my girlfriend dissed my cooking; nobody is talking to me at this party, I’m so boring. The Princeton group, in collaboration with the University of Texas, Austin, hopes to leverage its mental prosthetic to curb this negative attention bias. Instead of noticing the (perhaps imagined) frown on someone’s face, the tool might train depressed brains to focus on the information they are being told.

“Why do some people recover from sad mood, while others stay stuck for months or years?” asks Chris Beevers, a professor of psychology at the University of Texas, Austin, and one of the co-authors of this pilot work. What interests him and his colleagues about the attention tool is its potential to “target mechanisms that maintain sad mood, and reverse them,” a trend he calls precision medicine. “From the clinician’s perspective, we’d like to tailor treatment to an individual’s neural function: not treating every depressed patient the same.”

Today, mental illness is usually treated in two ways: drugs and behavioral therapy. Only around 50 percent of depression patients respond to any drug, according to the National Institute of Mental Health. The nearly 20 percent of Americans with mental disorders, and the roughly half who will experience one in their lifetimes, are stuck with checklists—How anxious are you, on a scale of 1 to 10? Are you hearing voices? How’s your sleep?—when what they need, some scientists believe, is direct access to the brain. Psychologists like Beevers envision a future in which patients would be evaluated through quantitative tests of traits like memory and attention bias, to determine symptoms to target and tailor treatment for each patient’s needs. Those who have “difficulty disengaging from emotional content,” as Beevers puts it, may be good candidates for neurofeedback training.

This sort of training has its roots in today’s talk therapy. People with anxiety are taught to identify feelings that may spiral out of control. But as much as cognitive-behavioral therapy trains the conscious mind to catch rumination, compulsion or panic, and nip them in the bud, other emotional tendencies are completely outside of deliberate control—habits of the brain. So the Princeton-Austin team is using real-time fMRI to rein in the brain’s biases. Depressed subjects are shown a collection of faces, some sad, overlaid on scenes they are told to judge: outdoors or inside? When the machine detects that the viewer is focusing more on faces than scenes, the sad face grows clearer, the scene harder to see. This prompts self-correction by focusing on the scene instead. Over time, the theory goes, subjects get better at not being drawn in by sad faces, at focusing on the task at hand. The hope is that whatever in a depressed person’s brain draws her toward sad things may gradually learn to regulate itself, the researchers say. That’s the hope, anyway. There’s still the question of whether such therapies could treat depression broadly—or, like brain games, just teach people how to excel at the treatment exercise.

The depression research is still ongoing—the authors stress the need for many more subjects and controls—but data reported at November’s Society For Neuroscience conference offered a promising proof of concept. The future of this work, in any case, is provocative to imagine.

“We still haven’t plumbed the depths of what information can be mined from fMRI,” the memory researcher Norman says. “We’re over the honeymoon period, but we’re still finding ways to squeeze more information out of the signal. Now we can pick up on not just ‘How awake are you?’ but ‘What plans are in your head?,’ ‘What are you attending to?’ There’s never been a technology that allows us to get such

http://www.theatlantic.com/technology/archive/2015/02/the-attention-machine/385284/

Why reading and writing on paper can be better for your brain

Some tests show that reading from a hard copy allows better concentration, while taking longhand notes versus typing onto laptops increases conceptual understanding and retention
My son is 18 months old, and I’ve been reading books with him since he was born. I say “reading”, but I really mean “looking at” – not to mention grasping, dropping, throwing, cuddling, chewing, and everything else a tiny human being likes to do. Over the last six months, though, he has begun not simply to look but also to recognise a few letters and numbers. He calls a capital Y a “yak” after a picture on the door of his room; a capital H is “hedgehog”; a capital K, “kangaroo”; and so on.

Reading, unlike speaking, is a young activity in evolutionary terms. Humans have been speaking in some form for hundreds of thousands of years; we are born with the ability to acquire speech etched into our neurones. The earliest writing, however, emerged only 6,000 years ago, and every act of reading remains a version of what my son is learning: identifying the special species of physical objects known as letters and words, using much the same neural circuits as we use to identify trees, cars, animals and telephone boxes.

It’s not only words and letters that we process as objects. Texts themselves, so far as our brains are concerned, are physical landscapes. So it shouldn’t be surprising that we respond differently to words printed on a page compared to words appearing on a screen; or that the key to understanding these differences lies in the geography of words in the world.

For her new book, Words Onscreen: The Fate of Reading in a Digital World, linguistics professor Naomi Baron conducted a survey of reading preferences among over 300 university students across the US, Japan, Slovakia and Germany. When given a choice between media ranging from printouts to smartphones, laptops, e-readers and desktops, 92% of respondents replied that it was hard copy that best allowed them to concentrate.

This isn’t a result likely to surprise many editors, or anyone else who works closely with text. While writing this article, I gathered my thoughts through a version of the same principle: having collated my notes onscreen, I printed said notes, scribbled all over the resulting printout, argued with myself in the margins, placed exclamation marks next to key points, spread out the scrawled result – and from this landscape hewed a (hopefully) coherent argument.

What exactly was going on here? Age and habit played their part. But there is also a growing scientific recognition that many of a screen’s unrivalled assets – search, boundless and bottomless capacity, links and leaps and seamless navigation – are either unhelpful or downright destructive when it comes to certain kinds of reading and writing.

Across three experiments in 2013, researchers Pam Mueller and Daniel Oppenheimer compared the effectiveness of students taking longhand notes versus typing onto laptops. Their conclusion: the relative slowness of writing by hand demands heavier “mental lifting”, forcing students to summarise rather than to quote verbatim – in turn tending to increase conceptual understanding, application and retention.

In other words, friction is good – at least so far as the remembering brain is concerned. Moreover, the textured variety of physical writing can itself be significant. In a 2012 study at Indiana University, psychologist Karin James tested five-year-old children who did not yet know how to read or write by asking them to reproduce a letter or shape in one of three ways: typed onto a computer, drawn onto a blank sheet, or traced over a dotted outline. When the children were drawing freehand, an MRI scan during the test showed activation across areas of the brain associated in adults with reading and writing. The other two methods showed no such activation.

Similar effects have been found in other tests, suggesting not only a close link between reading and writing, but that the experience of reading itself differs between letters learned through handwriting and letters learned through typing. Add to this the help that the physical geography of a printed page or the heft of a book can provide to memory, and you’ve got a conclusion neatly matching our embodied natures: the varied, demanding, motor-skill-activating physicality of objects tends to light up our brains brighter than the placeless, weightless scrolling of words on screens.

In many ways, this is an unfair result, effectively comparing print at its best to digital at its worst. Spreading my scrawled-upon printouts across a desk, I’m not just accessing data; I’m reviewing the idiosyncratic geography of something I created, carried and adorned. But I researched my piece online, I’m going to type it up onscreen, and my readers will enjoy an onscreen environment expressly designed to gift resonance: a geography, a context. Screens are at their worst when they ape and mourn paper. At their best, they’re something free to engage and activate our wondering minds in ways undreamt of a century ago.

Above all, it seems to me, we must abandon the notion that there is only one way of reading, or that technology and paper are engaged in some implacable war. We’re lucky enough to have both growing self-knowledge and an opportunity to make our options as fit for purpose as possible – as slippery and searchable or slow with friction as the occasion demands.

I can’t imagine teaching my son to read in a house without any physical books, pens or paper. But I can’t imagine denying him the limitless words and worlds a screen can bring to him either. I hope I can help him learn to make the most of both – and to type/copy/paste/sketch/scribble precisely as much as he needs to make each idea his own.

http://www.theguardian.com/technology/2015/feb/23/reading-writing-on-paper-better-for-brain-concentration

Create new apps for Pebble Time

Thursday, Pebble released a preview of its software development kit for its new smartwatch platform and Pebble Time device.

With SDK 3.0, the company touts the ease with which developers will be able to create new apps for Pebble Time, or let older apps take advantage of the new color screen. The preview also features documentation for the new Timeline APIs, an emulator for testing apps, and a migration guide to help developers in the process.

See also: Hear From Pebble’s Eric Migicovsky At Wearable World Congress

The kit also offers a few more details of what app makers and consumers can expect with the new software and hardware.

Time To Upgrade

The original Pebble and the new Time share many of the same fundamental specifications, which should help minimize complication. For instance, both devices feature a 144 x 168 display resolution, four buttons—with three on the right and one on the left—and the same sensors for the accelerometer and compass. So there’s no need for developers to remap the inputs in their apps or adapt to a different screen size.

While both Pebble and Pebble Time have the same number of buttons and internal sensors, the new device will feature a new microphone for developers to play with.

 As for differences, the new watch will come bearing several. Time’s processor boasts a higher CPU frequency, 100 MHz compared to the original’s 64 MHz, which could offer snappier performance. The main hardware attractions, of course, are Time’s 64-color e-paper display and microphone, which the black-and-white previous model didn’t have.

The new developer kit also raised the limit on app sizes, more than doubling the limit in the old version, from 24k to 64k. The maximum cap on related resources jumped too, going from 96k to 256k.

See also: Meet The New Pebble Time—Though Getting One Will Take … Time

Pebble’s desire to reduce complexity and offer improvements seems evident in these changes. However, they also suggest that older hardware may not support the new and beefier Time apps so well.

Moving Forward, But Looking Back

Pebble offers a few resources to help orient app creators, including an emulator—which can be handy for testing, considering no one outside of the company actually has a Pebble Time device.

A company spokesperson explained that “the SDK now includes an entire emulator (in the cloud or on your local machine) so you can try out your apps before you get your Pebble Time.”

The kit also features a set of developer guides, including a “migration guide” (for updating old apps) and a backwards-compatibility guide. The latter covers tools in SDK 3.0 that let developers write or make changes once, and then compile two separate versions of the app tailored for each device: “By catering for both cases, you can ensure your app will run and look good on both platforms with minimal effort,” the guide reads. “This avoids the need to maintain two Pebble projects for one app.”

Apps relying on PebbleKit Android “will need to be re-compiled in Android Studio (or similar) with the PebbleKit 3.0 library,” but developers don’t have to make changes to their code. PebbleKit iOS apps won’t have to be re-compiled.

A diagram of how Timeline will work with apps on the new Pebble Time

The other key component in the SDK is the Timeline guide, which explains how apps will work with Pebble’s new chronological structure for app data. The main idea involves putting data from multiple apps into one easily navigable place. In this context, developers will be able to “pin” certain types of data to this construct.

Pebble app developers may be wise to jump on these tools quickly, to make sure their apps are ready when Pebble Time ships. The device may become the company’s most popular yet—its new Kickstarter project has already exceeded its first record-breaking campaign, beating that $10.3 million figure. It’s now on track to become another record-breaker.

See also: Pebble Time Hits $1M On Kickstarter In Under An Hour

The current most-funded Kickstarter project set a benchmark of $13.3 million. As of this writing, Pebble Time has nabbed nearly 50,000 backers who have pledged more than $10.5 million in two days, with 29 more days to go. After the campaign closes, the first units will ship near the end of May to those Kickstarter backers.

In other words, it’s time to get those apps ready.

Images courtesy of Pebble

http://readwrite.com/2015/02/26/pebble-time-sdk-preview-now-available

 

Tesla tops US best car list again

A Model S electric vehicle at a supercharger station in New Jersey, US on December 11, 2014 Photo: CFP
Consumer Reports named the luxury electric Tesla S its top car for the second straight year, calling the market-shaking sedan a “technological tour de force.”

The annual top-10 ranking, based on independent road performance, reliability and crash tests, also gave top category honors to three Subaru models, a landmark sweep for the small Japanese maker.

But the Japanese overall won only six slots, their lowest tally since the influential consumer ratings magazine began the list 19 years ago.

Instead, three American brands – the Tesla, the Buick Regal and the Chevrolet Impala – held their own in the top 10.

“Detroit vehicles are breaking through in new categories,” said Mark Rechtin of Consumer Reports’ car ratings team.

“Many have come a long way in performance, technology, and improved reliability.”

“These are the cars that ignite the gasoline in our veins. That we trust. Respect. And love,” the magazine said.

“They also happen to score high in our reliability ratings and shine in automotive crash tests.”

The $80,000-plus plug-in Tesla again captured the magazine’s fancy as its overall top pick, and not just for the car’s market-beating 265 mile (426 kilometer) range without a charge.

Consumer Reports praised the ability to update the Tesla’s software over the Internet, and the fact that the company surmounted early technical problems, including a handful of fires started by objects on the road surface kicking up into its underside battery pack.

With those problems aside, Consumer Reports lauded the Tesla S’ “magnificence and sheer technological arrogance.”

Subaru, whose all-wheel-drive models are popular with both sport driving fans and outdoor enthusiasts, led three of the 10 categories.

The Impreza led the compact car group as a “strong value,” and the Legacy was the best mid-sized sedan: it “exceeds those drab, rental-car expectations… also has the best ride among its peers.”

And the Forester grabbed one of the hottest market segments, the small sport-utility vehicles.

“Subaru has nailed the recipe of combining practicality, safety, fuel economy, value, and interior accommodations,” said the consumer ratings magazine.

The Audi A6 came in tops for a luxury car, and, unsurprisingly, on the green car side Toyota’s hybrid Prius took the honors for the 12th straight year.

Toyota’s Highlander was best mid-sized SUV and the Honda Odyssey the winner among minivans.

But the surprise was the inclusion of the Chevrolet Impala as the best large car, beating out the Toyota Avalon and Lexus ES 350.

Only slightly less surprising was the Buick Regal as the leading sports sedan. It is better known elsewhere as the Opel Insignia, with its German roots emphasized by Consumer Reports.

“Close your eyes, and you’ll think you’re driving an Audi – a very good Audi at that,” it gushed.

http://www.globaltimes.cn/content/909234.shtml

Finished Apple Watch Expected to be Showcased at “Spring Forward” Mar. 9 Event

Jason Mick (Blog) – February 26, 2015 12:17 PM

Apple may also add Broadwell chip options to the MacBook Pro and Air lines, but the star of the show is expected to be the Apple Watch

Apple, Inc. (AAPL) has sent out invites to exclusive media and developer partners for an event to be held on March 9. Like any Apple event, there’s a great deal of hype and speculation about what product launches, demos, or refreshes might be on the agenda.

The event is titled “Spring Forward”, an homage to the Daylight Saving Time change that occurs in the month of March. The chronological reference, combined with Apple CEO Timothy Cook’s assertion that the upcoming Apple Watch wearable would ship to customers in April 2015, strongly suggests that the event will cover the Apple Watch.

The Apple Watch was first demoed at a highly anticipated event last September, but that was just a tease of Apple’s general plan for the device and a pitch to bring developers onboard to create apps for it. At the new event, Apple will likely fill in the blanks in terms of iOS companion features (to be included in the iOS 8.2 or possibly 8.3 updates) and details of the watch’s functionality. A finished device spec and the price points for the various models — still unknown — will also likely be revealed. Apple’s watch is anticipated to be one of the most expensive smartwatches on the market, with an introductory price of $349 USD and luxury models priced substantially higher than that. Apple’s smartwatch also has reportedly been forced to scale back its scope, dropping sophisticated but unreliable health sensors.

Apple watch pair

It is also coming late, with Android wearables already enjoying an established sales stream. The predecessors to the current crop of devices on Google Inc.’s (GOOG) Android Wear watch platform began to trickle out back in 2013, and their successors saw sales slowly heat up in 2014.

That said, Apple’s enviable marketing machine could sell virtually anything to customers.  When you add in the utility that Apple Pay in a watch form factor would provide, Apple shouldn’t have trouble quickly moving up the ranks of smartwatch sellers.

MacBook pro Broadwell
The MacBook Pro & Air notebook lines could get a boost from Intel’s Broadwell chips at the event, as well.


Apple’s MacBook Air and Pro notebook lines are also due for a refresh to Intel Corp.’s (INTC) 14 nm Broadwell processors, with the higher-end Retina models likely getting more powerful Core i-Series processors and the lower-end MacBook Air models getting the fanless Core M chips.

http://www.dailytech.com/Finished+Apple+Watch+Expected+to+be+Showcased+atSpring+Forward+Mar+9+Event/article37201c.htm#sthash.itJwyPyZ.dpuf

Cancer risk linked to DNA ‘wormholes’

February 25, 2015

Single-letter genetic variations within parts of the genome once dismissed as “junk DNA” can increase cancer risk through remote effects on far-off genes, new research by scientists at The Institute of Cancer Research, London, shows.

The researchers found that DNA sequences within “gene deserts” — so called because they are completely devoid of genes — can regulate gene activity elsewhere by forming DNA loops across relatively large distances.

The study helps solve a mystery about how genetic variations in parts of the genome that don’t appear to be doing very much can increase cancer risk.

Their study, published in Nature Communications, also has implications for the study of other complex genetic diseases.

The researchers developed a technique called Capture Hi-C to investigate long-range physical interactions between stretches of DNA – allowing them to look at how specific areas of chromosomes interact physically in more detail.

The researchers assessed 14 regions of DNA that contain single-letter variations previously linked to bowel cancer risk. They detected significant long-range interactions for all 14 regions, confirming their role in gene regulation.

“Our new technique shows that genetic variations are able to increase cancer risk through long-range looping interactions with cancer-causing genes elsewhere in the genome,” said study leader Professor Richard Houlston, Professor of Molecular and Population Genetics at The Institute of Cancer Research, London.

“It is sometimes described as analogous to a wormhole, where distortions in space and time could in theory bring together distant parts of the universe.”

The research was funded by the EU, Cancer Research UK, Leukaemia & Lymphoma Research, and The Institute of Cancer Research (ICR).

 

http://www.kurzweilai.net/embargo-1000-uk-time-thursday-19-february-2015-cancer-risk-linked-to-dna-wormholes

Graphene shown to neutralize cancer stem cells

February 26, 2015

University of Manchester scientists have used graphene oxide to target and neutralize cancer stem cells (CSCs) while not harming other cells.

This new development opens up the possibility of preventing or treating a broad range of cancers, using a non-toxic material.

In combination with existing treatments, this finding could eventually lead to tumor shrinkage as well as preventing the spread of cancer and its recurrence after treatment, according to the team of researchers led by Professor Michael Lisanti and Aravind Vijayaraghavan, writing in an open-access paper in the journal Oncotarget.

“Cancer stem cells possess the ability to give rise to many different tumor cell types,” said Lisanti, the Director of the Manchester Centre for Cellular Metabolism within the University’s Institute of Cancer Sciences. CSCs are responsible for the spread of cancer within the body — known as metastasis — which is responsible for 90% of cancer deaths.

“They also play a crucial role in the recurrence of tumors after treatment. This is because conventional radiation and chemotherapies only kill the ‘bulk’ cancer cells, but do not generally affect the CSCs.”

Targeted delivery

“Graphene oxide can readily enter or attach to the surface of cells, making it a candidate for targeted drug delivery,” said Vijayaraghavan. “In this work, surprisingly, it’s the graphene oxide itself that has been shown to be an effective anti-cancer drug.

“Cancer stem cells differentiate to form a small mass of cells known as a tumor-sphere. We saw that the graphene oxide flakes prevented CSCs from forming these, and instead forced them to differentiate into non-cancer stem-cells.

“Naturally, any new discovery such as this needs to undergo extensive study and trials before emerging as a therapeutic. We hope that these exciting results in laboratory cell cultures can translate into an equally effective real-life option for cancer therapy.”

May be effective for all types of cancer

The team prepared a variety of graphene oxide formulations for testing against six different cancer types — breast, pancreatic, lung, brain, ovarian, and prostate. The flakes inhibited tumor-sphere formation in all six types, suggesting that graphene oxide could be effective across all, or at least a large number of, different cancers by blocking processes that take place at the surface of the cells. The researchers suggest that, used in combination with conventional cancer treatments, this may deliver a better overall clinical outcome.

The researchers noted that the research results also show that graphene oxide is not toxic to healthy cells, which suggests that this treatment is likely to have fewer side-effects if used as an anti-cancer therapy.

Andre Geim and Konstantin Novoselov at the University of Manchester won the Nobel Prize in Physics in 2010 for “groundbreaking experiments regarding the two-dimensional material graphene.”

 

http://www.kurzweilai.net/graphene-shown-to-neutralize-cancer-stem-cells