http://www.kurzweilai.net/an-artificial-synapse-for-future-miniaturized-portable-brain-on-a-chip-devices

An artificial synapse for future miniaturized portable ‘brain-on-a-chip’ devices

MIT engineers plan a fingernail-size chip that could replace a supercomputer
January 22, 2018

Biological synapse structure (credit: Thomas Splettstoesser/CC)

MIT engineers have designed a new artificial synapse made from silicon germanium that can precisely control the strength of an electric current flowing across it.

In simulations, the researchers found that the chip and its synapses could be used to recognize samples of handwriting with 95 percent accuracy. The engineers say the new design, published today (Jan. 22) in the journal Nature Materials, is a major step toward building portable, low-power neuromorphic chips for use in pattern recognition and other machine-learning tasks.

Controlling the flow of ions: the challenge

Researchers in the emerging field of “neuromorphic computing” have attempted to design computer chips that work like the human brain. The idea is to apply a voltage across layers that would cause ions (electrically charged atoms) to move in a switching medium (synapse-like space) to create conductive filaments in a manner that’s similar to how the “weight” (connection strength) of a synapse changes.

More than 100 trillion synapses in a typical human brain mediate neuron signaling, strengthening some neural connections while pruning (weakening) others — a process that enables the brain to recognize patterns, remember facts, and carry out other learning tasks, all at lightning speeds.

Instead of carrying out computations based on binary, on/off signaling, like current digital chips, the elements of a “brain on a chip” would work in an analog fashion, exchanging a gradient of signals, or “weights” — much like neurons that activate in various ways (depending on the type and number of ions that flow across a synapse).

But it’s been difficult to control the flow of ions in existing synapse designs. These have multiple paths that make it hard to predict where ions will make it through, according to research team leader Jeehwan Kim, PhD, an assistant professor in the departments of Mechanical Engineering and Materials Science and Engineering and a principal investigator in MIT’s Research Laboratory of Electronics and Microsystems Technology Laboratories.

“Once you apply some voltage to represent some data with your artificial neuron, you have to erase and be able to write it again in the exact same way,” Kim says. “But in an amorphous solid, when you write again, the ions go in different directions because there are lots of defects. This stream is changing, and it’s hard to control. That’s the biggest problem — nonuniformity of the artificial synapse.”

Epitaxial random access memory (epiRAM)

(Left) Cross-sectional transmission electron microscope image of 60 nm silicon-germanium (SiGe) crystal grown on a silicon substrate (diagonal white lines represent candidate dislocations). Scale bar: 25 nm. (Right) Cross-sectional scanning electron microscope image of an epiRAM device with titanium (Ti)–gold (Au) and silver (Ag)–palladium (Pd) layers. Scale bar: 100 nm. (credit: Shinhyun Choi et al./Nature Materials)

So instead of using amorphous materials as an artificial synapse, Kim and his colleagues created a new “epitaxial random access memory” (epiRAM) design.

They started with a wafer of silicon. They then grew a similar pattern of silicon germanium — a material used commonly in transistors — on top of the silicon wafer. Silicon germanium’s lattice is slightly larger than that of silicon, and Kim found that together, the two perfectly mismatched materials could form a funnel-like dislocation, creating a single path through which ions can predictably flow.*

“This is the most uniform device we could achieve, which is the key to demonstrating artificial neural networks,” Kim says.

Testing the ability to recognize samples of handwriting

As a test, Kim and his team explored how the epiRAM device would perform if it were to carry out an actual learning task: recognizing samples of handwriting — which researchers consider to be a practical test for neuromorphic chips. Such chips would consist of artificial “neurons” connected to other “neurons” via filament-based artificial “synapses.”

Image-recognition simulation. (Left) A three-layer multilayer-perceptron neural network with black-and-white input signals for each layer, at the algorithm level. The inner product (summation) of the input neuron signal vector and the first synapse array vector is transferred, after activation and binarization, as the input vector of the second synapse array. (Right) Circuit block diagram of the hardware implementation, showing a synapse layer composed of epiRAM crossbar arrays and the peripheral circuit. (credit: Shinhyun Choi et al./Nature Materials)

They ran a computer simulation of an artificial neural network consisting of three sheets of neural layers connected via two layers of artificial synapses, based on measurements from their actual neuromorphic chip. They fed into their simulation tens of thousands of samples from the MNIST handwritten recognition dataset**, commonly used by neuromorphic designers.

They found that their neural network device recognized handwritten samples 95.1 percent of the time — close to the 97 percent accuracy of existing software algorithms running on large computers.
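For a concrete sense of what such a simulation involves, here is a minimal software sketch under my own assumptions (it is not the authors’ code): an ordinary three-layer perceptron trained on MNIST with scikit-learn, its two weight matrices standing in for the two epiRAM synapse arrays. An ideal software network lands near the 97 percent figure quoted above; the 95.1 percent result comes from folding the measured device characteristics into the authors’ own simulation.

```python
# Hedged sketch only: a plain software MLP on MNIST, not the device-aware simulation in the paper.
from sklearn.datasets import fetch_openml
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

# MNIST: 60,000 training and 10,000 test images of handwritten digits (28 x 28 pixels).
X, y = fetch_openml("mnist_784", version=1, return_X_y=True, as_frame=False)
X = X / 255.0  # scale pixel values to [0, 1]
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=10000, random_state=0)

# Three neuron layers (input, hidden, output) joined by two weight matrices,
# mirroring the two artificial-synapse crossbar arrays in the paper's figure.
mlp = MLPClassifier(hidden_layer_sizes=(256,), max_iter=30, random_state=0)
mlp.fit(X_train, y_train)
print("test accuracy:", mlp.score(X_test, y_test))  # typically around 0.97 for an ideal software network
```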

A chip to replace a supercomputer

The team is now in the process of fabricating a real working neuromorphic chip that can carry out handwriting-recognition tasks. Looking beyond handwriting, Kim says the team’s artificial synapse design will enable much smaller, portable neural network devices that can perform complex computations that are currently only possible with large supercomputers.

“Ultimately, we want a chip as big as a fingernail to replace one big supercomputer,” Kim says. “This opens a stepping stone to produce real artificial intelligence hardware.”

This research was supported in part by the National Science Foundation. Co-authors included researchers at Arizona State University.

* They applied voltage to each synapse and found that all synapses exhibited about the same current, or flow of ions, with about a 4 percent variation between synapses — a much more uniform performance compared with synapses made from amorphous material. They also tested a single synapse over multiple trials, applying the same voltage over 700 cycles, and found the synapse exhibited the same current, with just 1 percent variation from cycle to cycle.

** The MNIST (Modified National Institute of Standards and Technology database) is a large database of handwritten digits that is commonly used for training various image processing systems and for training and testing in the field of machine learning. It contains 60,000 training images and 10,000 testing images. 


Abstract of SiGe epitaxial memory for neuromorphic computing with reproducible high performance based on engineered dislocations

Although several types of architecture combining memory cells and transistors have been used to demonstrate artificial synaptic arrays, they usually present limited scalability and high power consumption. Transistor-free analog switching devices may overcome these limitations, yet the typical switching process they rely on—formation of filaments in an amorphous medium—is not easily controlled and hence hampers the spatial and temporal reproducibility of the performance. Here, we demonstrate analog resistive switching devices that possess desired characteristics for neuromorphic computing networks with minimal performance variations using a single-crystalline SiGe layer epitaxially grown on Si as a switching medium. Such epitaxial random access memories utilize threading dislocations in SiGe to confine metal filaments in a defined, one-dimensional channel. This confinement results in drastically enhanced switching uniformity and long retention/high endurance with a high analog on/off ratio. Simulations using the MNIST handwritten recognition data set prove that epitaxial random access memories can operate with an online learning accuracy of 95.1%.

https://9to5mac.com/2018/01/22/feature-request-activity-sharing-improvements/

Feature Request: How Activity Sharing for Apple Watch could improve

Apple Watch Activity Sharing is a great way to share your fitness progress with friends and stay motivated. The feature lets you easily share completed workouts, achievements, and more with your Apple Watch friends.

Activity Sharing has its limitations, however, and I’m hoping watchOS 5 and iOS 12 can address them with a few changes.

The first limitation is exactly how many people you’re able to add to Activity Sharing. I recently invited 9to5Mac Happy Hour listeners to connect with me through Activity Sharing as motivation, but I quickly discovered the feature has a hard rule of 40 connections.

That’s up from the original limit of 25 accounts, but neither cap is very useful for larger groups.

My guess is Apple caps the number of connections to prevent the Activity Sharing from becoming overwhelming for the user, but there are already controls to help manage alerts today. You can mute alerts from Activity Sharing on a person-by-person basis from the Activity app on the iPhone.

With muting already available, I wouldn’t be against connecting to a few hundred accounts to build a small, competitive community of Apple Watch users. The real shame is that I’m not able to do Activity Sharing with users who have zero connections on their end because I’ve hit the 40-user limit myself.

Limit aside, Activity Sharing could be smarter in the future. Today, alerts are grouped together intelligently and periodically so you aren’t spammed, but you can’t set which types of alerts you want to receive.

I’m personally motivated by completed workout alerts more than anything else and would opt out of alerts for closed activity rings and achievements if possible.

Activity Sharing could also support groups, which don’t exist today. Ideally, I’d like to set up three groups: friends, family, and podcast. I can see others using a grouping feature for various teams and clubs as well.

The Activity app also includes a feature that lets you send a single group message to everyone you use Activity Sharing with, but it could use some polishing. I could really see this being beneficial for groups, so you could message everyone or just selected users.

Finally, I’d really love to see Activity Sharing mature into more of an open platform for competitions.

Apple currently relies on a third-party app called Challenges for its internal activity challenges. I haven’t used the app myself yet — it requires a paid subscription to set up a challenge — but at a glance it looks like exactly what I’d love to see from Apple directly.

The app is based around the three Activity rings and includes a concept of teams, specific target goals, and ranking. Activity Sharing is by design a very lightweight version of Challenges, but I could see it evolving over time into a more polished version.

Fingers crossed that Apple has plans for Activity Sharing in the future. The version that debuted with watchOS 3 and iOS 10 was a good start, but it’s a useful feature that could benefit dramatically from these kinds of improvements in future updates.



http://vancouversun.com/business/local-business/vancouver-restaurant-moves-toward-cash-free-transactions-as-diners-ditch-paper

Vancouver restaurant moves toward cash-free transactions as diners ditch paper

Cold, hard cash won’t get you delicious, hot noodles at this Vancouver restaurant. While cash-only restaurants are a common sight around town, one restaurateur is going cash-free instead.

Two Marutama Ramen locations have already switched to accepting credit, debit and Apple Pay transactions only, while the chain’s original West End spot will follow suit beginning Feb. 1.

“I don’t use cash, I use credit card or Apple Pay, and 95 per cent of our customers use cards or smartphone (payment),” said Tatsushi Koizumi, co-owner and general manager of the Japanese ramen chain. “So I thought it’s about time to switch to the next generation.”

Weekday noon-hour crowds of office workers hoping to get in and out quickly aren’t unexpected at restaurants. It makes sense then to turn to cash-free transactions as a way to speed up the lunch-hour turnover and free up staff — especially at eateries like Marutama Ramen, where ingredients also require more time to prep.

Koizumi spoke to Postmedia News recently at the West End location of Marutama Ramen, where kitchen staff were preparing for a weekday opening. The noodles and tori paitan (chicken broth) are made fresh daily in-house.

“We take more labour and time for the creation (of ingredients) and then we found that we use so much time to go to the bank and make change or count it every night at the end of the day,” said Koizumi. “So I wanted to cut that labour and prioritize creation.”

Not to mention, it does away with the possibility of accounting errors and security concerns around handling cash.

The chain’s Main Street location adopted cash-free transactions as soon as it opened last fall. When that was well-received among customers, Koizumi implemented the same policy at their Robson Street location, and will soon introduce it at their Bidwell restaurant as well. There is signage at each restaurant alerting customers to the new transaction policy and if anyone isn’t happy with it, Koizumi said it’s “customer’s choice” if they’d rather head elsewhere.

Koizumi said he has seen many cash-only restaurants around town, but hasn’t yet heard of other cash-free restaurants, which suggests Marutama Ramen is among the early adopters in Vancouver to use a cash-free approach.

Tips can be added on when customers pay by card or tap, and are still divided among staff the way traditional cash tips would be. If a customer’s card is declined, but they’re able to pay by cash, Koizumi said the restaurant will make exceptions and take cash, but that no change will be given.

And if there’s a tech glitch or if payment systems go down? If that ever happens, Koizumi said meals would be on the house — but he’s skeptical it will come to that.

“Internet is strong and stable enough for me to decide on a cash-free system,” said Koizumi. “I haven’t experienced an internet shutdown in four years.”

Ian Tostenson, president of the B.C. Restaurant and Food Services Association, said he’s not heard of other restaurants going cash-free, but said “it makes complete sense” considering how diners’ habits have changed.

“I can’t even remember the last time I was with anybody or anywhere that someone pulled out cash to pay for anything in a restaurant,” he said. “It’s not a deal-breaker in my opinion and I think the industry would be quite happy to get rid of cash.”

Tostenson noted his association has partnered with a company that has developed an app that allows diners to pay by phone or app right at their table instead of waiting for a bill during busy periods.

Vancouver, B.C., January 17, 2018: Tatsushi Koizumi is the general manager of the Marutama Ramen chain of three restaurants in Vancouver. By next month all three of the restaurants will be cash-free, accepting only debit and credit cards for payment. (Jason Payne/PNG)

In Seattle, a high-end Starbucks has been experimenting with cash-free transactions since last week. The popular coffee chain has already had a smash success with its Starbucks app, which allows customers to place and pay for their coffee orders ahead of time and then simply stop in at the counter to pick up their drink.

So far, the Second Avenue and University location is the only Starbucks in Seattle to test the no-cash policy.

Also in Seattle, a new Amazon Go grocery store is set to open this week. The store has no cashiers, no line-ups and no cash registers but it does have Skytrain-style gates that only allow customers to enter if they have the Amazon app on their smartphones.

Customers shop by simply taking what they need off a shelf and placing it into their shopping bag; cameras overhead are able to scan each item from afar and add that item to the customer’s virtual shopping cart. When they leave the store through the gates, the purchases are automatically charged to the customer’s Amazon account within minutes.

According to a 2017 report from Payments Canada, the number of cash transactions has trended downward since 2011 while credit-card and debit payments have gone up.

Cash transactions accounted for 31.2 per cent of all 2016 transactions, down 10.6 percentage points from 2011. Meanwhile, debit accounted for 25.6 per cent of all transactions in 2016, up 5.2 percentage points from 2011, and credit cards represented 22.5 per cent of 2016 transactions, up 5.6 percentage points from 2011.

Contactless or tap transactions — such as tap-enabled cards and Apple Pay — are also on the rise. In 2016, tap payments increased by 81 per cent over the previous year, with a reported 2,089,000 tap transactions recorded across Canada. The report notes that chip-and-PIN transactions accounted for 21 per cent of all point-of-sale card payments in 2016, more than double the seven per cent recorded in 2014.

The Bank of Canada notes there’s no legal requirement for a business to accept cash and that businesses are free to develop their own policies. The only requirement is that both parties must agree on the payment method.

http://www.cbc.ca/news/technology/amazon-go-grocery-store-1.4497862

Amazon’s 1st high-tech grocery store opens to the public

Shoppers scan their smartphone at a turnstile, pick out the items they want and leave

Thomson Reuters Posted: Jan 22, 2018 8:18 AM ET Last Updated: Jan 22, 2018 8:29 AM ET


In this Thursday, April 27, 2017, file photo, people walk past an Amazon Go store in Seattle. More than a year after it introduced the concept, Amazon is opening its artificial intelligence-powered Amazon Go store in downtown Seattle on Monday, Jan. 22, 2018. (Elaine Thompson/Associated Press)

Amazon.com Inc will open its checkout-free grocery store to the public on Monday after more than a year of testing, the company said, moving forward on an experiment that could dramatically alter brick-and-mortar retail.

The Seattle store, known as Amazon Go, relies on cameras and sensors to track what shoppers remove from the shelves, and what they put back. Cash registers and checkout lines become superfluous — customers are billed after leaving the store using credit cards on file.

For grocers, the store’s opening heralds another potential disruption at the hands of the world’s largest online retailer, which bought high-end supermarket chain Whole Foods Market last year for $13.7 billion US. Long lines can deter shoppers, so a company that figures out how to eradicate wait times will have an advantage.


Amazon did not discuss if or when it will add more Go locations, and reiterated it has no plans to add the technology to the larger and more complex Whole Foods stores.

The convenience-style store opened to Amazon employees on Dec. 5, 2016 in a test phase. At the time, Amazon said it expected members of the public could begin using the store in early 2017.

But there have been challenges, according to a person familiar with the matter. These included correctly identifying shoppers with similar body types, the person said. When children were brought into the store during the trial, they caused havoc by moving items to incorrect places, the person added.


A shopper is seen using his phone in the line-free, Amazon Go store in Seattle, Washington, U.S., January 18, 2018. (Jeffrey Dastin/Reuters)

Gianna Puerini, vice president of Amazon Go, said in an interview that the store worked very well throughout the test phase, thanks to four years of prior legwork.

“This technology didn’t exist,” Puerini said, walking through the Seattle store. “It was really advancing the state of the art of computer vision and machine learning.”

“If you look at these products, you can see they’re super similar,” she said of two near-identical Starbucks drinks next to each other on a shelf. One had light cream and the other had regular, and Amazon’s technology learned to tell them apart.

How it works

The 1,800-square-foot (167-square-metre) store is located in an Amazon office building. To start shopping, customers must scan an Amazon Go smartphone app and pass through a gated turnstile.

Ready-to-eat lunch items greet shoppers when they enter.

Deeper into the store, shoppers can find a small selection of grocery items, including meats and meal kits. An Amazon employee checks IDs in the store’s wine and beer section.

A customer walks out of the Amazon Go store without needing to pay at a cash register. (Jeffrey Dastin/Reuters)

Sleek black cameras monitoring from above and weight sensors in the shelves help Amazon determine exactly what people take.

If someone passes back through the gates with an item, his or her associated account is charged. If a shopper puts an item back on the shelf, Amazon removes it from his or her virtual cart.
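As a rough illustration of that bookkeeping, here is a hypothetical sketch (not Amazon’s implementation; the class and event names are invented): shelf-sensor events add items to or remove them from a per-shopper virtual cart, and the associated account is charged for whatever remains when the shopper passes back through the gates.

```python
from collections import Counter

class VirtualCart:
    """Hypothetical per-shopper cart, updated by camera and weight-sensor events."""

    def __init__(self, account_id: str):
        self.account_id = account_id
        self.items = Counter()

    def item_taken(self, sku: str):
        """A shopper lifts an item off the shelf."""
        self.items[sku] += 1

    def item_returned(self, sku: str):
        """A shopper puts an item back; it leaves the virtual cart."""
        if self.items[sku] > 0:
            self.items[sku] -= 1

    def exit_gate(self, prices: dict) -> float:
        """Charge the account for everything still in the cart on the way out."""
        total = sum(prices[sku] * qty for sku, qty in self.items.items())
        print(f"charging {self.account_id}: ${total:.2f}")
        return total

cart = VirtualCart("amazon:alice")
cart.item_taken("salad-bowl")
cart.item_taken("cold-brew")
cart.item_returned("salad-bowl")                          # changed her mind
cart.exit_gate({"salad-bowl": 8.99, "cold-brew": 4.49})   # charges $4.49
```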

Much of the store will feel familiar to shoppers, aside from the check-out process. Amazon, famous for dynamic pricing online, has printed price tags just as traditional brick-and-mortar stores do.

Amazon first conceived of the store years ago and applied for a patent on it in early 2015. At the time, many conventional industry players pooh-poohed the idea, saying it would never work and would lead to rampant errors, such as failing to register an item or charging a customer for something they didn’t actually intend to buy.

With a file from CBC News

https://www.ctvnews.ca/sci-tech/new-research-suggests-winds-of-change-blow-even-for-black-holes-1.3770536

New research suggests winds of change blow even for black holes

Two black holes are shown colliding in this computer-generated simulation from the Laser Interferometer Gravitational-Wave Observatory. (LIGO)

New Canadian-led research has peered into the strange world of black holes to discover they’re girded by electromagnetic winds that influence not only how the super-dense interstellar bodies gobble up anything that gets too close, but also how they affect vast areas of space around them.

The insights, published Monday in the journal Science, could ultimately help explain the formation of our own sun, our planetary neighbours and even our own galaxy.

“These winds are telling us two things,” said Greg Sivakoff, a physicist at the University of Alberta and a co-author of the paper.

“They’re changing our perception of how rapidly black holes might grow. The other is it’s going to be affecting the local environment.”

Black holes are the remnants of massive stars that have collapsed in on themselves to an unimaginably small point. That much concentrated mass creates attraction so intense that within a certain distance not even light can escape it — a distance known as the event horizon.

Outside that horizon, black holes are surrounded by giant spinning Frisbees called accretion discs. Those discs contain magnetic instabilities that force them to transfer rotational energy from the outside of the disc to the inside.

As particles fall toward the black hole, they gain energy — just as a figure skater’s spin speeds up as they pull their arms toward them. That energy causes the disc to heat up and emit X-rays.
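In textbook terms (a standard relation, not something specific to this study), the skater analogy is conservation of angular momentum, and the X-ray glow comes from the gravitational energy released as gas spirals inward:

```latex
% Conservation of angular momentum: shrinking the orbital radius speeds up the spin.
L = m v r = \text{const} \quad \Rightarrow \quad v \propto \frac{1}{r}
% Gravitational energy released by gas of mass m falling to radius r around a
% black hole of mass M heats the disc until it shines in X-rays:
\Delta E \sim \frac{G M m}{r}
```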

“We use the X-rays to trace what’s going on,” Sivakoff said.

“We were watching how rapidly they decayed from their peak emission back down to when the disc is more stable. We were surprised to see that the discs were evolving more rapidly than we thought they should.

“The X-ray emissions were dropping too quickly.”

Sivakoff and his colleagues concluded that mass and energy were coming off the disc in some kind of wind.

He said the winds seem to be ongoing throughout the life of a black hole, which suggests that they “feed” much more slowly than we previously thought. Some models indicate that up to 80 per cent of the mass originally thought to be plunging into the black hole is instead blowing off in the wind.
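As a back-of-the-envelope relation (my notation, not the paper’s): if a fraction f_wind of the inflowing material is carried away by the wind, only the remainder reaches the black hole, so with the quoted figure of up to 80 per cent:

```latex
\dot{M}_{\mathrm{BH}} = \left(1 - f_{\mathrm{wind}}\right)\,\dot{M}_{\mathrm{inflow}}
\approx 0.2\,\dot{M}_{\mathrm{inflow}}
\qquad \text{for } f_{\mathrm{wind}} \approx 0.8
```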

That means black holes are contributing to and changing the interstellar environment, a hot topic among astrophysicists.

“Even though we had this idea of black holes as always taking things away, in this case the black hole may be actually affecting things around it.”

That has implications everywhere.

Accretion discs are common in space and are a feature in the growth of all stars and planets. Studying how black-hole discs work could teach us more about the formation of all stars and planets.

And it gets bigger. At the centre of most galaxies lies a massive black hole. The one in our galaxy is about four million times the mass of our sun.

Scientists think the formation of galaxies is linked to the formation of these giant black holes. Sivakoff’s conclusion that black holes do affect their environment opens up new lines of inquiry into how galaxies are made.

“It may be that in some small way, the accretion disc around the black hole at the centre of our galaxy is why we’re here.”

https://singularityhub.com/2018/01/21/machines-teaching-each-other-could-be-the-biggest-exponential-trend-in-ai/

Machines Teaching Each Other Could Be the Biggest Exponential Trend in AI

During an October 2015 press conference announcing the autopilot feature of the Tesla Model S, which allowed the car to drive semi-autonomously, Tesla CEO Elon Musk said each driver would become an “expert trainer” for every Model S. Each car could improve its own autonomous features by learning from its driver, but more significantly, when one Tesla learned from its own driver—that knowledge could then be shared with every other Tesla vehicle.

As Fred Lambert at Electrek reported shortly after, Model S owners noticed how quickly the car’s driverless features were improving. In one example, Teslas were taking incorrect early exits along highways, forcing their owners to manually steer the car along the correct route. After just a few weeks, owners noted the cars were no longer taking premature exits.

“I find it remarkable that it is improving this rapidly,” said one Tesla owner.

Intelligent systems, like those powered by the latest round of machine learning software, aren’t just getting smarter: they’re getting smarter faster. Understanding the rate at which these systems develop can be a particularly challenging part of navigating technological change.

Ray Kurzweil has written extensively on the gaps in human understanding between what he calls the “intuitive linear” view of technological change and the “exponential” rate of change now taking place. Almost two decades after writing the influential essay on what he calls “The Law of Accelerating Returns”—a theory of evolutionary change concerned with the speed at which systems improve over time—connected devices are now sharing knowledge between themselves, escalating the speed at which they improve.

[Learn more about thinking exponentially and the Law of Accelerating Returns.]

“I think that this is perhaps the biggest exponential trend in AI,” said Hod Lipson, professor of mechanical engineering and data science at Columbia University, in a recent interview.

“All of the exponential technology trends have different ‘exponents,’” Lipson added. “But this one is potentially the biggest.”

According to Lipson, what we might call “machine teaching”—when devices communicate gained knowledge to one another—is a radical step up in the speed at which these systems improve.

“Sometimes it is cooperative, for example when one machine learns from another like a hive mind. But sometimes it is adversarial, like in an arms race between two systems playing chess against each other,” he said.

Lipson believes this way of developing AI is a big deal, in part, because it can bypass the need for training data.

“Data is the fuel of machine learning, but even for machines, some data is hard to get—it may be risky, slow, rare, or expensive. In those cases, machines can share experiences or create synthetic experiences for each other to augment or replace data. It turns out that this is not a minor effect, it actually is self-amplifying, and therefore exponential.”

Lipson sees the recent breakthrough from Google’s DeepMind, a project called AlphaGo Zero, as a stunning example of an AI learning without training data. Many are familiar with AlphaGo, the machine learning AI which became the world’s best Go player after studying a massive training data set comprising millions of human Go moves. AlphaGo Zero, however, was able to beat even that Go-playing AI, simply by learning the rules of the game and playing against itself — no training data necessary. Then, just to show off, it beat the world’s best chess-playing software after starting from scratch and training for only eight hours.
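To make the “no training data” idea concrete, here is a toy sketch in the same spirit (emphatically not AlphaGo Zero’s actual algorithm, which combines deep networks with Monte Carlo tree search): a program that learns tic-tac-toe position values purely by playing against itself, starting from nothing but the rules.

```python
# Toy illustration only: tabular value learning for tic-tac-toe through pure
# self-play, with no external training data of any kind.
import random
from collections import defaultdict

WIN_LINES = [(0, 1, 2), (3, 4, 5), (6, 7, 8),
             (0, 3, 6), (1, 4, 7), (2, 5, 8),
             (0, 4, 8), (2, 4, 6)]

def winner(board):
    for a, b, c in WIN_LINES:
        if board[a] != " " and board[a] == board[b] == board[c]:
            return board[a]
    return None

def legal_moves(board):
    return [i for i, square in enumerate(board) if square == " "]

values = defaultdict(float)   # board state -> estimated value for player "X"
ALPHA, EPSILON = 0.1, 0.2     # learning rate and exploration rate

def choose_move(board, player):
    moves = legal_moves(board)
    if random.random() < EPSILON:
        return random.choice(moves)           # explore
    sign = 1 if player == "X" else -1         # "O" prefers states that are bad for "X"
    return max(moves, key=lambda m: sign * values["".join(board[:m] + [player] + board[m + 1:])])

def self_play_episode():
    board, player, visited = [" "] * 9, "X", []
    while True:
        move = choose_move(board, player)
        board[move] = player
        visited.append("".join(board))
        win = winner(board)
        if win or not legal_moves(board):
            reward = 1.0 if win == "X" else (-1.0 if win == "O" else 0.0)
            for state in visited:             # back up the final outcome
                values[state] += ALPHA * (reward - values[state])
            return
        player = "O" if player == "X" else "X"

for _ in range(20000):
    self_play_episode()
print("learned value estimates for", len(values), "board states")
```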

Now imagine thousands or more AlphaGo Zeroes instantaneously sharing their gained knowledge.

This isn’t just about games, though. Already, we’re seeing how it will have a major impact on the speed at which businesses can improve the performance of their devices.

One example is GE’s new industrial digital twin technology—a software simulation of a machine that models what is happening with the equipment. Think of it as a machine with its own self-image—which it can also share with technicians.

A steam turbine with a digital twin, for instance, can measure steam temperatures, rotor speeds, cold starts, and other data to predict breakdowns and warn technicians to prevent expensive repairs. The digital twins make these predictions by studying their own performance, but they also rely on models every other steam turbine has developed.

As machines begin to learn from their environments in new and powerful ways, their development is accelerated by communicating what they learn with each other. The collective intelligence of every GE turbine, spread across the planet, can accelerate each individual machine’s predictive ability. Where it may take one driverless car significant time to learn to navigate a particular city, one hundred driverless cars navigating that same city together, all sharing what they learn, can improve their algorithms in far less time.
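Here is a minimal numerical sketch of why sharing helps, under simple assumptions of my own (generic parameter averaging, not Tesla’s or GE’s actual pipeline): each device fits a small model on its own limited, noisy data, and the fleet then averages the fitted parameters, cancelling much of the individual noise.

```python
import numpy as np

rng = np.random.default_rng(0)
true_w = np.array([2.0, -1.0])   # the relationship every device is trying to learn

def local_fit(n_samples=20):
    """One device fits a least-squares model to its own noisy observations."""
    X = rng.normal(size=(n_samples, 2))
    y = X @ true_w + rng.normal(scale=0.5, size=n_samples)
    w, *_ = np.linalg.lstsq(X, y, rcond=None)
    return w

single_device = local_fit()                                         # one device learning alone
fleet_average = np.mean([local_fit() for _ in range(100)], axis=0)  # 100 devices sharing what they learn

print("single-device error:", np.linalg.norm(single_device - true_w))
print("fleet-averaged error:", np.linalg.norm(fleet_average - true_w))
```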

As other AI-powered devices begin to leverage this shared knowledge transfer, we could see an even faster pace of development. So if you think things are developing quickly today, remember we’re only just getting started.

https://www.dailystar.co.uk/tech/gaming/675390/Samsung-may-have-solved-iPhone-X-s-BIGGEST-problem

Samsung may have solved one issue with the iPhone X

A NEW patent shows that Samsung may be working on a design solution to Apple’s iPhone X screen notch.


FIXED: Samsung may have solved one of the iPhone X’s niggles

Samsung may have found a solution to the design issue that’s been bugging some Apple fans since the launch of the iPhone X.

This next generation device turned heads when it was announced because it managed to feature a nearly bezel-free design, but this came at the expense of a ‘notch’ that breaks up the top of the screen.

It’s not a huge problem, but some say that the ‘notch’ consistently interrupts the image and detracts from the clarity of the screen.

Thanks to new patents from Samsung (originally spotted by LetsGoDigital), the ‘notch’ problem could be addressed by using a flexible OLED screen with a series of holes allowing the camera, speakers and various other sensors to operate.

The images further down the page show off how the full screen would work with this new screen concept.

According to the patent, the phone would even allow text to wrap naturally around the holes – the phone would automatically detect the placement of text and break it up as necessary, so no important information is lost.

Samsung seems to be aware that users are keen to tailor the phone’s display to suit their own needs, and to that end there seem to be plans to give users the choice of displaying full-screen around the dots, or shrinking the display to omit the cutouts altogether.

The patents also suggest Samsung is still keen to keep its home button – it hasn’t done away with that like the iPhone X has. It is suggested that the button will perhaps be on-screen, though.

New chipset leaks suggest Samsung is working on features to rival Apple’s Animoji and Face ID systems, too, though the company seems to be able to implement these in its design with a smaller notch than the iPhone X.

 


Samsung’s newest patent, spotted by LetsGoDigital

It’s been a big month for Samsung, with a huge array of devices shown off at CES earlier this month, though predominantly from every division except smartphones.

Despite this, there was still a slim but very important piece of news regarding its Galaxy S smartphone range and when it plans to unveil the new phone.

According to DJ Koh, president of the technology company’s mobile division, the Samsung Galaxy S9 and Galaxy S9+ will launch at the Mobile World Congress (MWC) tradeshow running from February 26th to March 1st 2018.

This new flagship handset for the company is widely believed to be an iterative update to the Galaxy S8 and Galaxy S8+ rather than a wholesale redesign.

That’s certainly not a bad thing though, because both the Galaxy S8 and Galaxy S8+ received numerous awards in the past year, despite the rocky start caused by the battery problems that struck the Galaxy Note 7.

As mentioned, there have been plenty of leaks which have – bit by bit – given mobile fans some small slices of info about how the phone is coming together.

Each rumour on its own won’t help all that much, but piece them all together and you’ll start to get a very clear indication of where Samsung is headed with their new flagship smartphone.

So to help, we’ve rounded up everything we think we know so far about the Samsung Galaxy S9.


There have been numerous reports online which all suggest that the S9 will have a much larger battery capacity, especially compared to its predecessor – the Samsung Galaxy S8.

According to a source in China, Samsung is looking to increase the battery capacity from 3,000mAh in the Galaxy S8 up to 3,200mAh in the Galaxy S9.

This would match up with past rumours from SamMobile, which previously suggested the Galaxy S9 will be thicker than its predecessor.

http://www.ibtimes.co.uk/this-magical-e-skin-lets-you-move-virtual-objects-wave-your-hand-1656024

This magical ‘e-skin’ lets you move virtual objects with a wave of your hand

The tech uses a magnetic field to let you type on a virtual keyboard or reduce the brightness of a virtual bulb.

Using ‘e-skin’ to control virtual light bulbs without actually touching anything (credit: D. Makarov)

Scientists have developed a crucial link between physical and virtual reality (VR) – a magical ‘e-skin’ that lets you manipulate objects in the virtual world without touching or seeing them.

At first glance, the skin – which is thinner than a strand of human hair – looks like a simple tattoo, but there is a lot going on inside it, almost like a typical high-end gadget from a sci-fi movie.

The tiny device carries magnetic sensors that interact with a permanent magnet to detect physical motion, and the sensors transmit that information to connected software.

Working with every angle of motion, the software manipulates an object in virtual reality. This means that adjusting a light dimmer or typing on a virtual keyboard can be done with just a single wave of your hand.

“Our electronic skin traces the movement of a hand, for example, by changing its position with respect to the external magnetic field of a permanent magnet,” explains Gilbert Cañón Bermúdez, the lead author of the study. “This not only means that we can digitise its rotations and translate them to the virtual world but also even influence objects there.”

The technology is still at a nascent stage but the researchers have already showcased its potential in a couple of videos. The first clip shows how the device can be used to reduce the brightness of a virtual bulb, while the second shows its ability to serve as a wearable keyboard for dialling numbers on a virtual keyboard.

“By coding the angles between 0 and 180 degrees so that they corresponded to a typical hand movement when adjusting a lamp, we created a dimmer – and controlled it just with a hand movement over the permanent magnet,” Denys Makarov, the study’s co-author, explained while describing the first experiment.
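As a hypothetical sketch of that mapping (the function and the linear scaling are my assumptions, not the researchers’ code), the 0 to 180 degree hand angle reported by the e-skin translates directly into a dimmer level:

```python
def angle_to_brightness(angle_deg: float) -> float:
    """Map a 0-180 degree hand angle over the magnet to a 0.0-1.0 dimmer level."""
    clamped = max(0.0, min(180.0, angle_deg))   # ignore readings outside the coded range
    return clamped / 180.0

# Example: a hand sweep from fully "off" to fully "on".
for angle in (0, 45, 90, 135, 180):
    print(f"{angle:3d} deg -> brightness {angle_to_brightness(angle):.2f}")
```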

As of now, there are VR devices that allow a person to manipulate objects in virtual reality, but these technologies are bulky and depend on a direct line of sight between the object and the person controlling it. With the novel e-skin, though, all of that changes.

The extraordinary tech is still being worked on, but the researchers envision a variety of ways in which it could be used – one of which would be remote access to a control button or panel in a room that cannot be entered because of some kind of hazard. The group also hopes to replace the magnetic field of the permanent magnet with the Earth’s geomagnetic field sometime in the future.

The work on this ‘skin’ has been detailed in a study published in the journal Science Advances.