The Singularity Race

The Singularity can be the ultimate means to power. Whoever gets there first gets the world, maybe even the Universe, increasing their lead over the others exponentially. This attracts different entities into a “winner takes all” race – the Singularity race.

Civilizational progress, if viewed for analysis purposes as having levels, builds each of its levels upon the previous one. In this it resembles a von Neumann machine: both are von Neumann-type processes. All such processes can be, and usually are, kept from being exponential for too long by some resource – either through its exhaustion, or by reaching the limit of its supply per unit of time.

Progress is no exception to this rule. In a world where competition is strong and the level of progress is a key asset, the constraints that will be met first are the speed and capacity of the engines that drive progress – human brains. All other currently known constraints are much more distant, and will probably be easy to circumvent with the means available at the level of progress reached by the time they are met.

The brainpower constraints can be pushed somewhat further by better organization of the scientific process, by social and economic adjustments, and so on. However, that appears insufficient to achieve the Singularity within a reasonable time and investment – reaching it requires a sufficiently developed artificial intelligence (AI). The key to winning the Singularity race is therefore being the first to develop an AI able to increase its own abilities faster than human brains can. So the Singularity race is effectively between the entities that have the best base for developing AI and invest the most in it.

The contenders

Currently (end-2015) the most serious contenders are some big American IT companies and some intelligence agencies. There are also several types of minor contenders.

The companies

These currently include Google, Apple, Microsoft, Yahoo, Facebook and IBM. Other company-type players might participate too silently to be noticed, or might join the race at a later stage.

Google

Google opened the Singularity race and currently appears to lead it. They have substantial finances, powerful IT infrastructure, a lot of talent, the best Singularity visionaries (among both personnel and owners) and a strong bond with the early adopters. They develop many different AI-containing products, which together cover a wide range of AI aspects. They have amassed probably the biggest cache of information in the IT industry, including quality knowledge and personal data – and this is a key super-AI asset. If marketing mistakes like Google Glass are avoided, they have a good chance of winning the race.

What might hinder Google is neglect of human values. The quiet abandonment of the principle “Don’t be evil” hasn’t gone unnoticed by its supporters, and they are the ones who bring Google the best talent. (And important market leverage, especially in emerging markets – on which Google’s future income will depend.) Compared to the power of a big company, the difference made by grassroots support usually appears negligible. That assessment is often misleading, however, and in a close race even a small difference might prove crucial.

Apple

Despite the late start, they appear to be catching up quickly. They have far more financial resources than any other big IT company, maybe more than all of them together, and they also hold a huge cache of personal data and well-chosen quality knowledge. Given that their strategy of simplifying the user’s interaction with gadgets naturally puts them on the AI path, Apple might become the leader within a decade. They have also lately shown a taste for acquiring companies that develop key AI technologies, including ones outside their current areas of interest, which suggests increasing Singularity awareness and a tendency to develop in its direction.

On the negative side, Apple appears to mostly lack Singularity visionaries among both personnel and owners. Their ownership is also more distributed and diverse than Google’s, so they might find it hard to quickly redirect a big part of their finances into this. Additionally, their strategy binds them to the fashion followers – those bound to the early adopters are potentially better positioned in the race, and the difference will grow with the speed of progress.

Microsoft

They know well the value of big data on every user and are rumored to build specialized tools in that area for intelligence agencies and the biggest corporations. That puts them on the road to creating powerful intelligence aids, which give a key advantage to assisted human minds – and they will not be averse to using these to their own advantage. The default setup of Windows 10, and very probably of all further Windows versions, keeps much confidential user data on Microsoft servers – this gives them a data hold comparable to that of Google and Apple. The public statements of some key people in the company create the impression that they have realized the importance of powerful AI, and probably of the Singularity. They also have probably the biggest software engineering capacity of any entity. Finally, if anyone in the IT world knows how to stay close to the big players and use their influence to its own advantage, it is Microsoft.

The sale of Bing might hamper their ability to keep a local copy of the Net’s wealth of information. (Rumor has it, though, that they kept the ability to mirror a good selection of the quality knowledge there, and maybe access to the Bing storage.) Their corporate culture is oriented toward chasing profits and smaller-scale power plays – this might keep them away from the massive Singularity-targeted investment in AI, and from the internal cooperation needed to take on such a huge task. Some early adopters and some IT personnel also tend to mistrust them, and this might silently hinder their efforts.

Yahoo

Currently they lag behind, but they have good information search and access infrastructure and technologies. Add an ambitious boss with experience from another Singularity contender, and some good financing might position them well in the race. If they move toward sharing know-how and effort with the NSA and/or the Pentagon, they might get that financing and some key technologies.

On the minus side, currently (end-2015) their chances are mostly hypothetical. The publicly available information does not show AI development on a scale anywhere near that of the other company-type contenders. They are also under strong pressure from their owners to improve short- and mid-term profits, which can limit investment in long-term projects like the AI / Singularity race. Even if they manage to return to profit, they might be too late for the race.

Facebook

When it comes to progress, humankind can be viewed as a single intellect, and the links between people are no less important than those between the neurons in a brain. Facebook has more information about inter-human links than anyone else, and is thus best positioned to solve some key tasks in the development of a powerful AI. Their connections, influence and money can buy them most, if not all, of the remaining key pieces of the puzzle. So they may appear to lag behind the race leaders for a long time, yet suddenly spurt ahead at the stage of powerful AI development. They also show some marks of silent involvement in mass-scale information collection – for example, their project to supply free Internet to large parts of the world, which will pass through channels they control.

Few things can hold Facebook back. The apparent lack of Singularity / AI vision, even if real, will not last long, and no one can replace them on the social-network throne. Still, if they underestimate the importance of strong AI development badly enough, the race might pull too far ahead of them to catch up.

IBM

Their research on prospective hardware gives them a unique advantage over the mostly software-oriented contenders. They also have good software engineers and a leading role in the big-server market, which makes them an important wild card to watch. Powerful AI needs powerful hardware to run on, and they are the company for it. With long experience in wielding IP rights, they are also more than able to hamper the competition. Finally, does the word “Watson” ring a bell?

Lacking some key AI prerequisites themselves (e.g., a big data cache) appears to relegate them to the role of kingmaker. However, kingmakers can often put themselves on the throne – Microsoft taught this lesson to none other than IBM. They have also invested in some key AI technologies for two decades, and have the best achievements there. So it would be prudent to assume that IBM is one of the most serious contenders.

The security agencies

These currently include the security agencies of China and Russia, the NSA, the Pentagon, and the agencies of Great Britain and probably Germany and France, acting separately or in a joint initiative. Other agencies might also be in the game, or might join it at a later stage.

China

China is smart enough to realize the importance of powerful AI, and can concentrate on its development money and human numbers that dwarf anyone else’s – maybe everyone else’s together. Its technological prowess in supercomputers is indisputable. Add to that its huge military-based spying network, and it might be the most powerful contender of all.

Its hurdles in this race are the typical ones for an authoritarian / totalitarian country. It is not hospitable to disruptive ventures and personal initiative, and these are key parts of technological advancement. (Military goals, however, are largely an exception.) It is also close to exhausting the possibilities for extensive growth and will have to switch to intensive growth instead, which is often a problem for authoritarian countries. (Its “communist capitalism” semi-feudal system might solve this to a degree, but it is still far from the productivity a truly free system can unfold over time.) Failing to solve this problem satisfactorily might limit the finances and the human potential China can invest in the Singularity race. (On the other hand, a period of economic struggle might turn it toward the Singularity as a solution to both its economic and its political problems.)

Russia

Russia is also not averse to spying and has never skimped on military goals. It has a lot of talent, and its culture welcomes technological advancement. (Especially in the military sphere, where it keeps abreast of the US in a surprising number of areas despite much smaller investments.) Add a very strong and strategically minded ruler, and it might also be a top contender.

In addition to the typical authoritarian-country hurdles, Russia might become cash-strapped in the near future. Shales are abundant across the world, and shale oil / natural gas technologies are relatively cheap and easy to master, so hydrocarbon prices will probably not stabilize above $60 per barrel of oil for at least a decade. Since Russia exports mostly oil and natural gas and imports many of the goods it consumes, its future income might be severely limited. Still, much as in China, those in power there might see powerful AI and the Singularity as the means to everything they could dream of.

NSA and the Pentagon

If Russia and China are in a “winner takes all” race, the NSA and the Pentagon are bound to be in it, too. Their financing might be comparatively smaller, but they have access to the top technologies, much of the top talent and the best e-spying in existence. Their start might be later, but they can quickly catch up when needed. And they might be able to convince the company-type contenders, all of them American, to cooperate.

The downside is that if they create a powerful AI, it will be classified for a long time and directed exclusively to military and intelligence goals during that period. (In some scenarios, all American AI achievements might be classified too, and take the same path.) This would effectively delay the advent of the Singularity, maybe by decades, and thus increase the risk that malicious and/or clandestine contenders might spy out, steal or be the first to develop Singularity-capable AI technologies.

Great Britain

Great Britain will probably try to stick with the American efforts, building upon “Five Eyes” and other joint security initiatives. However, if that proves impossible or unsatisfactory, it might decide to go it alone. It has an excellent tradition in comparable projects and, through the British Commonwealth, access to plenty of talent. And its economy appears generally more resilient to crises than average – that is, the financing of such an initiative might be smaller, but it would be more reliable.

On the minus side, its resources are generally smaller than those of most state / intelligence-agency players. Also, this would almost surely be a military / intelligence project, classified and directed to the corresponding goals, with the effects described above.

France and Germany

Both are quickly becoming savvy about powerful AI and the Singularity. Both are angered by the big American IT companies and intelligence agencies, and both would hate to see a powerful-AI- or Singularity-enabled Russia or China. Germany has plenty of money, organizational experience and talent; France has more talent and security experience. So a classified joint European Singularity-related AI venture is possible, if not likely. Even if they start out separately, they will probably join efforts at a later point.

The downsides are similar to those of the NSA, Pentagon or Great Britain projects described above.

Other contenders

The biggest retail companies

Most retail companies invest heavily in data mining and statistics, and have done so for more than a decade. These areas are closely related to some key AI technologies, and are often counted among them. Detailed information on a person’s purchases has proved to yield a lot of insight into that person; such information on many people yields a lot of insight into society. Augmented with a big cache of other personal data, it can be used to build powerful market-research tools. The biggest retailers can easily buy such a cache, and often do.

Further development of market-research tools naturally turns them into intellect-augmentation (IA) tools, which can substantially increase profits. This tendency can easily cross the border into assisted-brain AI development, and stronger Singularity awareness might drive it further, into powerful AI development. (The retailers’ owners might be reluctant, but managers of big corporations enjoy a lot of executive discretion. Approaching the Singularity and the creation of powerful AI raises awareness of what these might bring, so the likelihood of such a development increases with time. Also, the retailers’ financial power and strong interest in increasing profits can create a very lucrative market for powerful AI tools, able to finance their rapid development.)

The data collection and resale companies

As mentioned above, big caches of personal data are a good base for developing powerful IA tools, and the personal-data market constantly demands such tools. So the data collection and resale companies try to develop them, and/or order them from IT companies. This is a no-nonsense market: even a small power advantage is quickly noticed and attracts buyers. Such relentless pressure for more power can push the products into the area of assisted-brain AI development, and then further into powerful AI. This market, though usually hidden from public view, contains enough money to finance that.

And that might be a dangerous development. The collection and trade of personal data is unethical and often not entirely legal, which makes this market attractive to entities willing to violate ethics and the law, and experienced in doing so. Such entities might not be the best custodians of Singularity-level AI power.

The social networks companies

These companies hold caches of personal data rivaled only by those of the biggest Internet search companies. (Most of the latter, being among the “big” AI contenders, are discussed above.) It is inevitable that they will be drawn into the personal-data market, dealing and/or competing with the data collection and resale companies, and will eventually take the same road. And most of them have enough in-house technical skill to try to develop AI.

The problem here is that the social-network user “market” aggregates very strongly. Facebook has cornered most of it; the other players are niche ones and don’t have nearly the same amount of personal data. That is, their base for developing powerful AI is narrower – using it for that will require bigger investments than a Facebook or a Google would need for the same. Still, any of the bigger among these companies might provide another race contender with key data and technical expertise in the social-networking area, and thus unexpectedly strengthen that contender’s position.

http://www.kurzweilai.net/forums/topic/the-singularity-race

MOTOBOT: the first autonomous motorcycle-riding humanoid robot

Cooler than Terminator and Robocop
October 29, 2015

MOTOBOT Ver. 1 (credit: Yamaha)

Yamaha introduced MOTOBOT Ver.1, the first autonomous motorcycle-riding humanoid robot, at the Tokyo Motor Show Wednesday (Oct. 28). A fusion of Yamaha’s motorcycle (an unmodified Yamaha YZF-R1M) and robotics technology, the future Motobot robot will ride an unmodified motorcycle on a racetrack at more than 200 km/h (124 mph), Yamaha says.

“We want to apply the fundamental technology and know-how gained in the process of this challenge to the creation of advanced rider safety and rider-support systems and put them to use in our current businesses, as well as using them to pioneer new lines of business,” says Yamaha in its press release.

Yamaha | New Yamaha MotoBot Concept Ver. 1

related:
The 44th Tokyo Motor Show 2015 – About the Yamaha Booth

http://www.kurzweilai.net/motobot-the-first-autonomous-motorcycle-riding-humanoid-robot

This robot will out-walk and out-run you one day

Human-like “spring-mass” design may lead to walking-running robot soldiers, fire fighters, factory workers, and home servants of the near future.
October 29, 2015

A walk in the park. Oregon State University engineers have successfully field-tested their walking robot, ATRIAS. (credit: Oregon State University)

Imagine robots that can walk and run like humans — or better than humans. Engineers at Oregon State University (OSU) and Technische Universität München may have achieved a major step in that direction with their “spring-mass” implementation of human and animal walking dynamics, allowing robots to maintain balance and efficiency of motion in difficult environments.

Studies done with OSU’s ATRIAS robot model, which incorporates the spring-mass theory, show that it’s three times more energy-efficient than any other human-sized bipedal robot.
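
For readers curious what the “spring-mass” template looks like in code, below is a minimal stance-phase simulation of a spring-loaded inverted pendulum, the class of model the OSU work builds on. It is an illustrative sketch only, not ATRIAS software; the mass, leg stiffness, time step and initial conditions are assumed values.

```python
import numpy as np

# Minimal stance-phase simulation of a spring-loaded inverted pendulum, the
# "spring-mass" template model referred to above (illustrative parameters,
# not ATRIAS's actual values).

m, k, L0, g = 80.0, 20000.0, 1.0, 9.81   # mass [kg], leg stiffness [N/m], rest leg length [m], gravity
foot = np.array([0.0, 0.0])              # stance foot pinned at the origin
pos = np.array([-0.2, 0.95])             # center-of-mass position (x, y) at touchdown [m]
vel = np.array([1.2, -0.3])              # center-of-mass velocity at touchdown [m/s]
dt = 1e-4                                # integration time step [s]

while True:
    leg = pos - foot
    length = np.linalg.norm(leg)
    if length >= L0:                     # leg back at rest length: take-off, stance phase ends
        break
    # The compressed leg spring pushes the body radially outward, like a pogo stick.
    spring_force = k * (L0 - length) * (leg / length)
    acc = spring_force / m + np.array([0.0, -g])
    vel = vel + acc * dt                 # explicit Euler integration
    pos = pos + vel * dt

print("take-off position:", pos, "take-off velocity:", vel)
```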

“I’m confident that this is the future of legged robotic locomotion,” said Jonathan Hurst, an OSU professor of mechanical engineering and director of the Dynamic Robotics Laboratory in the OSU College of Engineering. “We’ve basically demonstrated the fundamental science of how humans walk,” he said.

When further refined and perfected, walking and running robots may work in the armed forces, as fire fighters, in factories or doing ordinary household chores, he said. “This could become as big as the automotive industry,” Hurst added.

Wearable robots and prostheses too

Aspects of the locomotion technology may also assist people with disabilities, said Daniel Renjewski with the Technische Universität München, the lead author on the study published in IEEE Transactions on Robotics. “Robots are already used for gait training, and we see the first commercial exoskeletons on the market,” he said. “This enables us to build an entirely new class of wearable robots and prostheses that could allow the user to regain a natural walking gait.”

Topology and key technical features of the ATRIAS robot. ATRIAS has six electric motors powered by a lithium polymer battery. It can take impacts and retain its balance and walk over rough and bumpy terrain. Power electronics, batteries, and control computer are located inside the trunk. (credit: Daniel Renjewski et al./IEEE Transactions on Robotics)

In continued research, work will be done to improve steering, efficiency, leg configuration, inertial actuation, robust operation, external sensing, transmissions and actuators, and other technologies.

The work has been supported by the National Science Foundation, the Defense Advanced Research Projects Agency, and the Human Frontier Science Program.

Oregon State University | ATRIAS Bipedal Robot: Takes a Walk in the Park

Abstract of Exciting Engineered Passive Dynamics in a Bipedal Robot

A common approach in designing legged robots is to build fully actuated machines and control the machine dynamics entirely in software, carefully avoiding impacts and expending a lot of energy. However, these machines are outperformed by their human and animal counterparts. Animals achieve their impressive agility, efficiency, and robustness through a close integration of passive dynamics, implemented through mechanical components, and neural control. Robots can benefit from this same integrated approach, but a strong theoretical framework is required to design the passive dynamics of a machine and exploit them for control. For this framework, we use a bipedal spring-mass model, which has been shown to approximate the dynamics of human locomotion. This paper reports the first implementation of spring-mass walking on a bipedal robot. We present the use of template dynamics as a control objective exploiting the engineered passive spring-mass dynamics of the ATRIAS robot. The results highlight the benefits of combining passive dynamics with dynamics-based control and open up a library of spring-mass model-based control strategies for dynamic gait control of robots.

references:
Daniel Renjewski, Alexander Sprowitz, Andrew Peekema, Mikhail Jones, Jonathan Hurst. Exciting Engineered Passive Dynamics in a Bipedal Robot. IEEE Transactions on Robotics, 2015; 31 (5): 1244 DOI: 10.1109/TRO.2015.2473456
related:
“Spring-mass” technology heralds the future of walking robots
Comments (7)

October 30, 2015
by Windchill
Why do they never give these bipedal robots proper feet? I find that my feet are very well designed and very useful for stopping me from falling over.

October 30, 2015
by Bradley Steeg
You’re right, something like this will out-walk and out-run me one day. Can’t wait.

October 30, 2015
by melajara
The next step(!) after Boston Dynamics Big Dog.
There are still two crucial components missing in this model, a foot and an ankle linked to the leg. Without those, the robot is forced to resort to a noisy and not very energy efficient jumpy gait.

October 30, 2015
by Gorden Russell
That’s just what I was thinking. Of course I want a robot for “doing ordinary household chores…” but if it is going to be springing across my linoleum, I want it to replace the scratched tiles.

And when I’m walking my dog, I want a robot to come along to pick up after him when he poops…and a springing robot will be tossing the poops up out of the pooper scooper.

October 29, 2015
by richardrichard
Please do not address people directly like in cheap advertisements using scaring or challenging tactics: “This robot will out-walk and out-run you one day”

This is supposed to address interested and smart people, and not attract some random visitor to generate traffic. There is already enough low quality content on the web like this. Hyperbole, suggestiveness, survival of the fittest mindset, and similar writing style do not add to credibility or likability and will change the audience of people who read those news.

I read the news here in the past because I liked the neutral and distant point of view.
Please come back to this.

http://www.kurzweilai.net/this-robot-will-out-walk-and-out-run-you-one-day

Sleep disruptions similar to jet lag linked to memory and learning problems

Add good sleep habits to regular exercise and a healthy diet to maximize good memory, scientists advise
October 29, 2015

(credit: iStock)

Chemical changes in brain cells caused by disturbances in the body’s day-night cycle may lead to the learning and memory loss associated with Alzheimer’s disease, according to a University of California, Irvine (UCI) study.

People with Alzheimer’s often have problems with sleeping or may experience changes in their slumber schedule. Scientists do not completely understand why these disturbances occur.

“The issue is whether poor sleep accelerates the development of Alzheimer’s disease or vice versa,” said UCI biomedical engineering professor Gregory Brewer, affiliated with UCI’s Institute for Memory Impairments and Neurological Disorders. “It’s a chicken-or-egg dilemma, but our research points to disruption of sleep as the accelerator of memory loss.”

Inducing jet lag in mice causes low glutathione levels

To examine the link between learning and memory and circadian disturbances, his team altered normal light-dark patterns, with an eight-hour shortening of the dark period every three days for two groups of mice: young mouse models of Alzheimer’s disease (mice genetically modified to have AD symptoms) and normal mice.

The resulting jet lag greatly reduced activity in both sets of mice. The researchers found that in water maze tests, the AD mouse models had significant learning impairments that were absent in the AD mouse models not exposed to light-dark variations or in normal mice with jet lag. However, memory three days after training was impaired in both types of mice.

In follow-up tissue studies, they saw that jet lag caused a decrease in glutathione levels in the brain cells of all the mice. But these levels were much lower in the AD mouse models and corresponded to poor performance in the water maze tests. Glutathione is a major antioxidant that helps prevent damage to essential cellular components.

Glutathione deficiencies produce redox changes in brain cells. Redox reactions involve the transfer of electrons, which leads to alterations in the oxidation state of atoms and may affect brain metabolism and inflammation.

Brewer pointed to the accelerated oxidative stress as a vital component in Alzheimer’s-related learning and memory loss and noted that potential drug treatments could target these changes in redox reactions.

“This study suggests that clinicians and caregivers should add good sleep habits to regular exercise and a healthy diet to maximize good memory,” he said.

Study results appear online in the Journal of Alzheimer’s Disease.

AD has emerged as a global public health issue, currently estimated to affect 4.4% of persons 65 years old and 22% of those aged 90 and older, with an estimated 5.4 million Americans affected, according to the paper.

Abstract of Circadian Disruption Reveals a Correlation of an Oxidative GSH/GSSG Redox Shift with Learning and Impaired Memory in an Alzheimer’s Disease Mouse Model

It is unclear whether pre-symptomatic Alzheimer’s disease (AD) causes circadian disruption or whether circadian disruption accelerates AD pathogenesis. In order to examine the sensitivity of learning and memory to circadian disruption, we altered normal lighting phases by an 8 h shortening of the dark period every 3 days (jet lag) in the APPSwDI NOS2–/– model of AD (AD-Tg) at a young age (4-5 months), when memory is not yet affected compared to non-transgenic (non-Tg) mice. Analysis of activity in 12-12 h lighting or constant darkness showed only minor differences between AD-Tg and non-Tg mice. Jet lag greatly reduced activity in both genotypes during the normal dark time. Learning on the Morris water maze was significantly impaired only in the AD-Tg mice exposed to jet lag. However, memory 3 days after training was impaired in both genotypes. Jet lag caused a decrease of glutathione (GSH) levels that tended to be more pronounced in AD-Tg than in non-Tg brains and an associated increase in NADH levels in both genotypes. Lower brain GSH levels after jet lag correlated with poor performance on the maze. These data indicate that the combination of the environmental stress of circadian disruption together with latent stress of the mutant amyloid and NOS2 knockout contributes to cognitive deficits that correlate with lower GSH levels.

references:
LeVault, Kelsey, Tischkau, Shelley, Brewer, Gregory. Circadian Disruption Reveals a Correlation of an Oxidative GSH/GSSG Redox Shift with Learning and Impaired Memory in an Alzheimer’s Disease Mouse Model. Journal of Alzheimer’s Disease, vol. Preprint, no. Preprint, pp. 1-16, 2015; DOI: 10.3233/JAD-150026
related:
UCI study finds jet lag-like sleep disruptions spur Alzheimer’s memory, learning loss
Comments (1)

October 30, 2015
by almostvoid
I’ve flown a few long haul flights and shorter in between [10hrs vs 22+] and NEVER had jetlag. I don’t get how ppl given even the above suffer. i mean how hard is it to sit down? do nothing? read a book? watch the screen. I mean there is no bus-lag for long distances which I have done [Tehran to Istanbul] or train-lag [Australian trains are slow: Sydney – Brisbane 23 hrs] and no train lag. Something is not right in the science of jet-lag. A self perpetuating myth.

http://www.kurzweilai.net/sleep-disruptions-similar-to-jet-lag-linked-to-memory-and-learning-problems

Apple dips a toe into VR with U2 music video

Apple just stepped into the virtual reality game, kinda, via a(nother) partnership with U2. Apple Music and the Irish band collaborated on a special 360-degree video for Song for Someone produced by Vrse — a platform for original VR content. This might be the first consumer VR content with Apple’s name on it, but it’s familiar ground in many ways. The partnership between U2 and Apple is part of a promotional “experience” which includes a bus outside venues of its European Tour, where fans of U2 can view the video via Oculus Rift. Apple Music, on the other hand, is clearly trying new ways to win over some of those (many) non-paying users currently testing out its streaming service.

This isn’t the first time the iPhone-maker has teamed with the quartet. Famously, Tim Cook joined Bono on stage to give the group’s latest album free to all iTunes users. A gift that backfired, with many people angered by the (some felt) aggressive promotion. Then, of course, there was that iconic iPod advert. The Song for Someone video puts the viewer in the center of the band as it plays to an empty arena (as shown in the Instagram photo below). U2’s performance is blended with guest videos of the same song recorded by fans. This is hardly the first such video, but it joins a growing list of artists using the 360 video format to get their music (literally) in front of more eyes.

This time, at least, the product isn’t being foisted on anyone. Willing fans can hop on an experience bus which will be at various venues for the remainder of the European Tour. Those who just want to check out the video can dive in for free via Vrse on iPhone, Android or Samsung Gear. Does this mean Apple’s looking at VR more seriously? Has U2 finally found what it’s looking for? Anything’s possible. But, if Cupertino’s measured approach with other new technologies is anything to go by, don’t expect anything beyond video content for now.

http://www.engadget.com/2015/10/29/apple-vr-u2-video/

Apple Rejects ‘Gravity’ App That Turns Your iPhone 6s Into a Real Digital Scale

Developer Ryan McLeod and his friends have managed to turn the iPhone 6s and iPhone 6s Plus into a real-life digital scale. Thanks to 3D Touch, the ‘Gravity’ app would be able to weigh items up to 385 grams (about .85 lbs) within 1-3 grams of accuracy. Over the weekend, we also saw a demonstration of 3D Touch weighing plums, but that demo only showed which item was heavier.

To accomplish this creative use of 3D Touch, Ryan and his team first had to find a way to calibrate the app’s scale. Apple’s 3D Touch API measures touch force on a custom scale from 0.00 to “maximum possible force,” with 1.00 being an average touch. To calibrate the app, the team needed to find a common household item that was conductive and had finger-like capacitance. It turns out a spoon was the perfect solution, since it was able to hold and balance the objects as well.
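
As a rough illustration of the calibration arithmetic described above (a normalized force reading, a tare with the empty spoon, and a known reference mass to set the scale), here is a minimal Python sketch. It is not the Gravity app’s actual code, which runs on iOS; the 100-gram reference mass and the example readings are assumed values.

```python
# Hypothetical calibration sketch (not the Gravity app's actual code).
# Assumes the screen reports a normalized touch force that scales roughly
# linearly with the weight resting on the spoon, as described above.

class SpoonScale:
    def __init__(self, reference_grams: float = 100.0):
        self.reference_grams = reference_grams   # known calibration mass placed on the spoon (assumed)
        self.zero_force = 0.0                    # reading with the empty spoon
        self.reference_force = None              # reading with the reference mass on the spoon

    def tare(self, force_reading: float) -> None:
        """Record the normalized force of the empty spoon resting on the screen."""
        self.zero_force = force_reading

    def calibrate(self, force_reading: float) -> None:
        """Record the normalized force with the known reference mass on the spoon."""
        self.reference_force = force_reading

    def grams(self, force_reading: float) -> float:
        """Convert a normalized force reading into grams by linear interpolation."""
        if self.reference_force is None:
            raise RuntimeError("call calibrate() with a known mass first")
        scale = self.reference_grams / (self.reference_force - self.zero_force)
        return (force_reading - self.zero_force) * scale

# Example: tare at a reading of 0.15, calibrate 100 g at 0.55; a reading of
# 0.35 then corresponds to roughly 50 g.
scale = SpoonScale()
scale.tare(0.15)
scale.calibrate(0.55)
print(round(scale.grams(0.35)))   # -> 50
```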

Unfortunately, Apple rejected Gravity for having a “misleading description.”

Gravity unfortunately got rejected for having a misleading description and we immediately knew why: There are a couple dozen “scale” apps on the app store. The thing is that 80% of them are joke apps, “for entertainment purposes only” and the other 20% try to weigh things using the tilt of your iPhone once it’s been balanced on top of an inflated bag and calibrated using a single coin. Gravity was most likely confused with the prank apps and rejected for claiming it was a real working scale.

Ryan made a demo video of the application and submitted an appeal to the Apple App Store review team, but unfortunately Apple did not approve the app. Apple’s reasoning for the rejection was that the concept of a scale app was not appropriate for the App Store.

As Ryan notes in his blog post, there could be several reasons behind Apple’s rejection. People could damage iPhone 6s screens by weighing items that are too heavy, although the Gravity app did flash a red warning indicator when an object exceeded 385 grams. Additionally, Apple could see Gravity’s odd use of the 3D Touch API as misuse of the API.

Check out the video below of the Gravity app in action and let us know what you think in the comments.

Read More via TheVerge

http://www.iclarified.com/52264/apple-rejects-gravity-app-that-turns-your-iphone-6s-into-a-real-digital-scale-video

Apple TV: What you need to know about Apple’s latest TV streaming box

It turns out that Apple’s streaming-TV box — aptly named Apple TV — isn’t just for streaming anymore. Its latest incarnation, which ships this week, offers on the big screen just about anything you could previously only do on an iPhone or iPad.

Whether that’s good may depend on whether you really want to buy shoes, browse home listings or read comic books on your TV. The new Apple TV looks to be a capable device for those purposes, although it’s not flawless. Its streaming-TV features also trump those of its predecessor.

In Canada, the new Apple TV will set you back $199, or $269 for a version with extra storage. Apple will still sell the old version for $89. Neither requires an iPhone or iPad, although either iDevice can simplify the Apple TV setup process.

The basics

Apple TV has been a dependable streamer, but until now its repertoire was limited to a few dozen services. Sure, these included Netflix, Hulu and HBO. But Apple didn’t let you add other channels — say, competitive videogame play from Twitch.tv — on your own.

That’s all changed. The new Apple TV features an iPhone-like app store that lets you choose your own streaming services. And it’s no longer pushy about steering you to iTunes and other Apple services. You can easily customize the home screen with your favourites.

Apple TV app store
For the newly overhauled Apple TV, Apple is ditching its curated approach and will offer an app store similar to the iPhone’s. (The Associated Press)

Video quality on the new Apple TV maxes out at full high definition, known technically as 1080p. That should be plenty for most people. Video enthusiasts may complain that it doesn’t support a higher-quality video standard called ultra-high definition or 4K, as several other streaming boxes do. But there aren’t many 4K TVs or much programming for them available yet.

The Apple TV remote doesn’t have a headphone jack, which other streaming devices like the Roku 3 and 4 and the Nvidia Shield offer to spare your family and roommates late at night. Instead, Apple TV supports Bluetooth wireless headphones. Although you need to buy those separately, I prefer them because it can be tricky doing chores with a remote dangling from your headphone cords.

It’s not yet clear whether you’ll be able to stream video from Amazon and Google Play. Both companies have competing video stores, and one sticking point could be the cut Apple takes on in-app digital sales. Other major services, including Google’s YouTube, are expected on the Apple TV.

Innovations

The new Apple TV enables voice searches using the Siri virtual assistant. Request Seinfeld or Jennifer Lawrence, and Apple TV will look through catalogs for iTunes, Hulu, Netflix, HBO and Showtime, with more to come. You can even ask for “good documentaries to watch.”

Digital Life Apple TV Review
This screen shot made Tuesday, Oct. 27, 2015 shows search results when using the new Apple TV’s Siri virtual assistant to find a specific episode of the sitcom “How I Met Your Mother,” on Netflix. (The Associated Press)

Although similar capabilities are available on other devices, Apple TV goes further in a few ways:

The remote replaces traditional rewind and forward buttons with a laptop-style trackpad. By sliding left and right, you control playback and navigate the on-screen keyboard more quickly. Sliding down gets you settings and show details, when available. The remote also lets you control the TV’s power and volume directly, something I’ve seen only with TiVo video players.
You can control playback by asking Siri to rewind 45 seconds or jump ahead five minutes, though some services won’t let you forward past commercials. Saying “What did she say?” will rewind video 15 seconds and briefly turn on closed captioning, when available. It works fully with iTunes for now, but the closed-captioning part doesn’t work with all third-party services yet.
You can ask Siri for a specific episode, such as the How I Met Your Mother episode with Katie Holmes. Guest stars tend to trip up rival devices.
Apple TV and remote
The remote replaces traditional rewind and forward buttons with a laptop-style trackpad. By sliding left and right, you control playback and navigate the on-screen keyboard more quickly. Sliding down gets you settings and show details, when available. (Apple/Associated Press)

Beyond streaming

Siri offers weather, stocks and sports information. It was great for tracking Tuesday’s World Series opener without watching the game. This feature isn’t unique to Apple TV, but unlike the competition, Apple TV feeds you info without interrupting your video by sliding up results from the bottom of the screen.

Digital Life Apple TV Review
This screen shot made Wednesday, Oct. 28, 2015 shows the final score of game 1 of baseball’s World Series between the New York Mets and the Kansas City Royals. Users of the new Apple TV can ask Siri for weather, stocks and sports information. (The Associated Press)

I had to rephrase or repeat my questions a few times, especially if I was speaking quickly. As long as I enunciated clearly, results were mostly satisfactory. Apple TV’s version of Siri, however, won’t handle general web searches.

Apple TV catches up with rivals in enabling games. The remote has sensors that let you navigate spaceships and swing baseball bats by moving it around. But a bigger potential lies in bringing other apps to the big screen.

Digital Life Apple TV Review
Jon Carter, with Harmonix, shows off the company’s Beat Sports game for the new Apple TV at an Apple event. The device catches up with rivals in enabling games. The remote has sensors that let you navigate spaceships and swing baseball bats by moving it around. (Eric Risberg/Associated Press)

You can browse homes to buy through Zillow and places to stay on vacation through Airbnb. Images on the big TV gave me a better sense of these properties than phone browsing would. You can also shop through Gilt and QVC.

Room to grow

Apple still needs to persuade developers to make more apps that really exploit the larger, and often shared, TV screen. Many of the apps now available are limited to one user profile or account, making them difficult for others to use.

It would also be nice for Apple TV to work better with payment services. You can easily buy videos and games with your iTunes account, but non-digital products are another story. Airbnb, for instance, will let you “favorite” places to stay, but you’ll need a phone or computer to book a room. It’s not exactly the relaxed, couch-potato experience you expect from TV.

Generally speaking, though, the new Apple TV has taken an important first step.

http://www.cbc.ca/news/technology/apple-tv-review-1.3293816

What happens in the brain when we learn

Findings could enhance teaching methods and lead to treatments for cognitive problems
October 28, 2015

Isolated cells in the visual cortex of a mouse (credit: Alfredo Kirkwood/JHU)

A Johns Hopkins University-led research team has proven a working theory that explains what happens in the brain when we learn, as described in the current issue of the journal Neuron.

More than a century ago, Pavlov figured out that dogs fed after hearing a bell eventually began to salivate when they heard the bell ring. The team looked into the question of how Pavlov’s dogs (in “classical conditioning”) managed to associate an action with a delayed reward to create knowledge. For decades, scientists had a working theory of how it happened, but the team is now the first to prove it.

“If you’re trying to train a dog to sit, the initial neural stimuli, the command, is gone almost instantly — it lasts as long as the word sit,” said neuroscientist Alfredo Kirkwood, a professor with the university’s Zanvyl Krieger Mind/Brain Institute. “Before the reward comes, the dog’s brain has already turned to other things. The mystery was, ‘How does the brain link an action that’s over in a fraction of a second with a reward that doesn’t come until much later?’ ”

Eligibility traces

The working theory — which Kirkwood’s team has now validated experimentally — is that invisible “synaptic eligibility traces” effectively tag the synapses activated by the stimuli so that the learning can be cemented with the arrival of a reward. The reward is a neuromodulator* (neurochemical) that floods the dog’s brain with “good feelings.” Though the brain has long since processed the “sit” command, eligibility traces in the synapse respond to the neuromodulators, prompting a lasting synaptic change, a.k.a. “learning.”
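
The idea can be summarized as a “three-factor” learning rule: pre- and postsynaptic co-activity leaves a decaying tag on a synapse, and a later reward signal (the neuromodulator) converts whatever tag survives into a lasting weight change. The Python sketch below is a toy illustration of that rule under assumed constants; it is not the model used in the study.

```python
import numpy as np

# Toy "three-factor" learning sketch of synaptic eligibility traces
# (illustrative only; not the study's model, and all constants are assumed).

rng = np.random.default_rng(0)
n_pre, n_post = 20, 5
w = rng.normal(0.0, 0.1, size=(n_post, n_pre))   # synaptic weights
trace = np.zeros_like(w)                          # eligibility traces (synaptic "tags")

decay = 0.9   # how quickly an unused tag fades each time step
lr = 0.05     # learning rate applied when a reward arrives

for t in range(100):
    pre = (rng.random(n_pre) < 0.2).astype(float)   # presynaptic activity (the "stimulus")
    post = (w @ pre > 0.5).astype(float)            # postsynaptic response
    # 1) Co-activity tags the synapse but does not yet change its weight.
    trace = decay * trace + np.outer(post, pre)
    # 2) A delayed, occasional reward (neuromodulator) converts surviving tags
    #    into lasting weight changes (LTP here; a negative signal would give LTD).
    reward = 1.0 if t % 10 == 9 else 0.0
    w += lr * reward * trace
```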

The team was able to prove the eligibility-traces theory by isolating cells in the visual cortex of a mouse. When they stimulated the axon of one cell with an electrical impulse, they sparked a response in another cell. By doing this repeatedly, they mimicked the synaptic response between two cells as they process a stimulus and create an eligibility trace.

When the researchers later flooded the cells with neuromodulators, simulating the arrival of a delayed reward, the response between the cells strengthened (“long-term potentiation”) or weakened (“long-term depression”), showing that the cells had “learned” and were able to do so because of the eligibility trace.

“This is the basis of how we learn things through reward,” Kirkwood said, “a fundamental aspect of learning.”

In addition to a greater understanding of the mechanics of learning, these findings could enhance teaching methods and lead to treatments for cognitive problems, the researchers suggest.

Scientists at the University of Texas at Houston and the University of California, Davis were also involved in the research, which was supported by grants from JHU’s Science of Learning Institute and National Institutes of Health.

* The neuromodulators tested were norepinephrine, serotonin, dopamine, and acetylcholine, all of which have been implicated in cortical plasticity (ability to grow and form new connections to other neurons).

Abstract of Distinct Eligibility Traces for LTP and LTD in Cortical Synapses

In reward-based learning, synaptic modifications depend on a brief stimulus and a temporally delayed reward, which poses the question of how synaptic activity patterns associate with a delayed reward. A theoretical solution to this so-called distal reward problem has been the notion of activity-generated “synaptic eligibility traces,” silent and transient synaptic tags that can be converted into long-term changes in synaptic strength by reward-linked neuromodulators. Here we report the first experimental demonstration of eligibility traces in cortical synapses. We demonstrate the Hebbian induction of distinct traces for LTP and LTD and their subsequent timing-dependent transformation into lasting changes by specific monoaminergic receptors anchored to postsynaptic proteins. Notably, the temporal properties of these transient traces allow stable learning in a recurrent neural network that accurately predicts the timing of the reward, further validating the induction and transformation of eligibility traces for LTP and LTD as a plausible synaptic substrate for reward-based learning.

references:
Kaiwen He, Marco Huertas, Su Z. Hong, XiaoXiu Tie, Johannes W. Hell, Harel Shouval, Alfredo Kirkwood. Distinct Eligibility Traces for LTP and LTD in Cortical Synapses. Neuron (2015); DOI: 10.1016/j.neuron.2015.09.037
related:
Johns Hopkins Solves a Longtime Puzzle of How We Learn
How Does Classical Conditioning Work?

http://www.kurzweilai.net/what-happens-in-the-brain-when-we-learn

Controlling acoustic properties with algorithms and computational methods

October 28, 2015

A “zoolophone” with animal shapes automatically created using a computer algorithm. The tone of each key is comparable to those of professionally made instruments as a demonstration of an algorithm for computationally designing an object’s vibrational properties and sounds. (Changxi Zheng/Columbia Engineering)

Computer scientists at Columbia Engineering, Harvard, and MIT have demonstrated that acoustic properties — both sound and vibration — can be controlled by 3D-printing specific shapes.

They designed an optimization algorithm and used computational methods and digital fabrication to alter the shape of 2D and 3D objects, creating what looks to be a simple children’s musical instrument — a xylophone with keys in the shape of zoo animals.

Practical uses

“Our discovery could lead to a wealth of possibilities that go well beyond musical instruments,” says Changxi Zheng, assistant professor of computer science at Columbia Engineering, who led the research team.

“Our algorithm could lead to ways to build less noisy computer fans and bridges that don’t amplify vibrations under stress, and advance the construction of micro-electro-mechanical resonators whose vibration modes are of great importance.”

Zheng, who works in the area of dynamic, physics-based computational sound for immersive environments, wanted to see if he could use computation and digital fabrication to actively control the acoustical property, or vibration, of an object.

Zheng’s team decided to focus on simplifying the slow, complicated, manual process of designing “idiophones” — musical instruments that produce sounds through vibrations in the instrument itself, not through strings or reeds.

The surface vibration and resulting sounds depend on the idiophone’s shape in a complex way, so designing the shapes to obtain desired sound characteristics is not straightforward, and their forms have so far been limited to well-understood designs such as bars that are tuned by careful drilling of dimples on the underside of the instrument.

Optimizing sound properties

To demonstrate their new technique, the team settled on building a “zoolophone,” a metallophone with playful animal shapes (a metallophone is an idiophone made of tuned metal bars that can be struck to make sound, such as a glockenspiel).

Their algorithm optimized and 3D-printed the instrument’s keys in the shape of colorful lions, turtles, elephants, giraffes, and more, modelling the geometry to achieve the desired pitch and amplitude of each part.

“Our zoolophone’s keys are automatically tuned to play notes on a scale with overtones and frequency of a professionally produced xylophone,” says Zheng, whose team spent nearly two years on developing new computational methods while borrowing concepts from computer graphics, acoustic modeling, mechanical engineering, and 3D printing.

“By automatically optimizing the shape of 2D and 3D objects through deformation and perforation, we were able to produce such professional sounds that our technique will enable even novices to design metallophones with unique sound and appearance.”

3D metallophone cups automatically created by computers (credit: Changxi Zheng/Columbia Engineering)

The zoolophone represents fundamental research into understanding the complex relationships between an object’s geometry and its material properties, and the vibrations and sounds it produces when struck.

While previous algorithms attempted to optimize either amplitude (loudness) or frequency, the zoolophone required optimizing both simultaneously to fully control its acoustic properties. Creating realistic musical sounds required more work to add in overtones, secondary frequencies higher than the main one that contribute to the timbre associated with notes played on a professionally produced instrument.

Looking for the optimal shape that produces the desired sound when struck proved to be the core computational difficulty: the search space for optimizing both amplitude and frequency is immense. To increase the chances of finding the optimal shape, Zheng and his colleagues developed a new, fast stochastic optimization method, which they called Latin Complement Sampling (LCS).

They input a shape and user-specified frequency and amplitude spectra (for instance, users can specify which shapes produce which notes) and, from that information, optimized the shape of the objects through deformation and perforation to produce the desired sounds. LCS outperformed the alternative optimization methods and can be used in a variety of other problems.
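
The article does not detail how Latin Complement Sampling works, but the overall search loop it plugs into can be sketched generically: sample candidate shape parameters, score each candidate against the user-specified frequency and amplitude targets, and keep the best. In the illustrative Python below, simulate_key is a stand-in for a real modal-analysis solver, plain random sampling replaces LCS, and all numbers are assumed.

```python
import numpy as np

# Generic stochastic shape search (illustrative only; plain random sampling
# stands in for the paper's Latin Complement Sampling, and simulate_key is a
# placeholder for a real modal-analysis / vibration solver).

def simulate_key(params: np.ndarray) -> tuple[float, float]:
    """Placeholder physics: map two shape parameters to (frequency in Hz, amplitude)."""
    freq = 400.0 + 200.0 * params[0] - 50.0 * params[1]
    amp = 1.0 / (0.1 + abs(params[1] - 0.5))
    return freq, amp

def cost(params: np.ndarray, target_freq: float, target_amp: float,
         w_freq: float = 1.0, w_amp: float = 0.1) -> float:
    """Penalize deviation from BOTH the target frequency and the target amplitude."""
    freq, amp = simulate_key(params)
    return w_freq * (freq - target_freq) ** 2 + w_amp * (amp - target_amp) ** 2

rng = np.random.default_rng(1)
best_params, best_cost = None, np.inf
for _ in range(5000):                          # sample candidate shapes at random
    candidate = rng.random(2)                  # two shape parameters in [0, 1]
    c = cost(candidate, target_freq=523.25, target_amp=5.0)   # target note: C5
    if c < best_cost:
        best_params, best_cost = candidate, c

print("best shape parameters:", best_params, "cost:", best_cost)
```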

“Acoustic design of objects today remains slow and expensive,” Zheng notes. “We would like to explore computational design algorithms to improve the process for better controlling an object’s acoustic properties, whether to achieve desired sound spectra or to reduce undesired noise. This project underscores our first step toward this exciting direction in helping us design objects in a new way.”

Zheng, whose previous work in computer graphics includes synthesizing realistic sounds that are automatically synchronized to simulated motions, has already been contacted by researchers interested in applying his approach to micro-electro-mechanical systems (MEMS), in which vibrations filter RF signals.

Their work—“Computational Design of Metallophone Contact Sounds”—will be presented at SIGGRAPH Asia on November 4 in Kobe, Japan.

The work at Columbia Engineering was supported in part by the National Science Foundation (NSF) and Intel, at Harvard and MIT by NSF, Air Force Research Laboratory, and DARPA.

http://www.kurzweilai.net/controlling-acoustic-properties-with-algorithms-and-computational-methods
