Month: January 2018
BRETT: The Easily Teachable Robot
Apple Watch Series 3 Honest Review! Pros and Cons!
http://www.ibtimes.co.in/zte-eyeing-launch-5g-ready-smartphone-us-by-2018-end-756889
ZTE eyeing to launch 5G-ready smartphone in US by 2018-end
Chinese smartphone maker ZTE is serious about its mobile business in the United States. The company is aiming to launch its first 5G smartphone there in about a year.
Lixin Cheng, the company's chief executive officer, recently confirmed in an interview with Bloomberg that ZTE plans to launch a 5G-enabled smartphone in the United States at the end of this year or in early 2019.
The plans are not yet firm, however, because they depend on the availability of fifth-generation networks and on the supply of the necessary chipsets. Cheng added that, apart from the 5G-ready smartphone, ZTE may also add 5G tablets and a wireless-internet hub to its portfolio.
According to Bloomberg, AT&T had earlier said that it plans to provide 5G phone service in a dozen cities this year. Chipset manufacturer Qualcomm has also talked about the new chips it will sell to enable this faster wireless connectivity. Without Qualcomm's chips, ZTE and other phone makers will struggle to deliver 5G devices.
For now, it seems that ZTE will be the first company to introduce 5G-ready handsets in the United States. Rivals like Samsung and Apple could still take the lead, even though they haven't announced any plans for 5G smartphones this year. Apple, moreover, is already embroiled in a lawsuit with Qualcomm, so it remains to be seen what alternative it can find for its 5G chipsets.
http://www.kurzweilai.net/scientists-map-mammalian-neural-microcircuits-in-precise-detail
Scientists map mammalian neural microcircuits in precise detail
January 12, 2018
Nanoengineered electroporation microelectrodes (NEMs) allow for improved current distribution and electroporation effectiveness by reducing peak voltage regions (to avoid damaging tissue). (left) Cross-section of NEM model, illustrating the total effective electroporation volume and its distribution of the voltage around the pipette tip, at a safe current of 50 microamperes. (Scale bar = 5 micrometers.) (right) A five-hole NEM after successful insertion into brain tissue, imaged with high-resolution focused ion beam (FIB). (Scale bar = 2 micrometers) (credit: D. Schwartz et al./Nature Communications)
Neuroscientists at the Francis Crick Institute have developed a new technique to map electrical microcircuits* in the brain in far more detail than existing techniques*, which are limited to tiny sections of the brain (or remain confined to simpler model organisms, like zebrafish).
In the brain, groups of neurons that connect up in microcircuits help us process information about things we see, smell and taste. Knowing how many neurons and other types of cells make up these microcircuits would give scientists a deeper understanding of how the brain computes complex information.
Nanoengineered microelectrodes
The researchers developed a new design called “nanoengineered electroporation** microelectrodes” (NEMs). They were able to use an NEM to map out all 250 cells that make up a specific microcircuit in a part of a mouse brain that processes smell (known as the “olfactory bulb glomerulus”) in a horizontal slice of the olfactory bulb — something never before achieved.
To do that, the team created a series of tiny pores (holes) near the end of a micropipette using nano-engineering tools. The new design distributes the electrical current uniformly over a wider area (up to a radius of about 50 micrometers — the size of a typical neural microcircuit), with minimal cell damage.
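To get an intuitive feel for why spreading the current over several pores lowers the peak exposure near the tip, here is a back-of-the-envelope sketch that treats each pore as a point current source in tissue. It is purely illustrative and is not the finite element model the researchers used; the tissue resistivity and distances are assumed round numbers, and only the 50-microampere total current comes from the figure caption above.

```python
import math

# Illustrative point-source model of current spreading in tissue
# (not the paper's finite-element model): for a small pore passing
# current I into a medium of resistivity rho, the current density at
# distance r is roughly J = I / (4 * pi * r^2).

RHO_TISSUE = 3.0      # ohm*m, rough order of magnitude for brain tissue (assumed)
I_TOTAL = 50e-6       # 50 microamperes, the safe current quoted in the caption

def current_density(i_amps, r_m):
    """Current density (A/m^2) at distance r from a point source."""
    return i_amps / (4 * math.pi * r_m ** 2)

r_near = 1e-6    # 1 micrometer from a pore (peak region, assumed)
r_far = 50e-6    # roughly the radius of a typical microcircuit

# Single-hole capillary: all current exits one pore.
j_single_near = current_density(I_TOTAL, r_near)

# Five-hole NEM: the same total current is split across five pores,
# so the peak density near any one pore drops about five-fold...
j_nem_near = current_density(I_TOTAL / 5, r_near)

# ...while at microcircuit scale (~50 um) the contributions add back up
# to roughly the same value as for a single source.
j_far = current_density(I_TOTAL, r_far)

print(f"Peak density near a single-hole tip: {j_single_near:.0f} A/m^2")
print(f"Peak density near one NEM pore:      {j_nem_near:.0f} A/m^2")
print(f"Density at 50 um (either design):    {j_far:.1f} A/m^2")
```

In this simplified picture, splitting the current across five pores cuts the damaging near-field peak roughly five-fold while leaving the labeling field at microcircuit scale essentially unchanged, which is the qualitative effect the NEM design exploits.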
The researchers tested the NEM technique with a specific microcircuit, the olfactory bulb glomerulus (which detects smells). They were able to identify detailed, long-range, complex anatomical features (scale bar = 100 micrometers). (White arrows identify parallel staining of vascular structures.) (credit: D. Schwartz et al./Nature Communications)
Seeing 100% of the cells in a brain microcircuit for the first time
Unlike current methods, the team was able to stain up to 100% of the cells in the microcircuit they were investigating, according to Andreas Schaefer, who led the research, which was published in open-access Nature Communications today (Jan. 12, 2018).
“As the brain is made up of repeating units, we can learn a lot about how the brain works as a computational machine by studying it at this [microscopic] level,” he said. “Now that we have a tool of mapping these tiny units, we can start to interfere with specific cell types to see how they directly control behavior and sensory processing.”
The work was conducted in collaboration with researchers at the Max-Planck-Institute for Medical Research in Heidelberg, Heidelberg University, Heidelberg University Hospital, University College London, the MRC National Institute for Medical Research, and Columbia University Medical Center.
* Scientists currently use color-tagged viruses or charged dyes with applied electroporation current to stain brain cells. These methods, using a glass capillary with a single hole, are limited to low current (higher current could damage tissue), so they can only identify a limited area of a microcircuit.
** Electroporation is a microbiology technique that applies an electrical field to cells to increase the permeability (ease of penetration) of the cell membrane, allowing (in this case) fluorophores (fluorescent, or glowing dyes) to penetrate into the cells to label (identify parts of) the neural microcircuits (including the “inputs” and “outputs”) under a microscope.
Abstract of Architecture of a mammalian glomerular domain revealed by novel volume electroporation using nanoengineered microelectrodes
Dense microcircuit reconstruction techniques have begun to provide ultrafine insight into the architecture of small-scale networks. However, identifying the totality of cells belonging to such neuronal modules, the “inputs” and “outputs,” remains a major challenge. Here, we present the development of nanoengineered electroporation microelectrodes (NEMs) for comprehensive manipulation of a substantial volume of neuronal tissue. Combining finite element modeling and focused ion beam milling, NEMs permit substantially higher stimulation intensities compared to conventional glass capillaries, allowing for larger volumes configurable to the geometry of the target circuit. We apply NEMs to achieve near-complete labeling of the neuronal network associated with a genetically identified olfactory glomerulus. This allows us to detect sparse higher-order features of the wiring architecture that are inaccessible to statistical labeling approaches. Thus, NEM labeling provides crucial complementary information to dense circuit reconstruction techniques. Relying solely on targeting an electrode to the region of interest and passive biophysical properties largely common across cell types, this can easily be employed anywhere in the CNS.
The Conceptual Approaches of the Emergence Theory Analogue to General Relativity
https://www.iol.co.za/motoring/industry-news/in-the-future-man-and-machine-will-merge-12702373
‘In the future, man and machine will merge’
Johannesburg – Scientists and futurists are predicting that the next stage of human evolution will be to technologically enhance ourselves by merging with machines.
In his book The Singularity is Near, author and inventor Ray Kurzweil predicts that as early as 2045 (that’s just 27 years away), humans enhanced by artificial intelligence will expand their intelligence by a factor of trillions – once we perfect the human-machine interface, that is.
A possible stepping stone to this seemingly far-fetched ideal is Nissan’s research into vehicles that can interpret signals from the driver’s brain, redefining how people interact with their cars.
Self-driving cars will become commonplace a lot sooner than 2045, and the common view is that they will suck all the fun out of driving. Nissan's Brain-to-Vehicle technology, however, promises to speed up drivers' reaction times and lead to cars that keep adapting to make driving more enjoyable.
Nissan demonstrated the capabilities of this exclusive technology at the recent Consumer Electronics Show in Las Vegas, Nevada. B2V is the latest development in Nissan Intelligent Mobility, the company’s vision for transforming how cars are driven, powered and integrated into society.
Detecting discomfort
“When most people think about autonomous driving, they have a very impersonal vision of the future, where humans relinquish control to the machines,” said a Nissan spokesman. “Yet B2V technology does the opposite, by using signals from their own brain to make the drive even more exciting and enjoyable.”
The breakthrough from Nissan is the result of research into using brain decoding technology to predict a driver’s actions and detect discomfort. By catching signs that the driver’s brain is about to initiate a movement, such as turning the steering wheel or pushing the accelerator pedal, driver assist technologies can begin the action more quickly. This can improve reaction times and enhance manual driving.
The driver wears a headset that measures brain wave activity, which is then analyzed by autonomous systems. By anticipating intended movement, the systems can take actions – such as turning the steering wheel or slowing the car – 0.2 to 0.5 seconds faster than the driver, while remaining largely imperceptible. Moreover, by detecting and evaluating driver discomfort, artificial intelligence can change the driving configuration or driving style when in autonomous mode.
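Nissan has not published the algorithms behind B2V, but the general idea of spotting a movement-preparation signature in the brain signal before the overt action, and triggering the assist early, can be sketched in a few lines. The following Python toy is hypothetical: the sample rate, threshold and readiness-potential-style drift are invented for illustration only.

```python
import numpy as np

# Toy sketch of anticipating a driver's action from an EEG-like signal
# (hypothetical; Nissan has not published its B2V algorithms). A slow
# negative drift ("readiness potential") often precedes voluntary
# movement; detecting it early buys the assist system a fraction of a second.

FS = 250              # sample rate in Hz (assumed headset spec)
WINDOW_S = 0.4        # look-back window for the drift estimate
THRESHOLD_UV = -4.0   # drift threshold in microvolts (illustrative)

def detect_intent(eeg_uv, fs=FS):
    """Return the sample index at which movement intent is flagged, or None."""
    win = int(WINDOW_S * fs)
    for i in range(win, len(eeg_uv)):
        # A sustained negative drift over the trailing window is treated
        # as a movement-preparation signature.
        drift = eeg_uv[i] - eeg_uv[i - win]
        if drift < THRESHOLD_UV:
            return i
    return None

# Simulated trace: flat noise, then a readiness-potential-like negative
# ramp starting 0.5 s before the (simulated) steering action at t = 2.0 s.
t = np.arange(0, 2.0, 1 / FS)
eeg = np.random.normal(0, 0.5, t.size)
eeg[t >= 1.5] -= np.linspace(0, 8, (t >= 1.5).sum())

idx = detect_intent(eeg)
if idx is not None:
    lead_time = 2.0 - idx / FS
    print(f"Intent flagged {lead_time:.2f} s before the action")
```

Run as written, the toy flags intent roughly a quarter of a second before the simulated action, in the same ballpark as the 0.2 to 0.5 seconds Nissan quotes, though a real system would need far more robust signal processing.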
Other possible uses include adjusting the vehicle’s cabin environment. For example, the technology can use augmented reality to adjust what the driver sees and create a more relaxing environment.
“The potential applications of the technology are incredible,” the spokesman said. “This research will be a catalyst for more Nissan innovation inside our vehicles in the years to come.”
Also at the Consumer Electronics Show in Las Vegas, Mercedes-Benz demonstrated its "Ask Mercedes" app, which allows drivers to ask a virtual assistant for help on how to operate car features. The new service makes use of artificial intelligence (AI) and combines a chatbot with augmented reality functions.
Users can type questions into their smartphones or simply ask them by using the voice recognition software. Ask the assistant questions like “how can I link my mobile phone with my car” or “what is Sport+”, and it provides the answer with the aid of augmented reality.
Controls and displays in a new E- or S-Class can be scanned using a smartphone camera, and the system is able to explain the displayed functions to the customer. Aim your smartphone at the dashboard and numbers will be superimposed onto the controls in the camera image. Then simply tap the number to see more detailed information.
It's a great help in modern cars, which are getting ever more loaded with high-tech features that can be complicated to use. It sure beats jabbing away at the touchscreen of your new car, hunting through hidden sub-menus just to figure out how to adjust the bass on the sound system. And it's a lot quicker than thumbing through a thick owner's manual.
The “Ask Mercedes” app understands naturally spoken language and questions formulated in a wide variety of ways. “How can I drive more economically?” is understood just as easily as “What is Dynamic Select”, for example. But even questions about the Mercedes-Benz brand and the Daimler company, such as “Who is the head of Mercedes?”, are also answered.
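Daimler has not disclosed how the assistant's language understanding works under the hood, but the basic trick of mapping differently worded questions onto the same answer can be illustrated with a minimal keyword-overlap matcher. The Python sketch below is purely hypothetical and stands in for the far more capable natural-language stack a production chatbot would use.

```python
# Minimal sketch of mapping differently phrased questions to one answer
# via keyword matching (purely illustrative; not Daimler's actual
# "Ask Mercedes" implementation). Keyword sets and answer texts are invented.

ANSWERS = {
    frozenset({"link", "phone", "car"}):
        "Open the Bluetooth settings on your phone, then select the car ...",
    frozenset({"sport+", "dynamic", "select"}):
        "Sport+ is the sportiest DYNAMIC SELECT drive mode ...",
    frozenset({"drive", "economically"}):
        "Use the ECO drive mode and watch the efficiency display ...",
}

def answer(question: str) -> str:
    words = set(question.lower().replace("?", "").split())
    # Pick the intent whose keyword set overlaps the question the most.
    best = max(ANSWERS, key=lambda keys: len(keys & words))
    if len(best & words) == 0:
        return "Sorry, I don't know that one yet."
    return ANSWERS[best]

print(answer("How can I link my mobile phone with my car?"))
print(answer("What is Sport+?"))
```

Even this crude overlap score sends "How can I link my mobile phone with my car?" and "What is Sport+?" to the right answers; a real assistant layers intent classification, entity extraction and dialogue management on top of the same basic mapping.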
Artificial intelligence is truly revolutionising the car, and in just a few years it should be common for all cars to have AI assistants for voice, gesture and facial recognition as well as augmented reality.
https://www.ctvnews.ca/health/could-lack-of-sunlight-be-why-we-often-gain-weight-in-winter-1.3755210
Could lack of sunlight be why we often gain weight in winter?
If you tend to put on a little weight during the short days of winter, Alberta researchers may have stumbled on a reason why.
Researchers with the University of Alberta’s Alberta Diabetes Institute have discovered that the fat cells that lie just beneath our skin shrink when exposed to the blue light emitted by the sun, reducing their ability to store fat.
This could mean the opposite holds true too: our fat cells hold onto fat when light levels are low, so the reduced sunlight of short winter days in northern climates might actually be promoting fat storage during winter.
The researchers caution their findings are preliminary. A lot more research is needed to understand how sunlight affects fat cells, which is why the team says they are not ready to recommend exposing skin to the damaging effects of the sun in a bid to lose weight. But they say their findings are intriguing and deserve more study.
The team made their findings almost by accident. They had been investigating how to use light to engineer fat cells in petri dishes to produce insulin, in the hopes of finding a potential treatment for Type 1 diabetes, says Peter Light, a professor of pharmacology and the director of UAlberta’s Alberta Diabetes Institute.
What they noticed, though, was that the fat cells responded to light by shrinking and releasing lipid droplets.
“We hadn’t expected to actually see that,” he told CTV News Channel Thursday. “It was a bit, I guess, of a serendipitous finding because we weren’t really looking for that pathway but we came across these results.”
It’s already well known that the sunlight we receive through our eyes affects our circadian rhythms, signalling to us when it’s time to wake and sleep. It’s why we are advised not to stare at digital devices before bed: the blue light of the screens confuses our bodies.
His team’s findings suggest sunlight may have the same impact on subcutaneous fat cells, perhaps as a part of an evolutionary process that signals the cells about when to store and shed fat.
“What we found is that it is exactly the same molecular pathway in our fat cells as that found in our eyes, so I think the two might be linked quite closely,” he said.
Light stresses the findings are preliminary but they provoke some interesting questions that other researchers will want to explore.
“What our work does is it opens up a new avenue for us and other researchers to now look at the effects of different types of sunlight and duration and intensity and see how much it affects the behaviour of fat cells to store fat and also release fat as well,” he said.
The findings from Light’s team are presented in the latest issue of Scientific Reports.
https://singularityhub.com/2018/01/10/darker-still-black-mirrors-new-season-envisions-neurotech-gone-wrong/
Darker Still: Black Mirror’s New Season Envisions Neurotech Gone Wrong
The key difference between science fiction and fantasy is that science fiction is entirely possible because of its grounding in scientific facts, while fantasy is not. This is where Black Mirror is both an entertaining and terrifying work of science fiction. Created by Charlie Brooker, the anthological series tells cautionary tales of emerging technology that could one day be an integral part of our everyday lives.
While watching the often alarming episodes, one can’t help but recognize the eerie similarities to some of the tech tools that are already abundant in our lives today. In fact, many previous Black Mirror predictions are already becoming reality.
The latest season of Black Mirror was arguably darker than ever. This time, Brooker seemed to focus on the ethical implications of one particular area: neurotechnology.
Emerging Neurotechnology
Warning: The remainder of this article may contain spoilers from Season 4 of Black Mirror.
Most of the storylines from season four revolve around neurotechnology and brain-machine interfaces. They are based in a world where people have the power to upload their consciousness onto machines, have fully immersive experiences in virtual reality, merge their minds with other minds, record others’ memories, and even track what others are thinking, feeling, and doing.
How can all this ever be possible? Well, these capabilities are already being developed by pioneers and researchers globally. Early last year, Elon Musk unveiled Neuralink, a company whose goal is to merge the human mind with AI through a neural lace. We’ve already connected two brains via the internet, allowing one brain to communicate with another. Various research teams have been able to develop mechanisms for “reading minds” or reconstructing memories of individuals via devices. The list goes on.
With many of the technologies we see in Black Mirror it’s not a question of if, but when. Futurist Ray Kurzweil has predicted that by the 2030s we will be able to upload our consciousness onto the cloud via nanobots that will “provide full-immersion virtual reality from within the nervous system, provide direct brain-to-brain communication over the internet, and otherwise greatly expand human intelligence.” While other experts continue to challenge Kurzweil on the exact year we’ll accomplish this feat, with the current exponential growth of our technological capabilities, we’re on track to get there eventually.
Ethical Questions
As always, technology is only half the conversation. Equally fascinating are the many ethical and moral questions this topic raises.
For instance, with the increasing convergence of artificial intelligence and virtual reality, we have to ask ourselves if our morality from the physical world transfers equally into the virtual world. The first episode of season four, USS Callister, tells the story of a VR pioneer, Robert Daley, who creates breakthrough AI and VR to satisfy his personal frustrations and sexual urges. He uses the DNA of his coworkers (and their children) to re-create them digitally in his virtual world, to which he escapes to torture them, while their counterparts in the “real” world remain oblivious.
Audiences are left asking themselves: should what happens in the digital world be considered any less “real” than the physical world? How do we know if the individuals in the virtual world (who are ultimately based on algorithms) have true feelings or sentiments? Have they been developed to exhibit characteristics associated with suffering, or can they really feel suffering? Fascinatingly, these questions point to the hard problem of consciousness—the question of if, why, and how a given physical process generates the specific experience it does—which remains a major mystery in neuroscience.
Towards the end of USS Callister, the hostages of Daley’s virtual world attempt to escape through suicide, by committing an act that will delete the code that allows them to exist. This raises yet another mind-boggling ethical question: if we “delete” code that signifies a digital being, should that be considered murder (or suicide, in this case)? Why shouldn’t it? When we murder someone we are, in essence, taking away their capacity to live and to be, without their consent. By unplugging a self-aware AI, wouldn’t we be violating its basic right to live in the same way? Does AI, as code, even have rights?
Brain implants can also have a radical impact on our self-identity and how we define the word “I”. In the episode Black Museum, instead of witnessing just one horror, we get a series of scares in little segments. One of those segments tells the story of a father who attempts to reincarnate the mother of his child by uploading her consciousness into his mind and allowing her to live in his head (essentially giving him multiple personality disorder). In this way, she can experience special moments with their son.
With “no privacy for him, and no agency for her” the good intention slowly goes very wrong. This story raises a critical question: should we be allowed to upload consciousness into limited bodies? Even more, if we are to upload our minds into “the cloud,” at what point do we lose our individuality to become one collective being?
These questions can form the basis of hours of debate, but we’re just getting started. There are no right or wrong answers with many of these moral dilemmas, but we need to start having such discussions.
The Downside of Dystopian Sci-Fi
Like last season’s San Junipero, one episode of the series, Hang the DJ, had an uplifting ending. Yet the overwhelming majority of the stories in Black Mirror continue to focus on the darkest side of human nature, feeding into the pre-existing paranoia of the general public. There is certainly some value in this; it’s important to be aware of the dangers of technology. After all, what better way to explore these dangers before they occur than through speculative fiction?
A big takeaway from every tale told in the series is that the greatest threat to humanity does not come from technology, but from ourselves. Technology itself is not inherently good or evil; it all comes down to how we choose to use it as a society. So for those of you who are techno-paranoid, beware, for it’s not the technology you should fear, but the humans who get their hands on it.
While we can paint negative visions for the future, though, it is also important to paint positive ones. The kind of visions we set for ourselves have the power to inspire and motivate generations. Many people are inherently pessimistic when thinking about the future, and that pessimism in turn can shape their contributions to humanity.
While utopia may not exist, the future of our species could and should be one of solving global challenges, abundance, prosperity, liberation, and cosmic transcendence. Now that would be a thrilling episode to watch.
https://www.theverge.com/2018/1/11/16878412/iphone-slowdown-battery-replacement-wait-times-6-plus-supply-shortage
Apple won’t replace your old iPhone 6 Plus battery until March because of short supplies
When Apple promised to replace old iPhone batteries for $29 after its slowdown debacle, there was always going to be a queue for new parts. Well, for owners of the iPhone 6 Plus, the queue will likely be a bit longer than expected. According to internal documents seen by MacRumors, Apple won’t have batteries in stock for the 2014 device until late March to early April.
Wait times for replacement batteries have understandably fluctuated as Apple adapts to this unexpected increase in demand. When the $29 replacement offer was first made in December, the company said the batteries would be available in late January, before updating that timeframe later in the month to available “right away.”
Now, in an internal document distributed to Apple stores and authorized service providers, we have the latest estimates for wait times. As per MacRumors’ write-up, that means “approximately two weeks” for the iPhone 6 and iPhone 6s Plus, late March to early April for the iPhone 6 Plus, and, for the iPhone 6s, iPhone 7, iPhone 7 Plus, and iPhone SE, batteries that will be, in Apple’s own words, “available without extended delays.” Whatever that means.
These estimates are for the US, and there are likely to be regional variations across Europe, the Middle East, and Africa. We’ve contacted Apple to try to confirm times.
Apple first made the offer for $29 battery replacements last December, after benchmark tests showed significant slowdowns in old devices after they received the latest software updates. Although this finding seemed to confirm the “Apple slows old iPhones” meme, the company said this throttling was needed to compensate for the (unavoidable) degraded performance of old batteries.
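Apple has not published its power-management code, but the kind of trade-off being described, capping peak performance so that an aged battery's voltage sag cannot trigger an unexpected shutdown, can be sketched with a toy heuristic. Every number and name in the Python snippet below is invented for illustration and is not Apple's actual implementation.

```python
# Toy sketch of the trade-off described above (hypothetical; Apple has
# not published its power-management code). The idea: if an aged
# battery can no longer deliver peak current without its voltage
# sagging below the shutdown cutoff, cap the peak CPU frequency instead.

CUTOFF_V = 3.0        # voltage at which the phone would shut down (assumed)
NOMINAL_V = 3.8       # open-circuit voltage of a charged cell (assumed)
MAX_FREQ_GHZ = 2.34   # nominal peak CPU frequency (illustrative)
PEAK_CURRENT_A = 3.0  # current drawn during a demand spike (illustrative)

def allowed_peak_freq(internal_resistance_ohm: float) -> float:
    """Return a capped peak frequency for a given battery internal resistance."""
    # Voltage sag under a peak load grows as the cell ages and its
    # internal resistance rises: sag = I * R_internal.
    sag_v = PEAK_CURRENT_A * internal_resistance_ohm
    headroom_v = NOMINAL_V - CUTOFF_V
    if sag_v <= headroom_v:
        return MAX_FREQ_GHZ          # healthy battery: no throttling
    # Otherwise scale the peak frequency down roughly in proportion
    # to the available voltage headroom.
    return MAX_FREQ_GHZ * headroom_v / sag_v

print(allowed_peak_freq(0.15))   # newer battery -> full speed
print(allowed_peak_freq(0.45))   # degraded battery -> capped peak clock
```

The point of the sketch is simply that the cap is a function of battery health, which is why swapping in a fresh $29 battery restores full performance.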
In other words, Apple decided on behalf of its customers that they’d prefer an iPhone that performed worse for the same amount of time than an iPhone that performed just as well for a shorter amount of time. It’s a decision that does nothing to dispel the characterization of Apple as a company that does what it can to push customers into buying new phones.
But at least now you can get a $29 mea culpa from Apple — even if you have to wait for it.