3rd-gen Amazon Echo and Echo Dot now available in Canada
By Bradly Shankar (@bradshankar), October 16, 2019, 6:56 AM
Amazon has officially launched its newest Alexa-powered Echo and Echo Dot models in Canada. The 3rd-gen Echo Dot’s most notable addition is its four-digit LED panel, which can display the time, outdoor temperature and timers. The new Echo Dot costs $79.99 CAD and can be ordered from Amazon.ca in ‘Charcoal,’ ‘Heather Gray,’ ‘Plum’ or ‘Sandstone.’ Meanwhile, the 3rd-gen Echo features better speakers and a revised fabric-based design. The smart speaker can be ordered from Amazon.ca for $129.99 CAD in ‘Charcoal,’ ‘Heather Gray,’ ‘Sandstone’ or ‘Twilight Blue.’ For more on Amazon’s new fall products, check out MobileSyrup’s hands-on impressions.
Russian Scientists Just Made the First Lab-Grown Meatloaf
Liam writes about environmental and social sustainability, and the protection of animals. He has a BA Hons in English Literature and Film and also writes for Sustainable Business Magazine. Liam is interested in intersectional politics and DIY music.
Scientists in Russia have produced the country’s first sample of clean meat, also called cultured or cell-based meat. Food technology company Ochakov Food Ingredients Plant (OKPI) cultivated lab-grown meatloaf.
Clean meat is created through the in vitro cultivation of animal cells. Cellular agriculturalists obtain a small cell sample — in this case from an Aberdeen Angus cow — and place the sample in a controlled cultivator with a nutrient-rich solution called growth medium. This causes the stem cells to multiply as though they were still in the animal’s body.
The result is a product that delivers the same look, texture, and taste as conventional meat. However, the process can be completed without harming any animals.
“In vitro meat, also known as cultivated meat, is a very promising direction for the meat industry,” says Nikolai Shimanovsky, the project curator and a molecular pharmacologist.
“From our point of view, laboratory meat production has the highest ethical significance for modern society,” adds Shimanovsky. “Since we can avoid the slaughter of living creatures to obtain meat for food.”
Clean meat provides a sustainable, slaughter-free alternative to animal agriculture.
Consumers Ditch Traditional Meat
Europe’s meat consumption has decreased by 20 percent in the space of two to three months. This trend can be witnessed elsewhere in the world. More than 80 percent of Americans have indicated that they would like to swap out meat for cruelty-free alternatives.
A survey by lab-grown meat producer Memphis Meats found that 60 percent of consumers would try clean meat if it were more affordable. Clean meat is expensive to make; however, producers are keen to lower its cost as soon as viable.
OKPI’s new meatloaf costs around 5,800 rubles ($91) per kg. OKPI predicts that prices could drop to 800 rubles ($12) per kg by the time it hits supermarkets. The company highlights that its slaughter-free meat has double the shelf life of traditional meat.
In addition to OKPI, companies around the world including Mosa Meat, JUST, New Age Meats, Integriculture, and Biotech Foods are working to streamline the process of cultivating meat. According to data by the British research company Starcom, 41 percent of British people could be eating lab-grown meat in the next ten years.
Imagine an engine that needs no propellant. It sounds impossible, and it most likely is.
That’s not stopping one NASA engineer from testing theories around the EmDrive — a conceptual “helical” engine that could defy the laws of physics and create forward thrust without fuel.
What is the EmDrive?
Back in 2001, British scientist Roger Shawyer theorized that we could generate thrust by pumping microwaves into a conical chamber.
Shawyer suggested that the microwaves would, in theory, bounce repeatedly off the chamber walls, creating enough propulsion to power a spacecraft without fuel.
Some researchers do claim to have generated thrust in EmDrive experiments. The amount was so low, though, that detractors believe it may have been caused by outside influences, such as seismic vibrations or the Earth’s magnetic field.
New research
Over the last few months, several engineers and scientists have come out with contradictory positions on the EmDrive.
Some have claimed it’s impossible, while others continue to work at what might be a futile task, justifying their work by saying the payoff would be enormous.
The most recent of these is NASA engineer David Burns, as New Scientist reports.
“The engine itself would be able to get to 99 percent the speed of light if you had enough time and power,” Burns told New Scientist.
However, it would need to be huge — 200 meters long and 12 meters in diameter — and powerful, needing 165 megawatts of power to generate just 1 newton of thrust. This is about the same force a person uses to type on a keyboard.
Therefore, the engine would only be able to reach high velocities in the frictionless conditions of space.
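To put those figures in perspective, here is a back-of-the-envelope sketch. Only the 165-megawatt and 1-newton numbers come from the reporting; the spacecraft mass below is a hypothetical assumption for illustration.

```python
# Scale check on the reported figures: 165 MW of power for 1 N of thrust.
# The spacecraft mass is an assumption for illustration only; the article
# gives no mass figure.

power_w = 165e6        # 165 megawatts (from the article)
thrust_n = 1.0         # 1 newton (from the article)
mass_kg = 100_000      # assumed 100-tonne craft (hypothetical)

acceleration = thrust_n / mass_kg                   # m/s^2
seconds_per_year = 365.25 * 24 * 3600
delta_v_per_year = acceleration * seconds_per_year  # m/s gained per year

print(f"acceleration: {acceleration:.1e} m/s^2")
print(f"delta-v after one year of constant thrust: {delta_v_per_year:.0f} m/s")
```

At these numbers, a year of continuous thrust yields only a few hundred meters per second of speed change, which is why any such engine would need enormous time horizons, not to mention a dedicated power plant.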
Einsteinian physics
As Futurism writes, the engineer proposes accelerating a loop of ions to almost light speed before changing their speeds — and therefore, their mass, as per Einstein’s law of relativity.
This, in theory, would cause exponential forward thrust without any need for fuel.
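The mass change the concept leans on is the Lorentz factor from special relativity, gamma = 1/sqrt(1 - (v/c)^2), which grows steeply as speed approaches c. A quick sketch of that standard formula (general physics, not specific to Burns’s design):

```python
import math

def lorentz_gamma(v_fraction_of_c: float) -> float:
    """Lorentz factor for a speed given as a fraction of the speed of light."""
    return 1.0 / math.sqrt(1.0 - v_fraction_of_c ** 2)

# Relativistic mass (and momentum) scale with gamma, so accelerating ions
# to near light speed and then changing their speed changes their
# effective mass -- the asymmetry the helical-engine idea tries to exploit.
for frac in (0.1, 0.5, 0.9, 0.99):
    print(f"v = {frac:.2f}c -> gamma = {lorentz_gamma(frac):.3f}")
```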
This information is going to cause some controversy, because everything we knew, or thought we knew, about Tesla’s Model Y timeline is about to change. Originally, the Model Y was supposed to start shipping in “late 2020.” Currently, Tesla’s website says new orders will ship in Q1 2021. However, according to an anonymous source with a proven and reliable track record, Tesla is about to accelerate its plans and start production sometime around Q1 2020 at its Fremont factory. (We’ve also received the same information independently through a second-hand source.)
The Model Y is expected to launch with the Long Range all-wheel drive (AWD) trim, according to this source, and Tesla will release the Standard Range Plus (SR+) version once production has been ramped up and the Model Y SR+ has a large enough gross margin. That is how Tesla has done it previously.
Tesla has learned a lot since it started manufacturing the Model 3, and it’s possible that the ramp up will be much quicker than with the Model 3, but we don’t have a firm estimate of when the SR+ would start rolling off the line.
One other bit of important speculation here is that GF1 is still supply constrained. In that case, Tesla would want to focus customers as much as possible on the Model Y Performance and Long Range variants to ensure overall profitability. (Just as a quick side note, for any critic reading this article, this doesn’t mean that Tesla is unprofitable — it simply means the company is being logical and responsible in order to maximize profitability while expanding its lineup.)
Another fun little tidbit of technical information we have been told is that the Model Y will indeed use the new revolutionary flex-cable circuitry that reduces the length of wires needed throughout the car and also gives every component a redundant connection to the battery and the computer. This makes it possible for robots to install more of the “guts and veins” of the car and cut down on the manual labor involved in installing cables. As Elon Musk has said multiple times, robots suck at placing normal cables into the vehicle.
Many Model Y components will be similar to those in the Model 3, but not identical, including the battery packs.
This obviously raises a ton of questions. Here are some of them, along with how they could change the grand scheme of things:
Is Tesla going to repeat the production hell it went through in Q4 2018/Q1 2019?
It’s possible, but not very likely. The Model 3 was the first car Tesla ever decided to build at a scale of hundreds of thousands per year. It was also an almost entirely new vehicle, with few parts shared with the Model S or X. The company made a lot of mistakes and learned a lot from them. The Model Y and Model 3 share about 75% of their components, so the learning curve should be much easier.
How did Tesla manage to pull this off?
In the last year or two, many in the media have bashed on Tesla like it was their guilty pleasure, writing up all kinds of misinformation, FUD (fear, uncertainty, and doubt), and predictions of doom. (Bankwuptcy is willy coming!) The media claimed that the Model 3 was behind schedule, that Tesla couldn’t mass manufacture it, that the Model 3 was unprofitable, that the Standard Range trim would never come, that Tesla in general is unprofitable, and more. Tesla has refuted each claim simply with its actions time and time again.
One of Elon’s base instincts is to set very ambitious timelines, a strategy that has always paid off. Even when an overly ambitious goal was not met on time, it was still met much quicker than it would have been with a conservative timeline. There is a pretty famous example from the beginning of The Boring Company. Elon decided he wanted to do it, and that he would do it right outside in the parking lot, and asked how soon they could clear out all the cars and everything else to start digging. The first timeline given to him was a week. Elon gave them 24 hours, and they broke ground within 48. We heard similar stories when interviewing Tesla President Jerome Guillen regarding Model 3 production.
With the Model Y, I am willing to bet that Elon’s internal timeline for starting Model Y manufacturing was around the end of 2019, and that after a lot of convincing he was persuaded to say Q1 2021 on stage — for the first time separating internal targets from public ones. Now, instead of the news claiming that Elon is late, they will claim he is a miracle worker, and the stock will go up, since he is basically “playing the game” that gets Wall Street and the media to portray Tesla’s good side. If anyone is familiar with the engineer Scotty from Star Trek and how he was a “miracle worker,” well, this is pretty much the same thing. (This is simply my expectation, of course. It is not investment advice.)
For the record, Tesla representatives have not yet responded to an inquiry about this leaked news. We will update this article if they do.
Where will the Model Y be manufactured?
According to the information we currently have, Tesla will be manufacturing the Model Y at its Fremont factory. Something I have been looking to report on but never got around to is speculation about where in the factory Tesla could place GA5. While the next image does not show the layout of the manufacturing lines (of which we have been able to draw a speculative map but are not publishing today), it does show where we think GA5 will be.
Part of this speculation is based on something Elon Musk said during the 2019 Q1 investor call:
“Credit goes to the Tesla team that actually looked at how could we do this in Fremont, if we had to, and we feel like we can actually append building space to the west side of the building and use a lot of internal space that is currently used for warehousing in Fremont factory, and so we believe it actually can be done with minimal disruption to add Model Y to Fremont.”
Now, the area that Tesla has currently labeled as a service center is technically speaking not something that necessarily has to be attached to the main factory complex. Tesla’s seats, for example, are manufactured in a factory a few miles away. The same can be done with the service center. With the unibody stamping technologies we have seen Tesla patent and the fact that the Model Y and Model 3 share 75% of their components, it’s quite possible that parts of those lines can be combined. There are rumors that Tesla plans to combine the Model S and X lines. We have no information to offer on this, but one thing we do know is that Model S and Model X make use of a lot of manual work and have a lot fewer robots than GA3, and those lines could theoretically use a tech upgrade.
Will they start manufacturing the Model Y in Shanghai at the same time?
Our primary source told us that there are currently no plans to start manufacturing the Model Y at GF3 in Q1 2020. The Model 3 has been manufactured for over a year. There were a lot of small problems with the Model 3 and the way the vehicle is manufactured that have since been solved, and the process in general has been smoothed out. With the Model Y, while it does share many of the same components, Tesla will likely want to make many small changes to how it’s manufactured before copy-pasting it in Shanghai.
We are likely to hear more about this during the Q3 investor call slated for October 23rd, and just like the last few times, we will be live on YouTube to present it with a lot of the extra context that we also provided last time. More on that soon.
Our secondary source heard that more information regarding the Y will come at the Tesla pickup event next month. We’ll see. In the meantime, if you learn more, drop us a note.
As noted above, Tesla has not yet responded to an inquiry about the leaked news. We will update this article if the company does so.
Some minor adjustments have been made to this article after publishing to clarify language/details.
Chanan Bos grew up in a multicultural, multi-lingual environment that often gives him a unique perspective on a variety of topics. He is always in thought about big picture topics like AI, quantum physics, philosophy, Universal Basic Income, climate change, sci-fi concepts like the singularity, misinformation, and the list goes on. Currently, he is studying creative media & technology but already has diplomas in environmental sciences as well as business & management. His goal is to discourage linear thinking, bias, and confirmation bias whilst encouraging out-of-the-box thinking and helping people understand exponential progress. Chanan is very worried about his future and the future of humanity. That is why he has a tremendous admiration for Elon Musk and his companies, foremost because of their missions, philosophy, and intent to help humanity and its future. He sees Tesla as one of the few companies that can help us save ourselves from climate change.
Simple care could improve eyesight for more than a billion of the 2.2 billion people who live with a visual impairment or blindness globally, a World Health Organisation (WHO) report has suggested.
According to the WHO, there is wide inequality in sight and eye conditions between low- and middle-income countries and countries with high per capita income.
The report said the risk of eye impairment in low- and middle-income countries was up to eight times higher than in wealthy countries, with people living in rural areas, ethnic minorities, women and older people suffering disproportionately.
At least one billion people have conditions, including short- and far-sightedness, cataracts, and glaucoma, that could have been prevented or treated, but they suffered because of a lack of access to healthcare services.
WHO Director-General Dr. Tedros Adhanom Ghebreyesus said it was “unacceptable” that 65 million people were blind or had impaired sight when their vision could have been corrected overnight with a cataract operation.
He said the figure also covered more than 800 million people who struggled in everyday activities for lack of access to a pair of glasses, with people in low- and middle-income regions four times less likely to receive help than those in high-income areas.
The report estimated a cost of $14.3 billion (£11.7 billion) for the treatment of the over one billion people already living with visual impairment or blindness from cataracts and short- and far-sightedness.
Kirsty Smith, chief executive of disability inclusion NGO Christian Blind Mission UK, said there were still millions of people worldwide needlessly losing their sight to conditions like cataracts despite decades of work to improve access to eye health.
“In recent years, we’ve seen new challenges such as increasing levels of diabetes-related sight loss and aging-related visual impairment across the world,” Smith added.
The WHO report said an investment of $5.8 billion could prevent impaired vision or blindness caused by glaucoma, diabetes, and trachoma in about 11.9 million people.
Calling for action, Juliet Milgate, director of policy and advocacy at Sightsavers, said serious challenges remained and there were significant unmet needs in the field of eye care. “It will hopefully lead to greater awareness, political will and better eye health (availability) for all, particularly those who are most marginalized.”
The WHO argued countries must include eye care in national health plans and provide essential packages of care in their journey towards universal health coverage.
“Poor vision is a ticking time-bomb that impacts the education, work, and quality of life of around a third of the world’s population, particularly women,” The Guardian quoted James Chen, founder of initiatives Clearly and Vision for a Nation, as saying.
Pointing to a recent study from China investigating the clear link between time spent outdoors and the delayed onset of later-stage short-sightedness, the WHO’s Stuart Keel said the lens of the eye rarely gets to relax indoors. “When you’re indoors, the lens inside your eyes is in a complete flex state, or it’s flexed but when you’re outside, it’s nice and relaxed,” Keel explained.
Artificial intelligence research organization OpenAI has achieved a new milestone in its quest to build general purpose, self-learning robots. The group’s robotics division says Dactyl, its humanoid robotic hand first developed last year, has learned to solve a Rubik’s cube one-handed. OpenAI sees the feat as a leap forward both for the dexterity of robotic appendages and its own AI software, which allows Dactyl to learn new tasks using virtual simulations before it is presented with a real, physical challenge to overcome.
In a demonstration video showcasing Dactyl’s new talent, we can see the robotic hand fumble its way toward a complete cube solve with clumsy yet accurate maneuvers. It takes many minutes, but Dactyl is eventually able to solve the puzzle. It’s somewhat unsettling to see in action, if only because the movements look noticeably less fluid than human ones and especially disjointed when compared to the blinding speed and raw dexterity on display when a human speedcuber solves the cube in a matter of seconds.
But for OpenAI, Dactyl’s achievement brings it one step closer to a much sought-after goal for the broader AI and robotics industries: a robot that can learn to perform a variety of real-world tasks, without having to train for months to years of real-world time and without needing to be specifically programmed.
“Plenty of robots can solve Rubik’s cubes very fast. The important difference between what they did there and what we’re doing here is that those robots are very purpose-built,” says Peter Welinder, a research scientist and robotics lead at OpenAI. “Obviously there’s no way you can use the same robot or same approach to perform another task. The robotics team at OpenAI have very different ambitions. We’re trying to build a general purpose robot. Similar to how humans and how our human hands can do a lot of things, not just a specific task, we’re trying to build something that is much more general in its scope.”
Welinder is referencing a series of robots over the last few years that have pushed Rubik’s cube solving far beyond the limitations of human hands and minds. In 2016, semiconductor maker Infineon developed a robot specifically to solve a Rubik’s cube at superhuman speeds, and the bot managed to do so in under one second. That crushed the sub-five-second human world record at the time. Two years later, a machine developed by MIT solved a cube in less than 0.4 seconds. In late 2018, a Japanese YouTube channel called Human Controller even developed its own self-solving Rubik’s cube using a 3D-printed core attached to programmable servo motors.
In other words, a robot built for one specific task and programmed to perform that task as efficiently as possible can typically best a human, and Rubik’s cube solving is something software mastered long ago. So developing a robot to solve the cube, even a humanoid one, is not all that remarkable on its own, and less so at the sluggish speed at which Dactyl operates.
But OpenAI’s Dactyl robot and the software that powers it are much different in design and purpose than a dedicated cube-solving machine. As Welinder says, OpenAI’s ongoing robotics work is not aimed at achieving superior results in narrow tasks, as that only requires you develop a better robot and program it accordingly. That can be done without modern artificial intelligence.
Instead, Dactyl is developed from the ground up as a self-learning robotic hand that approaches new tasks much like a human would. It’s trained using software that tries, in a rudimentary way at the moment, to replicate the millions of years of evolution that help us learn to use our hands instinctively as children. That could one day, OpenAI hopes, help humanity develop the kinds of humanoid robots we know only from science fiction, robots that can safely operate in society without endangering us and perform a wide variety of tasks in environments as chaotic as city streets and factory floors.
To learn how to solve a Rubik’s cube one-handed, OpenAI did not explicitly program Dactyl to solve the toy; free software on the internet can do that for you. It also chose not to program individual motions for the hand to perform, as it wanted it to discern those movements on its own. Instead, the robotics team gave the hand’s underlying software the end goal of solving a scrambled cube and used modern AI — specifically a brand of incentive-based deep learning called reinforcement learning — to help it along the path toward figuring it out on its own. The same approach to training AI agents is how OpenAI developed its world-class Dota 2 bot.
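At its core, reinforcement learning is a loop of trial, reward, and update. As a rough illustration of that loop only (not OpenAI's actual system, which pairs reinforcement learning with large neural networks and massive simulation), here is a minimal tabular Q-learning agent that learns to walk down a short corridor toward a reward:

```python
import random

# Toy reinforcement-learning sketch: tabular Q-learning on a 5-state
# corridor. The agent starts at state 0 and earns a reward only upon
# reaching state 4. Illustrative only; OpenAI's Dactyl uses deep
# reinforcement learning at vastly larger scale.

N_STATES = 5          # states 0..4; state 4 is terminal
ACTIONS = [-1, +1]    # step left or step right
ALPHA, GAMMA, EPS = 0.5, 0.9, 0.1   # learning rate, discount, exploration

random.seed(0)
q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}

for episode in range(500):
    s = 0
    while s != N_STATES - 1:
        # epsilon-greedy action selection: mostly exploit, sometimes explore
        if random.random() < EPS:
            a = random.choice(ACTIONS)
        else:
            a = max(ACTIONS, key=lambda act: q[(s, act)])
        s_next = min(max(s + a, 0), N_STATES - 1)
        r = 1.0 if s_next == N_STATES - 1 else 0.0
        # Q-learning update: move the estimate toward reward + discounted future value
        best_next = max(q[(s_next, act)] for act in ACTIONS)
        q[(s, a)] += ALPHA * (r + GAMMA * best_next - q[(s, a)])
        s = s_next

# After training, the greedy policy steps right (+1) in every state.
policy = [max(ACTIONS, key=lambda act: q[(s, act)]) for s in range(N_STATES - 1)]
print(policy)
```

The agent is never told which moves to make; it discovers them from the reward signal alone, which is the same basic principle behind letting Dactyl discover cube-turning motions rather than programming them.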
But until recently, it’s been much easier to train an AI agent to do something virtually — playing a computer game, for example — than to train it to perform a real-world task. That’s because training software to do something in a virtual world can be sped up, so that the AI can spend the equivalent of tens of thousands of years training in just months of real-world time, thanks to thousands of high-end CPUs and ultra-powerful GPUs working in parallel.
Doing that same level of training performing a physical task with a physical robot isn’t feasible. That’s why OpenAI is trying to pioneer new methods of robotic training using simulated environments in place of the real world, something the robotics industry has only barely experimented with. That way, the software can practice extensively at an accelerated pace across many different computers simultaneously, with the hope that it retains that knowledge when it begins controlling a real robot.
Because of the training limitation and obvious safety concerns, robots used commercially today do not utilize AI and instead are programmed with very specific instructions. “The way it’s been approached in the past is that you use very specialized algorithms to solve tasks, where you have an accurate model of both the robot and the environment in which you’re operating,” Welinder says. “For a factory robot, you have very accurate models of those and you know exactly the environment you’re working on. You know exactly how it will be picking up the particular part.”
This is also why current robots are far less versatile than humans. It requires large amounts of time, effort, and money to reprogram a robot that assembles, say, one specific part of an automobile or a computer component to do something else. Present a robot that hasn’t been properly trained with even a simple task that involves any level of human dexterity or visual processing and it would fail miserably. With modern AI techniques, however, robots could be modeled like humans, so that they can use the same intuitive understanding of the world to do everything from opening doors to frying an egg. At least, that’s the dream.
We’re still decades away from that level of sophistication, and the leaps the AI community has made on the software side — like self-driving cars, machine translation, and image recognition — have not exactly translated to next-generation robots. Right now, OpenAI is just trying to mimic the complexity of one human body part and to get that robotic analog to operate more naturally.
That’s why Dactyl is a 24-joint robotic hand modeled after a human hand, instead of the claw or pincer style robotic grippers you see in factories. And for the software that powers Dactyl to learn how to utilize all of those joints in a way a human would, OpenAI put it through thousands of years of training in simulation before trying the physical cube solve.
Image: OpenAI
“If you’re training things on the real world robot, obviously whatever you’re learning is working on what you actually want to deploy your algorithm on. In that way, it’s much simpler. But algorithms today need a lot of data. To train a real world robot, to do anything complex, you need many years of experience,” Welinder says. “Even for a human, it takes a couple of years, and humans have millions of years of evolution to have the learning capabilities to operate a hand.”
In a simulation, however, Welinder says training can be accelerated, just like with game-playing and other tasks popular as AI benchmarks. “This takes on the order of thousands of years to train the algorithm. But this only takes a few days because we can parallelize the training. You also don’t have to worry about the robots breaking or hurting someone as you’re training these algorithms,” he adds. Yet researchers have in the past run into considerable trouble trying to get virtual training to work on physical robots. OpenAI says it is among the first organizations to really see progress in this regard.
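The arithmetic behind that compression is straightforward: simulated experience scales with both how much faster than real time each simulator runs and how many simulators run in parallel. Every concrete number below is an illustrative assumption; the article only says "thousands of years" of experience were compressed into "a few days."

```python
# Back-of-the-envelope for simulated training throughput. All numbers
# are illustrative assumptions, not OpenAI's actual figures.

sim_speedup = 50          # assumed: each simulator runs 50x real time
n_workers = 10_000        # assumed: parallel simulator instances
wall_days = 3             # assumed: wall-clock training time

sim_days = sim_speedup * n_workers * wall_days
sim_years = sim_days / 365.25
print(f"~{sim_years:,.0f} years of simulated experience in {wall_days} days")
```

Even with modest per-simulator speedups, the multiplication across thousands of parallel workers is what turns days of wall-clock time into millennia of practice.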
When it was given a real cube, Dactyl put its training to use and solved it on its own, and it did so under a variety of conditions it had never been explicitly trained for. That includes solving the cube one-handed with a glove on, with two of its fingers taped together, and while OpenAI members continuously interfered with it by poking it with other objects and showering it with bubbles and pieces of confetti-like paper.
“We found that in all of those perturbations, the robot was still able to successfully turn the Rubik’s cube. But it did not go through that in training,” says Matthias Plappert, Welinder’s fellow robotics team lead at OpenAI. “The robustness that we found when we tried this on the physical robot was surprising to us.”
That’s why OpenAI sees Dactyl’s newly acquired skill as equally important for both the advancement of robotic hardware and AI training. Even the most advanced robots in the world right now, like the humanoid and dog-like bots developed by industry leader Boston Dynamics, cannot operate autonomously, and they require extensive task-specific programming and frequent human intervention to carry out even basic actions.
OpenAI says Dactyl is a small but vital step toward the kind of robots that might one day perform manual labor or household tasks and even work alongside humans, instead of in closed-off environments, without any explicit programming governing their actions.
In that vision for the future, the ability for robots to learn new tasks and adapt to changing environments will be as much about the flexibility of the AI as it is about the robustness of the physical machine. “These methods are really starting to demonstrate that these are the solutions to handling all the inherent complication and the messiness of the physical world we live in,” Plappert says.
Some homeowners with Tesla solar panels said they had been left frustrated as they wait for the company to fix damaged panels on their roofs.
On August 1, the roof of Briana Greer’s home in Colorado caught fire as she waited for Tesla to send a crew to look at her panels. The company has yet to investigate the situation, she said.
Greer said that Tesla didn’t properly maintain the panels. Homeowners in states from Maryland to Arizona with Tesla solar panels have also found dealing with Tesla to be frustrating, and they’ve been forced to pay regular fees as their systems have been shut off.
Business Insider sent Tesla an extensive list of claims made by customers and current and former solar employees for this story. Tesla did not reply to repeated requests for comment via phone calls, emails, or text messages.
Briana Greer was out of town when the fire started in her Tesla solar roof panels. Luckily, her neighbors in Louisville, Colorado — a town outside Boulder — were vigilant, and they were able to put out the fire before the fire department arrived.
That was on August 1. The day before, Greer said, Tesla had contacted her to let her know its system had been detecting voltage fluctuations for a couple of days. The company said it would send a crew to check it out on August 8. That was too late.
Greer, an environmental consultant, said she had yet to receive a report explaining why any of this happened.
“They purposely keep a lot of people in the dark. For an energy company, that’s ironic,” Greer told Business Insider in an interview last month.
Tesla did not respond to multiple requests for comment on this article, but a local Fox station in Colorado reported last month that Tesla told it that “its solar panels are safe and very rarely catch fire.” The Fox report also said that Tesla said it was working with Greer’s insurance company.
Tesla has not agreed to let her out of her contract, so Greer set up a GoFundMe to raise funds for an attorney to deal with this matter.
Greer said she believes Tesla was in breach of its agreement with her and Xcel, a third-party electric company that installed her meter and connected Tesla to the grid. Her contract with Tesla, viewed by Business Insider, says Tesla maintains the solar panels according to manufacturer specifications.
Xcel did not respond to a request for comment.
Greer’s panels were made by a solar-panel manufacturer called Trina, whose handbook says its panels should be physically inspected twice a year. Tesla was not doing that, Greer said.
Trina did not respond to a request for comment.
Greer’s contract also said that Tesla should maintain the panels according to state law. In 2017, the year Greer had her panels installed, Colorado adopted the National Electrical Code. But Greer, who provided Business Insider with diagrams of her system, said Tesla did not update her solar panels to code. For example, the NEC 2017 rules require all solar panels to be capable of a rapid shutdown at the module level, and according to Greer, the system that caught fire did not have that.
Tesla did not respond to multiple requests for comment.
In an email dated September 23 and viewed by Business Insider, a Tesla representative told Greer that the company did not have maintenance records “aside from remote monitoring and react
The interconnections and communication between different regions of the human brain influence behavior in many ways. This is also true for individual differences in higher cognitive abilities. The brains of more intelligent individuals are characterized by temporally more stable interactions in neural networks. This is the result of a recent study conducted by Dr. Kirsten Hilger and Professor Christian Fiebach from the Department of Psychology and Brain Imaging Center of Goethe University Frankfurt in collaboration with Dr. Makoto Fukushima and Professor Olaf Sporns from Indiana University Bloomington, U.S. The study was published online in the scientific journal Human Brain Mapping on 6th October.
Intelligence and its neurobiological basis
Various theories have been proposed to explain the differences in individuals’ cognitive abilities, including neurobiological models. For instance, it has been proposed that more intelligent individuals make stronger use of certain brain areas, that their brains generally operate more efficiently, or that certain brain systems are better wired in smarter people. Only recently have methodological advances made it possible to investigate the temporal dynamics of human brain networks using functional magnetic resonance imaging (fMRI). An international team of researchers from Goethe University and Indiana University Bloomington analyzed fMRI scans of 281 participants to investigate how dynamic network characteristics of the human brain relate to general intelligence.
Stability of brain networks as general advantage
The human brain has a modular organization—it can be subdivided into separate networks that serve different functions such as vision, hearing, or the control of voluntary behavior. In their current study, Kirsten Hilger and colleagues investigated whether this modular organization of the human brain changes over time, and whether or not these changes relate to individual differences in the scores that study participants achieved in an intelligence test.
The results of the study show that the modular brain network organization of more intelligent persons exhibits fewer fluctuations during the fMRI measurement session. This increased stability of brain network organization was primarily found in brain systems that are important for the control of attention.
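The idea of "fewer fluctuations" in network organization can be made concrete with a simplified sketch: split an fMRI time series into sliding windows, build a functional-connectivity matrix per window, and measure how similar consecutive windows are. This is an illustrative proxy, not the authors' actual pipeline (the published study analyzes the stability of modular network partitions); all function names and parameters here are hypothetical.

```python
import numpy as np

def windowed_connectivity(ts, win=60, step=20):
    """Sliding-window functional connectivity.
    ts: array of shape (timepoints, regions), e.g. BOLD-like signals.
    Returns one region-by-region correlation matrix per window."""
    mats = []
    for start in range(0, ts.shape[0] - win + 1, step):
        w = ts[start:start + win]
        mats.append(np.corrcoef(w.T))
    return np.array(mats)

def network_stability(mats):
    """Mean similarity of consecutive windowed networks.
    Higher values indicate temporally more stable organization."""
    n = mats.shape[1]
    iu = np.triu_indices(n, k=1)
    vecs = np.array([m[iu] for m in mats])  # vectorized upper triangles
    sims = [np.corrcoef(vecs[i], vecs[i + 1])[0, 1]
            for i in range(len(vecs) - 1)]
    return float(np.mean(sims))

# Toy data: 300 timepoints from 10 brain regions
rng = np.random.default_rng(0)
ts = rng.standard_normal((300, 10))
stability = network_stability(windowed_connectivity(ts))
```

Under this simplification, a participant whose connectivity pattern drifts between windows would score lower than one whose pattern persists, mirroring the study's stability measure in spirit.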
Attention plays a key role
“The study of the temporal dynamics of human brain networks using fMRI is a relatively new field of research” says Hilger. “The temporally more stable network organization in more intelligent individuals could be a protective mechanism of the brain against falling into maladaptive network states in which major networks disconnect and communication may be hampered.”
She also stresses that it remains an open question how these network properties influence cognitive ability: “At present, we do not know whether the temporally more stable brain connections are a source or a consequence of higher intelligence. However, our results suggest that processes of controlled attention—that is, the ability to stay focused and to concentrate on a task—may play an important role for general intelligence.”
More information: Kirsten Hilger et al. Temporal stability of functional brain modules associated with human intelligence, Human Brain Mapping (2019). DOI: 10.1002/hbm.24807
The TX-GAIA combines nearly 100 Intel processors and 900 Nvidia GPU accelerators to create a system optimized for artificial intelligence applications.
By Sara Friedman
10/03/19
TX-GAIA is housed inside a new EcoPOD, manufactured by Hewlett Packard Enterprise, at the site of the Lincoln Laboratory Supercomputing Center in Holyoke, Massachusetts. Photo: Glen Cooper.
The system leverages nearly 100 Intel processors and 900 Nvidia GPU accelerators to combine high-performance computing with hardware optimized for AI. TX-GAIA is built on the HPE Apollo system and is currently ranked by TOP500 as the most powerful artificial intelligence supercomputer at any university in the world.
“We are thrilled by the opportunity to enable researchers across Lincoln and MIT to achieve incredible scientific and engineering breakthroughs,” said Jeremy Kepner, a Lincoln Laboratory fellow who heads the LLSC. “TX-GAIA will play a large role in supporting AI, physical simulation and data analysis across all Laboratory missions.”
TX-GAIA is specifically designed to perform deep neural network (DNN) operations quickly. DNNs are currently used for speech recognition and computer vision applications, such as powering Amazon’s Alexa or helping self-driving cars recognize objects in their surroundings.
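The "DNN operations" that such hardware accelerates are, at bottom, large dense matrix multiplies. A minimal sketch of a fully connected network's forward pass makes this visible; the layer sizes and names below are illustrative, not tied to any TX-GAIA workload.

```python
import numpy as np

def relu(x):
    """Elementwise rectified linear activation."""
    return np.maximum(x, 0.0)

def dnn_forward(x, weights, biases):
    """Forward pass of a fully connected network.
    Each hidden layer is a dense matrix multiply plus bias,
    followed by a nonlinearity; these large matrix products
    are exactly what GPU accelerators speed up."""
    h = x
    for W, b in zip(weights[:-1], biases[:-1]):
        h = relu(h @ W + b)
    return h @ weights[-1] + biases[-1]  # linear output layer

# Toy network: 128 inputs -> 64 hidden units -> 10 outputs
rng = np.random.default_rng(0)
Ws = [rng.standard_normal((128, 64)) * 0.1,
      rng.standard_normal((64, 10)) * 0.1]
bs = [np.zeros(64), np.zeros(10)]
out = dnn_forward(rng.standard_normal((32, 128)), Ws, bs)  # batch of 32
```

In production systems the same multiplies run on GPU tensor cores over batches thousands of examples wide, which is why a GPU-heavy design suits both training and inference.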
The latest system will also be used to crunch terabytes of data to train machine learning algorithms and support LLSC’s research and development initiatives. Scientists at LLSC are working to find new ways to improve weather forecasting, accelerate medical data analysis, build autonomous systems, design synthetic DNA and develop new materials and devices.
TX-GAIA is housed in a new modular data center at the LLSC’s green, hydroelectrically powered site in Holyoke, MA.
ABOUT THE AUTHOR
Sara Friedman is a reporter/producer for Campus Technology, THE Journal and STEAM Universe covering education policy and a wide range of other public-sector IT topics.
Friedman is a graduate of Ithaca College, where she studied journalism, politics and international communications.