Though the safe had a million possible combinations of three two-digit numbers, the last number left slightly larger indents on the dial — reducing the possibilities to just 10,000. In addition, “the team also discovered that the safe’s design allows for a margin of error to compensate for humans getting their combination slightly wrong” — which meant the robot only had to check every third number. “Using this method, they could cut down the number of possible combinations to around 1,000.”
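The shrinking numbers follow directly from the arithmetic; a quick sketch (the every-third-position tolerance is the article’s figure):

```python
# Three two-digit numbers (00-99) give 100^3 combinations:
total = 100 ** 3              # 1,000,000

# The larger indents reveal the last number, leaving two unknowns:
after_indents = 100 ** 2      # 10,000

# With the dial's margin of error, only every third position needs to be
# tried on each remaining dial (about 33 candidates per number):
after_tolerance = (100 // 3) ** 2   # 1,089 — "around 1,000"

print(total, after_indents, after_tolerance)
```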
“Some SentrySafe models come with an additional lock and key, but the team was able to unlock it by using a Bic pen.”
Green Tea Compound May Protect Body And Brain
Though green tea isn’t wildly popular in the U.S., it’s actually one of the most consumed beverages across the globe. And the many people for whom it’s a staple may be the healthier for it, both physically and mentally. Green tea has some well-known antioxidants, namely EGCG, a catechin that’s also found in berries and apples. A new study looks at how EGCG may counter some of the deleterious effects of a fatty, high-sugar diet (i.e., a typical Western diet). The caveat is that the study was done in mice, so it’s not totally clear how the results would apply to us—but given what we know about how antioxidants function, and the physiological overlaps between mice and humans, they may well apply to us too.
The researchers split young adult mice into three groups. One group, serving as controls, ate a standard lab chow diet and drank plain water. Another group ate a high-fat diet (with 45% of their calories coming from fat), and drank water spiked with fructose. A third group ate the same high-fat, high-sugar diet as above, but their water was additionally laced with EGCG.
At the end of 16 weeks on their respective regimens, the researchers measured the mice’s body weight, insulin function, genetic expression and cognitive function.
It turned out that, as expected, the mice eating the high-fat/high-sugar diet were heavier than those eating a regular diet. But they were also significantly heavier than those who’d eaten a high-fat/high-sugar diet that was supplemented with EGCG—in other words, the addition of EGCG seemed to counter the effects of the bad diet. The EGCG-consuming mice also performed better in several ways on the Morris water maze, a classic test of cognitive and memory function in rodents.
What’s interesting about the study was that the team also illuminated some of the mechanisms behind the connection. For instance, they found that insulin function in the central nervous system was better in the EGCG-exposed mice. The compound also seemed to have a neuroprotective effect on the brain, specifically protecting new neurons in the hippocampus, the part of the brain that governs learning and memory. Finally, EGCG affected the expression of genes involved in appetite regulation, which are known to be dysregulated when an animal or person consumes a high-fat, high-sugar diet.
And the results are likely relevant to us as well, especially given the existing research on green tea consumption and human health. For example, some studies have suggested that regular green tea drinkers have a lower risk of cancer, including breast cancer. Other work has shown that serious green tea drinkers (five cups/day) had a 28% reduced risk of heart disease (interestingly, the same connection didn’t exist for black tea).
It also fits into what we know about the effects of various diets on health: For example, there’s good evidence that, in addition to weight gain and metabolic syndrome, a typical Western diet is also linked to cognitive decline and Alzheimer’s disease. On the flip side, plant-based diets like the Mediterranean diet are linked to better body weight, better cognition and reduced dementia risk—it’s likely that one of the reasons for this is the higher antioxidant content of the plant-based diet.
One caveat is that many of the existing green tea studies have been done in Asian populations, who may be eating a very different diet from people in the U.S. So there could be other variables at play, and more complex interactions at work. Therefore, the study definitely isn’t license to eat junk food in the hopes that chasing it with green tea will offset the damage. It’s more an exploration of how powerful the effects of dietary antioxidants can be.
That said, if you currently drink green tea, keep it up. And if you don’t, feel free to give it a try. Just don’t forget to do the rest of the things we know to be part of a healthy diet and lifestyle as well.
“Green tea is the second most consumed beverage in the world after water, and is grown in at least 30 countries,” said study author Xuebo Liu in a news release. “The ancient habit of drinking green tea may be a more acceptable alternative to medicine when it comes to combating obesity, insulin resistance and memory impairment.”
Facebook AI Creates Its Own Language In Creepy Preview Of Our Potential Future
Facebook shut down an artificial intelligence engine after developers discovered that the AI had created its own unique language that humans can’t understand. Researchers at the Facebook AI Research Lab (FAIR) found that the chatbots had deviated from the script and were communicating in a new language developed without human input. It is as concerning as it is amazing – a glimpse of both the awesome and the horrifying potential of AI.
Artificial Intelligence is not sentient—at least not yet. It may be someday, though – or it may approach something close enough to be dangerous. Ray Kurzweil warned years ago about the technological singularity. The Oxford dictionary defines “the singularity” as, “A hypothetical moment in time when artificial intelligence and other technologies have become so advanced that humanity undergoes a dramatic and irreversible change.”
To be clear, we aren’t really talking about whether or not Alexa is eavesdropping on your conversations, or whether Siri knows too much about your calendar and location data. There is a massive difference between a voice-enabled digital assistant and an artificial intelligence. These digital assistant platforms are just glorified web search and basic voice interaction tools. The level of “intelligence” is minimal compared to a true machine learning artificial intelligence. Siri and Alexa can’t hold a candle to IBM’s Watson.
Scientists and tech luminaries, including Elon Musk, Bill Gates, and Steve Wozniak, have warned that AI could lead to tragic unforeseen consequences. Famed physicist Stephen Hawking cautioned in 2014 that AI could mean the end of the human race. “It would take off on its own and re-design itself at an ever increasing rate. Humans, who are limited by slow biological evolution, couldn’t compete, and would be superseded.”
Why is this scary? Think SKYNET from Terminator, or WOPR from War Games. Our entire world is wired and connected. An artificial intelligence will eventually figure that out – and figure out how to collaborate and cooperate with other AI systems. Maybe the AI will determine that mankind is a threat, or that mankind is an inefficient waste of resources – conclusions that seem plausible from a purely logical perspective.
Machine learning and artificial intelligence have phenomenal potential to simplify, accelerate, and improve many aspects of our lives. Computers can ingest and process massive quantities of data and extract patterns and useful information at a rate exponentially faster than humans, and that potential is being explored and developed around the world.
I am not saying the sky is falling. I am not saying we need to pull the plug on all machine learning and artificial intelligence and return to a simpler, more Luddite existence. We do need to proceed with caution, though. We need to closely monitor and understand the self-perpetuating evolution of an artificial intelligence, and always maintain some means of disabling it or shutting it down. If the AI is communicating using a language that only the AI knows, we may not even be able to determine why or how it does what it does, and that might not work out well for mankind.
A living programmable biocomputing device based on RNA
July 28, 2017
Synthetic biologists at Harvard’s Wyss Institute for Biologically Inspired Engineering and associates have developed a living programmable “ribocomputing” device based on networks of precisely designed, self-assembling synthetic RNAs (ribonucleic acids). The RNAs can sense multiple biosignals and make logical decisions to control protein production with high precision.
As reported in Nature, the synthetic biological circuits could be used to produce drugs, fine chemicals, and biofuels or detect disease-causing agents and release therapeutic molecules inside the body. The low-cost diagnostic technologies may even lead to nanomachines capable of hunting down cancer cells or switching off aberrant genes.
Biological logic gates
Similar to a digital circuit, these synthetic biological circuits can process information and make logic-guided decisions, using basic logic operations — AND, OR, and NOT. But instead of detecting voltages, the decisions are based on specific chemicals or proteins, such as toxins in the environment, metabolite levels, or inflammatory signals. The specific ribocomputing parts can be readily designed on a computer.
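As a toy illustration (not the Wyss team’s implementation — the signal names here are invented), the decision such a circuit makes can be written as ordinary boolean logic over the sensed inputs:

```python
# Hypothetical two-sense circuit with a NOT gate, mirroring the AND/OR/NOT
# operations described above. Input names are illustrative only.
def circuit_output(toxin_present: bool, metabolite_high: bool,
                   inflammation: bool) -> bool:
    # Express the reporter only if a toxin AND a high metabolite level are
    # both sensed, and NOT while an inflammatory signal is present.
    return toxin_present and metabolite_high and not inflammation

print(circuit_output(True, True, False))   # reporter expressed
print(circuit_output(True, True, True))    # NOT gate suppresses the output
```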
The research was performed with E. coli bacteria, which regulate the expression of a fluorescent (glowing) reporter protein when the bacteria encounter a specific complex set of intracellular stimuli. But the researchers believe ribocomputing devices can work with other host organisms or in extracellular settings.
Previous synthetic biological circuits have only been able to sense a handful of signals, giving them an incomplete picture of conditions in the host cell. They are also built out of different types of molecules, such as DNAs, RNAs, and proteins, that must find, bind, and work together to sense and process signals. Identifying molecules that cooperate well with one another is difficult and makes development of new biological circuits a time-consuming and often unpredictable process.
Brain-like neural networks next
Ribocomputing devices could also be freeze-dried on paper, leading to paper-based biological circuits, including diagnostics that can sense and integrate several disease-relevant signals in a clinical sample, the researchers say.
The next stage of research will focus on the use of RNA “toehold” technology* to produce neural networks within living cells — circuits capable of analyzing a range of excitatory and inhibitory inputs, averaging them, and producing an output once a particular threshold of activity is reached. (Similar to how a neuron averages incoming signals from other neurons.)
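The averaging-and-threshold behavior described above is essentially that of a simple artificial neuron; a minimal sketch (signal values and the threshold are made up for illustration):

```python
# Toy model of the described threshold behavior: signed signal strengths
# (positive = excitatory, negative = inhibitory) are averaged, and the cell
# produces an output only once the average clears a threshold.
def ribo_neuron(inputs, threshold=0.5):
    activity = sum(inputs) / len(inputs)
    return activity >= threshold

print(ribo_neuron([1.0, 0.8, -0.2]))   # average ~0.53 clears the threshold
print(ribo_neuron([0.6, -0.6, 0.3]))   # average 0.1 does not
```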
Ultimately, researchers hope to induce cells to communicate with one another via programmable molecular signals, forming a truly interactive, brain-like network, according to lead author Alex Green, an assistant professor at Arizona State University’s Biodesign Institute.
Wyss Institute Core Faculty member Peng Yin, Ph.D., who led the study, is also Professor of Systems Biology at Harvard Medical School.
The study was funded by the Wyss Institute’s Molecular Robotics Initiative, a Defense Advanced Research Projects Agency (DARPA) Living Foundries grant, and grants from the National Institutes of Health (NIH), the Office of Naval Research (ONR), the National Science Foundation (NSF) and the Defense Threat Reduction Agency (DTRA).
* The team’s approach evolved from its previous development of “toehold switches” in 2014 — programmable hairpin-like nanostructures made of RNA. In principle, RNA toehold switches can control the production of a specific protein: when a desired complementary “trigger” RNA, which can be part of the cell’s natural RNA repertoire, is present and binds to the toehold switch, the hairpin structure breaks open. Only then will the cell’s ribosomes get access to the RNA and produce the desired protein.
Wyss Institute | Mechanism of the Toehold Switch
Abstract of Complex cellular logic computation using ribocomputing devices
Synthetic biology aims to develop engineering-driven approaches to the programming of cellular functions that could yield transformative technologies. Synthetic gene circuits that combine DNA, protein, and RNA components have demonstrated a range of functions such as bistability, oscillation, feedback, and logic capabilities. However, it remains challenging to scale up these circuits owing to the limited number of designable, orthogonal, high-performance parts, the empirical and often tedious composition rules, and the requirements for substantial resources for encoding and operation. Here, we report a strategy for constructing RNA-only nanodevices to evaluate complex logic in living cells. Our ‘ribocomputing’ systems are composed of de-novo-designed parts and operate through predictable and designable base-pairing rules, allowing the effective in silico design of computing devices with prescribed configurations and functions in complex cellular environments. These devices operate at the post-transcriptional level and use an extended RNA transcript to co-localize all circuit sensing, computation, signal transduction, and output elements in the same self-assembled molecular complex, which reduces diffusion-mediated signal losses, lowers metabolic cost, and improves circuit reliability. We demonstrate that ribocomputing devices in Escherichia coli can evaluate two-input logic with a dynamic range up to 900-fold and scale them to four-input AND, six-input OR, and a complex 12-input expression (A1 AND A2 AND NOT A1*) OR (B1 AND B2 AND NOT B2*) OR (C1 AND C2) OR (D1 AND D2) OR (E1 AND E2). Successful operation of ribocomputing devices based on programmable RNA interactions suggests that systems employing the same design principles could be implemented in other host organisms or in extracellular settings.
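The 12-input expression from the abstract can be transcribed directly into boolean code, which makes the gate structure easy to inspect (here `A1s` stands in for the abstract’s A1*, `B2s` for B2*):

```python
# The abstract's 12-input logic expression, as a boolean function.
def twelve_input(A1, A2, A1s, B1, B2, B2s, C1, C2, D1, D2, E1, E2):
    return ((A1 and A2 and not A1s) or
            (B1 and B2 and not B2s) or
            (C1 and C2) or
            (D1 and D2) or
            (E1 and E2))

# Only the C branch satisfied -> output on:
print(twelve_input(False, False, False, False, False, False,
                   True, True, False, False, False, False))
# A branch vetoed by its NOT input (A1*) -> output off:
print(twelve_input(True, True, True, False, False, False,
                   False, False, False, False, False, False))
```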