https://www.sciencedaily.com/releases/2018/01/180117131202.htm

A cell holds 42 million protein molecules, scientists reveal

Date: January 17, 2018
Source: University of Toronto
Summary: Scientists have finally put their finger on how many protein molecules there are in a cell, ending decades of guesswork and clearing the way for further research on how protein abundance affects the health of an organism.
Image: Yeast cells expressing proteins that carry green and red fluorescent tags to make them visible. Credit: Brandon Ho

It’s official — there are some 42 million protein molecules in a simple cell, according to a team of researchers led by Grant Brown, a biochemistry professor in the University of Toronto’s Donnelly Centre for Cellular and Biomolecular Research. Analyzing data from almost two dozen large studies of protein abundance in yeast cells, the team was able to produce for the first time reliable estimates of the number of molecules of each protein, as described in a study published this week in the journal Cell Systems.

The work was done in collaboration with Anastasia Baryshnikova, a U of T alum and now Principal Investigator at Calico, a California biotechnology company that focuses on aging.

Proteins make up our cells and do most of the work in them. In this way, they bring the genetic code to life: the recipes for building proteins are stored within the DNA code of our genes.

Explaining the work, Brown said that given that “the cell is the functional unit of biology, it’s just a natural curiosity to want to know what’s in there and how much of each kind.”

Curiosity notwithstanding, there’s another reason why scientists would want to tally up proteins. Many diseases are caused by either having too little or too much of a certain protein. The more scientists know about how protein abundance is controlled, the better they’ll be able to fix it when it goes awry.

Although researchers have studied protein abundance for years, the findings were reported in arbitrary units, sowing confusion in the field and making it hard to compare data between different labs.

Many groups, for example, have estimated protein levels by sticking a fluorescent tag on protein molecules and inferring their abundance from how much the cells glow. But inevitable differences in instrumentation meant that different labs recorded different levels of brightness emitted by the cells. Other labs measured protein levels using completely different approaches.

“It was hard to conceptualize how many proteins there are in the cell because the data was reported on drastically different scales,” said Brandon Ho, a graduate student in the Brown lab who did most of the work on the project.

To convert arbitrary measures into the number of molecules per cell, Ho turned to baker’s yeast, an easy-to-study single-cell microbe that offers a window into how a basic cell works. Yeast is also the only organism for which enough data was available to calculate the number of molecules for each of the 6,000 proteins encoded by its genome, thanks to 21 separate studies that measured the abundance of all yeast proteins. No such datasets exist for human cells, where each cell type contains only a subset of the proteins encoded by the 20,000 human genes.

The wealth of existing yeast data meant that Ho could put it all together, benchmark it and convert the vague measures of protein abundance into “something that makes sense, in other words, molecules per cell,” said Brown.
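The paper describes a more careful statistical unification, but the basic idea of putting studies reported in arbitrary units onto one absolute scale can be sketched roughly as below. The protein names, the numbers, and the single-factor calibration approach are illustrative assumptions, not the authors' actual pipeline.

```python
import numpy as np

# Rough sketch: rescale two studies, each reported in its own arbitrary
# units, to a common molecules-per-cell scale. All values are made up.
study_a = {"ADH1": 120_000.0, "ACT1": 45_000.0, "RNR2": 900.0}
study_b = {"ADH1": 3_400.0, "ACT1": 1_250.0, "RNR2": 27.0}

# A few "calibrant" proteins whose absolute copy numbers are assumed known
# from an independent quantitative measurement (again, invented numbers).
calibrants = {"ADH1": 800_000, "ACT1": 250_000}

def to_molecules_per_cell(study):
    """Scale one study so its calibrant proteins match the known copy numbers."""
    ratios = [calibrants[p] / study[p] for p in calibrants if p in study]
    factor = float(np.median(ratios))          # one global scaling factor
    return {protein: units * factor for protein, units in study.items()}

for label, study in (("study A", study_a), ("study B", study_b)):
    scaled = to_molecules_per_cell(study)
    print(label, {p: round(v) for p, v in scaled.items()})

# Once every study is on the same scale, per-protein estimates can be
# combined across studies (for example by taking the median) and summed to
# give a whole-cell total.
```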

Ho’s analysis reveals for the first time how many molecules of each protein there are in the cell, with a total number of molecules estimated to be around 42 million. The majority of proteins exist within a narrow range — between 1,000 and 10,000 molecules. Some are outstandingly plentiful at more than half a million copies, while others exist in fewer than 10 molecules in a cell.

Analyzing the data, the researchers were able to glean insights into the mechanisms by which cells control abundance of distinct proteins, paving the way for similar studies in human cells that could help reveal molecular roots of disease. They also showed that a protein’s supply correlates with its role in the cell, which means that it may be possible to use the abundance data to predict what proteins are doing.

Finally, in a finding that will have cell biologists everywhere rejoicing, Ho showed that the common practice of stitching glowing tags onto proteins has little effect on their abundance. While the approach has revolutionized the study of protein biology, netting its discoverers Osamu Shimomura, Martin Chalfie and Roger Tsien the Nobel Prize in Chemistry in 2008, it also stoked worries that tagging could affect protein stability, which would skew the data.

“This study will be of great value to the entire yeast community and beyond,” said Robert Nash, senior biocurator of the Saccharomyces Genome Database, which will make the data available to researchers worldwide. He added that by presenting protein abundance “in a common and intuitive format, the Brown lab has provided other researchers with the opportunity to reexamine this data and thereby facilitate study-to-study comparisons and hypothesis generation.”

The research was funded by the Canadian Cancer Society Research Institute.

Story Source:

Materials provided by University of Toronto. Original written by Jovana Drinjakovic. Note: Content may be edited for style and length.


Journal Reference:

  1. Brandon Ho, Anastasia Baryshnikova, Grant W. Brown. Unification of Protein Abundance Datasets Yields a Quantitative Saccharomyces cerevisiae Proteome. Cell Systems, 2018; DOI: 10.1016/j.cels.2017.12.004


http://www.bbc.com/news/health-42809445

First monkey clones created in Chinese laboratory

Media caption: Footage courtesy of Qiang Sun and Mu-ming Poo / Chinese Academy of Sciences

Two monkeys have been cloned using the technique that produced Dolly the sheep.

Identical long-tailed macaques Zhong Zhong and Hua Hua were born several weeks ago at a laboratory in China.

Scientists say populations of monkeys that are genetically identical will be useful for research into human diseases.

But critics say the work raises ethical concerns by bringing the world closer to human cloning.

Qiang Sun of the Chinese Academy of Sciences Institute of Neuroscience said the cloned monkeys will be useful as a model for studying diseases with a genetic basis, including some cancers, metabolic and immune disorders.

“There are a lot of questions about primate biology that can be studied by having this additional model,” he said.

Image: Zhong Zhong, one of the first two monkeys created by somatic cell nuclear transfer. Credit: Chinese Academy of Sciences

Zhong Zhong was born eight weeks ago and is named after the word for Chinese nation or people. Hua Hua was born six weeks ago.

The researchers say the monkeys are being bottle fed and are currently growing normally. They expect more macaque clones to be born over the coming months.

‘Not a stepping stone’

Prof Robin Lovell-Badge of The Francis Crick Institute, London, said the technique used to clone Zhong Zhong and Hua Hua remains “a very inefficient and hazardous procedure”.

“The work in this paper is not a stepping-stone to establishing methods for obtaining live born human clones,” he said.


Prof Darren Griffin of the University of Kent said the approach may be useful in understanding human diseases, but raised ethical concerns.

“Careful consideration now needs to be given to the ethical framework under which such experiments can, and should, operate,” he said.

Dolly made history 20 years ago after being cloned at the Roslin Institute in Edinburgh. It was the first time scientists had been able to clone a mammal from an adult cell, taken from the udder.

Image: Dolly the sheep was the first mammal to be cloned, 20 years ago. Credit: Science Photo Library

Since then many other mammals have been cloned using the same somatic cell nuclear transfer (SCNT) technique, including cattle, pigs, dogs, cats, mice and rats. This involves transferring DNA from the nucleus of a cell into a donated egg cell that has had its own genetic material removed and is then prompted to develop into an embryo.

Zhong Zhong and Hua Hua are the first non-human primates cloned through this technique.

In 1999, a rhesus monkey embryo was split in two in order to create identical twins. One of the baby monkeys born through that technique – called Tetra – holds the title of the world’s first cloned monkey, but the approach did not involve the complex process of DNA transfer.

‘Much failure’

In the study, published in the journal Cell, scientists used DNA from foetal connective tissue cells.

After the DNA was transferred to donated eggs, genetic reprogramming was used to alter genes that would otherwise have stopped the embryo developing.

Image: Hua Hua, one of the first monkey clones made by somatic cell nuclear transfer, was born six weeks ago in a Shanghai lab. Credit: Chinese Academy of Sciences

Zhong Zhong and Hua Hua were the result of 79 attempts. Two other monkeys were initially cloned from a different type of adult cell, but failed to survive.

Dr Sun said: “We tried several different methods, but only one worked. There was much failure before we found a way to successfully clone a monkey.”

The scientists say they followed strict international guidelines for animal research, set by the US National Institutes of Health.

Co-researcher Dr Muming Poo, also of the Chinese Academy of Sciences in Shanghai, said: “We are very aware that future research using non-human primates anywhere in the world depends on scientists following very strict ethical standards.”

https://www.theverge.com/2018/1/24/16927040/ai-neuromorphic-engineering-computing-mit-brain-chip

MIT researchers say new chip design takes us closer to computers that work like our brains

New designs for ‘neuromorphic’ processors could mean faster, cheaper AI

Inside a (non-neuromorphic) chip built by Foxconn.

Advances in machine learning have moved at a gallop in recent years, but the computer processors these programs run on have barely changed. To remedy this, companies have been re-tuning existing chip architecture to fit the demands of AI, but on the cutting edge of research, an entirely new approach is taking shape: remaking processors so they work more like our brains.

This is called “neuromorphic computing,” and scientists from MIT this week said they’ve made significant progress in getting this new breed of chips up and running. Their research, published in the journal Nature Materials, could eventually lead to processors that run machine learning tasks with lower energy demands — up to 1,000 times less. This would enable us to give more devices AI abilities like voice and image recognition.

To understand what these researchers have done, you need to know a little about neuromorphic chips. The key difference between these processors and the ones used in your computer is that they process data in an analog, rather than a digital fashion. This means that instead of sending information in a series of on / off electrical bursts, they vary the intensity of these signals — just like our brain’s synapses do.

This means that more information can be packed into each jolt, drastically reducing the amount of power needed. It’s like the difference between Morse code and speech. The former encodes data using just two outputs, dots and dashes, making messages easy to decode but lengthy to communicate. Speech, by comparison, can be difficult to interpret (think fuzzy phone lines and noisy cafes), but each individual utterance holds much more data.
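To put a rough number on that intuition: a binary pulse with two distinguishable states carries one bit, while a pulse whose level can be set to N distinguishable values carries log2(N) bits. The short sketch below just computes that relationship; the specific level counts are illustrative, not figures from the MIT work.

```python
import math

# Information carried by one pulse as a function of how many distinguishable
# signal levels it can take. Binary on/off = 2 levels = 1 bit per pulse.
for levels in (2, 4, 16, 256):
    bits = math.log2(levels)
    print(f"{levels:>3} levels -> {bits:.0f} bits per pulse")

# 256 reliably distinguishable levels would move 8x as much data per pulse as
# on/off signalling, provided the levels can be set and read back consistently,
# which is exactly the hard part in analog hardware.
```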

A big difficulty with building neuromorphic chips, though, is being able to precisely control these analog signals. Their intensity needs to vary, yes, but in a controlled and consistent fashion.

Attempts to find a suitable medium for these varying electrical signals to travel through have previously been unsuccessful, because the current ends up spreading out all over the place. To fix this, researchers led by MIT’s Jeehwan Kim used crystalline forms of silicon and germanium that form orderly lattices at the microscopic level. Together, these create clear pathways for the electrical signals, leading to much less variance in signal strength.

“This is the most uniform device we could achieve, which is the key to demonstrating artificial neural networks,” Kim told MIT News.

To test this premise, Kim and his team created a simulation of their new chip design with the same degree of variance in signals. Using it, they were able to train a neural network to recognize handwriting (a standard benchmark task for new forms of AI) with 95 percent accuracy. That’s less than the 97 percent baseline achieved with existing algorithms and chips, but it’s promising for a new technology.
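The team's actual simulation modelled their device physics and is not reproduced here. As a loose illustration of the general idea, the sketch below trains a simple handwritten-digit classifier and then perturbs its learned weights with Gaussian noise to mimic device-to-device variance, so you can watch accuracy degrade as the variance grows. The dataset, model, and noise levels are all assumptions made for illustration.

```python
import copy
import numpy as np
from sklearn.datasets import load_digits
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# Train a plain softmax classifier on scikit-learn's small handwritten-digit set.
X, y = load_digits(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
clf = LogisticRegression(max_iter=2000).fit(X_train, y_train)
print("accuracy with exact weights:", round(clf.score(X_test, y_test), 3))

# Mimic analog-device variance: multiply each weight by (1 + noise), where the
# noise is Gaussian with a given relative standard deviation.
rng = np.random.default_rng(0)
for rel_sigma in (0.01, 0.05, 0.20):
    noisy = copy.deepcopy(clf)
    noisy.coef_ = clf.coef_ * (1 + rng.normal(0.0, rel_sigma, clf.coef_.shape))
    print(f"accuracy with {rel_sigma:.0%} weight variance:",
          round(noisy.score(X_test, y_test), 3))
```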

There’s a long way to go before we’ll know whether neuromorphic chips are suitable for mass production and real-world use. But when you’re trying to redesign how computers think from the ground up, you have to put in a lot of work. Making sure neuromorphic chips are firing their electric synapses in order is just the start.

https://www.thestar.com/business/2018/01/24/apple-ready-to-expand-into-speaker-market-with-homepod.html

Apple ready to expand into speaker market with HomePod

The HomePod includes a voice-activated assistant to help people manage their homes and the rest of their lives. Apple relies on Siri to process commands.

SAN FRANCISCO—Apple is finally ready to launch its attempt to compete with the internet-connected speakers made by Amazon and Google with the release of its long-awaited HomePod.

Preorders for the HomePod will begin Friday in the U.S., U.K. and Australia, two weeks before the speaker goes on sale in stores for $349 (U.S.).

Apple had intended to release the HomePod last month during the holiday shopping season, but delayed its debut to refine the product.

Both Amazon’s Echo and Google’s Home speakers have been expanding their reach into people’s homes since Apple announced the HomePod last June. Amazon and Google also are selling their speakers for substantially less, with streamlined versions of their devices available for below $50.

Read more:

Amazon launches Echo, Alexa in Canada

Google unveils latest Pixel smartphone, the company’s answer to Apple’s iPhone

Like its rivals, the HomePod includes a voice-activated assistant to help people manage their homes and the rest of their lives. Apple relies on Siri to process commands while Amazon leans on Alexa and Google features the eponymous Assistant.

All three companies are trying to build digital command centres within homes in an attempt to sell some of their other products.

In Apple’s case, the company has designed the HomePod as a high-fidelity speaker tied to its music-streaming service, which charges $10 per month for unlimited access to a library consisting of about 45 million songs. The HomePod is supposed to be able to learn people’s musical tastes so it can automatically find and play songs that they will like.

https://9to5mac.com/2018/01/23/watchos-4-2-1/

watchOS 4.2.1 is now available for Apple Watch

https://9to5mac.com/2018/01/23/ios-11-2-5/

iOS 11.2.5 is now available for iPhone and iPad, adds HomePod support and more

Apple has officially released iOS 11.2.5 for iPhone, iPad, and iPod touch. The release build version is 15D60. The update primarily adds support for HomePod ahead of its release next month, but it includes a few other goodies as well.

The new version is available on Apple’s developer center and is rolling out to all users now.

HomePod support

– Set up and automatically transfer your Apple ID, Apple Music, Siri and Wi-Fi settings to HomePod.

Siri News

– Siri can now read the news; just ask, “Hey Siri, play the news.” You can also ask for specific news categories, including Sports, Business or Music.

Other improvements and fixes

– Addresses an issue that could cause the Phone app to display incomplete information in the call list
– Fixes an issue that caused Mail notifications from some Exchange accounts to disappear from the Lock screen when unlocking iPhone X with Face ID
– Addresses an issue that could cause Messages conversations to temporarily be listed out of order
– Fixes an issue in CarPlay where Now Playing controls become unresponsive after multiple track changes
– Adds ability for VoiceOver to announce playback destinations and AirPod battery level

The new version should also fix the chaiOS message bug discovered this month.



https://boingboing.net/2018/01/23/timescale-mismatch.html

If humans gave up on geoengineering after 50 years, it could be far worse than if we had done nothing at all

In Potentially dangerous consequences for biodiversity of solar geoengineering implementation and termination (published in Nature Ecology and Evolution, Sci-Hub mirror), a group of cross-institutional US climate scientists model what would happen if humans embarked upon a solar geoengineering project to mitigate the greenhouse effect by releasing reflective aerosol particles into the atmosphere, then gave up on the project after a mere half-century.

The model shows catastrophic outcomes: while the project is running, habitats are well-protected from climate change and animals cease chasing new latitudes they can survive in. But when the project is abandoned, all the effects of climate change are felt at once, and any capacity to outrace climate change through migration is wiped out, along with the species that could have benefited from it.

50 years is a very long time in politics, and an eyeblink in geological timescales. Any geoengineering project will have to be unthinkably well-insulated from shifts in the political winds in order to avoid jeopardizing the planet.

All living things would have to move between four and six times faster than they do today, and around three times faster than they would under the non-geoengineering temperatures projected for 2070, to survive the heat. In tropical areas, which have the highest biodiversity, organisms would have to move at around 9.8 kilometers per year, or around 6 miles per year, which is far above the maximum speed at which the majority of organisms can move. This is especially bad for tropical species, which are less heat tolerant.

Figure 2 in the paper compares temperature velocities with and without geoengineering in 2020, when it is initially implemented, and in 2070, after it has been terminated.

Animals have to move at these high speeds on over 90 percent of the Earth’s surface. Ecosystems start to fragment, a scenario in which the climate becomes so drastically different in an area that species must move in different directions to find suitable habitat. Imagine lions and zebras, both currently found in southern Africa, fleeing in different directions.

“In many cases, temperature would change in one direction and precipitation would change in another direction,” Alan Robock, a climate scientist based at Rutgers University and one of the co-authors, told me in a phone interview. Some habitats with specific temperatures and rainfalls would no longer exist. Once this happens, it’s pretty easy to imagine the whole ecosystem crumbling down.

https://www.washingtonpost.com/news/the-switch/wp/2018/01/23/apples-homepod-is-finally-coming-heres-how-it-stacks-up-against-amazon-and-google/?utm_term=.13e8a38ab7bd

Apple’s HomePod is finally coming. Here’s how it stacks up against Amazon and Google.


Apple’s HomePod was introduced during the keynote address at the Worldwide Developers Conference at the McEnery Convention Center in San Jose, California, USA, 05 June 2017. It was then delayed; it will arrive in stores on Feb. 9. (Courtesy of Apple)

Apple on Tuesday announced a Feb. 9 release date for its smart speaker, the HomePod, allowing the company that first popularized virtual assistants through its smartphones to finally enter the market for virtual assistants in the home.

The looming question for the company, though, is whether it is too late.

Apple is releasing its smart speaker nearly three years after the Amazon Echo and after the Google Home, which launched in 2016. Apple also originally scheduled the HomePod to hit the market in December 2017 before the critical holiday shopping season. But the device was delayed until early 2018, past the time for avid shopping and gift giving.

For those wondering whether they should buy Apple’s new thing — pre-orders are set for Friday — here’s a look at how the HomePod stacks up.

Price: By setting the price of the HomePod at $350, Apple appears to be hoping consumers will see it as a premium device.

For that reason, the HomePod is probably more comparable to the $400 Google Home Max, which is Google’s most expensive smart speaker, than cheaper versions that go for $130. The HomePod also is far more expensive than Amazon’s highest-end Echo speaker, the $150 Echo Plus, which is more focused on smart home setup than audio. Amazon’s main Echo device costs $100. (Amazon chief executive Jeffrey P. Bezos is the owner of The Washington Post.)

In essence, Apple hopes consumers will believe the HomePod is of much higher-quality than others on the market — which is not an easy sell for a niche device.

Apple has been able to sell devices at a premium price tag in the past. But it will be testing the value of its brand by asking consumers to spend hundreds more to buy its smart speaker.

Speaker: Apple has been playing up its audio quality as the biggest way its HomePod stands out from the pack. The company wants to remind you that this is a speaker and not just a box where Siri lives. (Though it is that, too.) Apple claims the device is also designed to make a variety of music sound good. That’s not always something its competitors can claim, though Google has touted the audio quality of the Home Max.

HomePod, like the Home Max, uses sound waves to map the room and adjusts its playback accordingly. For example, placing it on a table near the wall should make it play music differently than if you put it in the middle of the room.

The music lovers who will benefit most from HomePod, of course, will be Apple Music subscribers. Per Apple’s release, HomePod will be able to carry out lots of commands specifically from Apple Music’s 45 million song catalogue. HomePod can also handle conversational questions, such as “Hey Siri, can you play something totally different?”

Still, HomePod will launch without an important audio feature: the ability to link HomePods together. That’s something Google and Amazon’s speakers can now do. Apple said this feature won’t be available until later this year.

What’s more, dedicated speaker companies such as Sonos are partnering with Google and Amazon to make smart speakers sound better. So Apple is facing plenty of competition as it tries to make a mark as a late entry into the market.

Intelligence: The HomePod should be able to do the things that Siri already does in your phone, such as set timers and check the weather. The speaker will have Apple’s connected home software built in, meaning it will be compatible with the light bulbs, thermostats and other devices that already work with your iPhone. It will also work with the Apple TV as a speaker for both music and video.

HomePod will also work as a speaker for your smartphone calls, either on speakerphone or using FaceTime’s audio chat.

But much of its intelligence will be connected to the abilities of Siri, which is already under fire for not being as conversational or versatile as Google’s Assistant or Amazon’s Alexa.

Then there’s another big question: How well will Apple play with others to make non-Apple devices and software work with the HomePod?

Both Google and Amazon have aggressively courted other companies to beef up skills (both useful and playful) for their speakers: playing trivia games, ordering pizzas, meowing like a cat, etc.

Apple announced that HomePod will work with a few popular services such as Evernote and WhatsApp. And it has strong ties with app developers thanks to its App Store. But the company will have to bring those relationships to a new platform to make the HomePod worth its high price.

https://www.raspberrypi.org/blog/pi-spy-alexa-skill/

RASPBERRY PI SPY’S ALEXA SKILL

With Raspberry Pi projects using home assistant services such as Amazon Alexa and Google Home becoming more and more popular, we invited Raspberry Pi maker Matt ‘Raspberry Pi Spy’ Hawkins to write a guest post about his latest project, the Pi Spy Alexa Skill.


Pi Spy Skill

The Alexa system uses Skills to provide voice-activated functionality, and it allows you to create new Skills to add extra features. With the Pi Spy Skill, you can ask Alexa what function each pin on the Raspberry Pi’s GPIO header provides, for example by using the phrase “Alexa, ask Pi Spy what is Pin 2.” In response to a phrase such as “Alexa, ask Pi Spy where is GPIO 8”, Alexa can now also tell you on which pin you can find a specific GPIO reference number.

This information is already available in various forms, but I thought it would be useful to retrieve it when I was busy soldering or building circuits and had no hands free.

Creating an Alexa Skill

There is a learning curve to creating a new Skill, and in some regards the process is similar to mobile app development.

A Skill consists of two parts: the first is created within the Amazon Developer Console and defines the structure of the voice commands Alexa should recognise. The second part is a webservice that can receive data extracted from the voice commands and provide a response back to the device. You can create the webservice on a webserver, internet-connected device, or cloud service.

I decided to use Amazon’s AWS Lambda service. Once set up, this allows you to write code without having to worry about the server it is running on. It also supports Python, so it fit in nicely with most of my other projects.

To get started, I logged into the Amazon Developer Console with my personal Amazon account and navigated to the Alexa section. I created a new Skill named Pi Spy. Within a Skill, you define an Intent Schema and some Sample Utterances. The schema defines individual intents, and the utterances define how these are invoked by the user.

Here is how my ExaminePin intent is defined in the schema:

[Screenshot: the ExaminePin intent as defined in the Intent Schema]
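The screenshot itself isn't reproduced here. For readers who haven't seen one, an intent schema of this era is a JSON document listing each intent and its slots; a rough, hypothetical reconstruction, written as the equivalent Python structure, might look like the sketch below. Only the ExaminePin intent and the PinID slot come from the post; the slot types, the second intent and its name are guesses.

```python
# Hypothetical sketch of an Alexa Intent Schema (the Python equivalent of the
# JSON the Developer Console expects). ExaminePin and PinID are from the post;
# everything else here is an illustrative guess, not Pi Spy's actual schema.
intent_schema = {
    "intents": [
        {
            "intent": "ExaminePin",
            "slots": [{"name": "PinID", "type": "AMAZON.NUMBER"}],
        },
        {
            # A second intent for "where is GPIO 8"-style questions.
            "intent": "LocateGPIO",
            "slots": [{"name": "GPIOID", "type": "AMAZON.NUMBER"}],
        },
    ]
}
```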

Example utterances then attempt to capture the different phrases the user might speak to their device.

[Screenshot: Sample Utterances for the Pi Spy Skill]

Whenever Alexa matches a spoken phrase to an utterance, it passes the name of the intent and the variable PinID to the webservice.

In the test section, you can check what JSON data will be generated and passed to your webservice in response to specific phrases. This allows you to verify that the webservice’s responses are correct.

[Screenshot: the JSON request generated by the test console]

Over on the AWS Services site, I created a Lambda function based on one of the provided examples to receive the incoming requests. Here is the section of that code which deals with the ExaminePin intent:

[Screenshot: the section of the Lambda function that handles the ExaminePin intent]
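The code in that screenshot isn't reproduced here either. Below is a minimal, hypothetical sketch of how a Lambda handler for the ExaminePin intent might look, using a Python dictionary of pin descriptions as the next paragraph describes. The dictionary contents, helper names and fallback wording are illustrative assumptions, not the actual Pi Spy source.

```python
# Hypothetical sketch only; not the actual Pi Spy Lambda code.
PIN_INFO = {
    "1": "3.3 volts power",
    "2": "5 volts power",
    "6": "ground",
    "8": "GPIO 14, the UART transmit pin",
    # ... one entry per pin on the 40-pin header
}

def build_response(speech_text):
    """Wrap plain text in the response structure the Alexa service expects."""
    return {
        "version": "1.0",
        "response": {
            "outputSpeech": {"type": "PlainText", "text": speech_text},
            "shouldEndSession": True,
        },
    }

def examine_pin(intent):
    """Answer 'Alexa, ask Pi Spy what is Pin N'."""
    pin_id = intent["slots"]["PinID"]["value"]
    description = PIN_INFO.get(pin_id, "a pin I don't know about")
    return build_response(f"Pin {pin_id} is {description}.")

def lambda_handler(event, context):
    """Entry point that Lambda calls with the JSON request from Alexa."""
    request = event["request"]
    if request["type"] == "IntentRequest" and request["intent"]["name"] == "ExaminePin":
        return examine_pin(request["intent"])
    return build_response("Sorry, I didn't understand that.")
```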

For this intent, I used a Python dictionary to match the incoming pin number to its description. Another Python function deals with the GPIO queries. A URL to this Lambda function was added to the Skill as its ‘endpoint’.

As with the Skill, the Python code can be tested to iron out any syntax errors or logic problems.

With suitable configuration, it would be possible to create the webservice on a Pi, and that is something I’m currently working on. This approach is particularly interesting, as the Pi can then be used to control local hardware devices such as cameras, lights, or pet feeders.

Note

My Alexa Skill is currently only available to UK users. I’m hoping Amazon will choose to copy it to the US service, but I think that is down to its perceived popularity, or it may be done in bulk based on release date. In the next update, I’ll be adding an American English version to help speed up this process.