Talk held on Feb 3rd, 2016
“Four years ago we started the Google Brain project, a small effort to see if we could build training systems for large-scale deep neural networks and use these to make significant progress on various perceptual tasks. Since then, our software systems and algorithms have been used by dozens of different groups at Google to train state-of-the-art models for speech recognition, image recognition, various visual detection tasks, language modeling, search ranking, language translation, and various other tasks.
We have recently open-sourced TensorFlow, our second-generation software system for developing and deploying models. In this talk, I’ll highlight some of the distributed systems and algorithms that we use in order to train large models quickly. I’ll then discuss ways in which we have applied this work to a variety of problems in Google’s products, usually in close collaboration with other teams.”