Published on March 5, 2018

Title: Large-Scale Deep Learning with TensorFlow

Date: Thursday, July 07, 2016
Time: 12:00 PM Eastern Daylight Time
Duration: 1 hour, 5 minutes

Over the past few years, we have built two large-scale computer systems for training neural networks and applied them to a wide variety of problems that have traditionally been very difficult for computers. We have made significant improvements to the state of the art in many of these areas, and our software systems and algorithms have been used by dozens of groups at Google to train state-of-the-art models for speech recognition, image recognition, various visual detection tasks, language modeling, language translation, and many other tasks. TensorFlow, our second system, is a platform for machine learning research and product deployment, and was released as an open-source project in November 2015 (see tensorflow.org). I will also discuss ways in which we have applied our work to a variety of problems in Google's products, usually in close collaboration with other teams. This talk describes joint work with many people at Google.

Speaker Bio:
Jeff Dean joined Google in 1999 and is currently a Google Senior Fellow in Google's Research Group, where he leads the deep learning research team in Mountain View, working on systems for speech recognition, computer vision, language understanding, and various predictive tasks. He has co-designed and co-implemented five generations of Google's crawling, indexing, and query serving systems, as well as major pieces of Google's initial advertising and AdSense for Content systems. He is also a co-designer and co-implementer of Google's distributed computing infrastructure, including the MapReduce, BigTable, and Spanner systems, protocol buffers, LevelDB, systems infrastructure for statistical machine translation, and a variety of internal and external libraries and developer tools. He is currently working on large-scale distributed systems for machine learning. He received a Ph.D. in Computer Science from the University of Washington in 1996, working with Craig Chambers on compiler techniques for object-oriented languages. He is a Fellow of the ACM and the AAAS, a member of the U.S. National Academy of Engineering, and a recipient of the Mark Weiser Award and the ACM-Infosys Foundation Award in the Computing Sciences.