Published on July 3, 2018

In this episode of TensorFlow Meets, we are joined by Chris Gottbrath from NVIDIA and X.Q. from the Google Brain team to talk about NVIDIA TensorRT. NVIDIA TensorRT is a high-performance, programmable inference accelerator that delivers low latency and high throughput for deep learning applications. Developers can build neural networks and deploy them in production or on devices with the full performance that GPUs can offer. The TensorFlow integration with TensorRT lets developers enjoy the diversity and flexibility of TensorFlow while still getting the high accuracy and performance that TensorRT provides. Watch to learn more, and leave your questions for the TensorFlow team in the comments below!
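As a rough sketch of what this integration looked like around the time of the video, the TF-TRT converter in TensorFlow 1.x's contrib module could rewrite a frozen graph so that supported subgraphs run as TensorRT engines. The model path, output node name, batch size, and workspace size below are illustrative assumptions, not values from the episode.

```python
# Minimal sketch: optimizing a frozen TensorFlow graph with TF-TRT (TensorFlow 1.x contrib API).
# The graph path, output node name, and sizes are placeholders for illustration only.
import tensorflow as tf
from tensorflow.contrib import tensorrt as trt

# Load a frozen GraphDef (placeholder path).
with tf.gfile.GFile("frozen_model.pb", "rb") as f:
    frozen_graph = tf.GraphDef()
    frozen_graph.ParseFromString(f.read())

# Ask TF-TRT to replace supported subgraphs with TensorRT engines.
trt_graph = trt.create_inference_graph(
    input_graph_def=frozen_graph,
    outputs=["logits"],                 # placeholder output node name
    max_batch_size=1,                   # largest batch size the engines should handle
    max_workspace_size_bytes=1 << 25,   # scratch memory TensorRT may use
    precision_mode="FP16")              # "FP32" and "INT8" are also supported

# trt_graph can now be imported and run like any other GraphDef,
# with unsupported ops still executed by TensorFlow.
```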

Get started → https://developer.nvidia.com/tensorrt

Subscribe → http://bit.ly/TensorFlow1
