Scalable End-to-End Deep Learning Using TensorFlow™ and Databricks

On-Demand Webinar
Deep Learning has shown tremendous success, and as we all know, more data generally yields better models. However, we eventually hit a bottleneck in how much data a single machine can process, which calls for a new way of training neural networks: distributed training.

In this webinar, we will walk through how to use TensorFlow™ and Horovod (an open-source library from Uber that simplifies distributed model training) on Databricks to build a more effective recommendation system at scale. We will cover:

  • The new Databricks Runtime for ML, which ships with pre-installed libraries such as Keras, TensorFlow, Horovod, and XGBoost so data scientists can get started with distributed machine learning more quickly
  • The newly released HorovodEstimator API for distributed, multi-GPU training of deep learning models against data in Apache Spark™ (the underlying Horovod training pattern is sketched below)
  • How to make predictions at scale with deep learning pipelines
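To give a flavor of the distributed training pattern the session covers, here is a minimal, illustrative sketch using Horovod's standard Keras API rather than the HorovodEstimator API itself; the toy dataset, model, and hyperparameters are placeholders, not the recommendation model built in the webinar.

    # A minimal sketch of distributed training with Horovod's Keras API.
    # Data, model, and hyperparameters are illustrative placeholders only.
    import numpy as np
    import tensorflow as tf
    import horovod.tensorflow.keras as hvd

    hvd.init()  # one Horovod process per GPU/worker

    # Toy data; in the webinar the training data would come from Apache Spark.
    x = np.random.rand(1000, 20).astype("float32")
    y = np.random.randint(0, 2, size=(1000, 1)).astype("float32")

    model = tf.keras.Sequential([
        tf.keras.layers.Dense(64, activation="relu", input_shape=(20,)),
        tf.keras.layers.Dense(1, activation="sigmoid"),
    ])

    # Scale the learning rate by the number of workers, then wrap the optimizer
    # so gradients are averaged across processes via ring-allreduce.
    opt = tf.keras.optimizers.Adam(0.001 * hvd.size())
    opt = hvd.DistributedOptimizer(opt)
    model.compile(optimizer=opt, loss="binary_crossentropy", metrics=["accuracy"])

    callbacks = [
        # Broadcast initial weights from rank 0 so every worker starts identically.
        hvd.callbacks.BroadcastGlobalVariablesCallback(0),
    ]

    model.fit(x, y, batch_size=32, epochs=2, callbacks=callbacks,
              verbose=1 if hvd.rank() == 0 else 0)

A script like this is typically launched across processes with horovodrun (for example, horovodrun -np 4 python train.py); HorovodEstimator wraps that orchestration so training can be driven directly from data in a Spark DataFrame.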

Presenters


Brooke Wenig
Data Science Solutions Consultant at Databricks


Siddarth Murching
Software Engineer at Databricks

Sign up today