Every enterprise today wants to accelerate innovation by building AI into its business. However, most companies struggle with preparing large datasets for analytics, managing the proliferation of ML frameworks, and moving models from development to production.
In this workshop, we’ll cover best practices for using powerful open source technologies to simplify and scale your ML efforts. We’ll discuss how to leverage Apache Spark™, the de facto data processing and analytics engine in enterprises today, to prepare data by unifying it at massive scale across diverse sources. You’ll also learn how to use ML frameworks (e.g., TensorFlow, XGBoost, and scikit-learn) to train models for different requirements. Finally, you’ll learn how to use MLflow to track experiment runs across multiple users within a reproducible environment and to manage model deployment to production on Amazon SageMaker.
Join this half-day workshop to learn how unified analytics can bring data science and engineering together to accelerate your ML efforts. This free workshop will give you the opportunity to:
- Learn how to build highly scalable and reliable pipelines for analytics
- Gain deeper insight into Apache Spark and Databricks, including the latest Delta Lake updates
- Train a model against data and learn best practices for working with ML frameworks (e.g., TensorFlow, XGBoost, and scikit-learn)
- Learn how to use MLflow to track experiments, share projects, and deploy models in the cloud with Amazon SageMaker
- Network and learn from your ML and Apache Spark peers
AGENDA AT A GLANCE
9:00-9:30 Registration & Networking
9:30-10:15 Unifying Data Science and Data Engineering
10:30-12:00 Hands-on Workshop: Preparing Data for Analytics with Delta Lake
12:00-13:00 Lunch (provided)
13:00-14:30 Hands-on workshop: Data Science powered by Databricks on AWS
15:00 End of Workshop
Space is limited for this event. Sign up today to reserve your spot!