As data volumes and complexity grow rapidly, the tools to process that data have not kept pace. The burden falls primarily on data engineering teams, who must stitch together and maintain many separate pieces of code and supporting infrastructure to move data from source to destination, work that is repeated across dozens of data sources and multiple stages of processing. Because engineers build ETL pipelines as individualized tasks, it is difficult to chain those tasks into end-to-end pipeline sequences.
With Delta Live Tables (DLT), data teams have an ETL framework that uses a simple declarative approach to building reliable data pipelines. DLT automatically manages your infrastructure at scale so you can spend less time on tooling and focus on getting value from data.
In this session, you will learn how to build and deploy a declarative streaming ETL pipeline at scale with DLT, and how DLT automates complex, time-consuming tasks such as task orchestration, error handling, recovery, and auto-scaling with performance optimizations. Finally, we will show you how DLT enables data teams to deliver fresh, up-to-date data with built-in quality controls and monitoring, ensuring accurate and useful BI, data science, and ML.
- Get started with Delta Live Tables (DLT)
- Create DLT pipelines with SQL, using a Twitter stream and Hugging Face example
- Create Workflows with DLT
- Use DLT from Python with data ingestion from OSS Apache Kafka or Confluent Cloud
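To give a flavor of the declarative approach covered in the session, a SQL-based DLT pipeline can be sketched as below. The table names, storage path, and columns are illustrative assumptions, not the actual demo code:

```sql
-- Illustrative sketch only: path, table names, and columns are assumed.
-- Ingest raw JSON tweet data incrementally with Auto Loader.
CREATE OR REFRESH STREAMING LIVE TABLE raw_tweets
COMMENT "Raw tweets ingested from cloud storage"
AS SELECT * FROM cloud_files("/data/tweets", "json");

-- Downstream table with a built-in quality expectation:
-- rows with a NULL text column are dropped automatically.
CREATE OR REFRESH STREAMING LIVE TABLE clean_tweets (
  CONSTRAINT valid_text EXPECT (text IS NOT NULL) ON VIOLATION DROP ROW
)
COMMENT "Tweets with basic quality checks applied"
AS SELECT id, text, created_at FROM STREAM(LIVE.raw_tweets);
```

Each `CREATE OR REFRESH STREAMING LIVE TABLE` statement declares *what* the table should contain; DLT infers the dependency graph, orchestrates execution, and enforces the `EXPECT` constraints, which is the infrastructure work the framework takes off your hands.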
- 9:00 – 9:15 AM: Introduction to Delta Live Tables
- 9:15 – 10:30 AM: Delta Live Tables demo
- 10:30 – 10:35 AM: Q&A and wrap-up
Register today and reserve your spot!