Making Apache Spark™ Better with Delta Lake
Available On-Demand

Apache Spark™ is the dominant processing framework for big data. Delta Lake adds reliability to Spark so your analytics and machine learning initiatives have ready access to quality, reliable data. This webinar covers how Delta Lake enhances data reliability in Spark environments, including:


  • The role of Apache Spark in big data processing
  • Use of data lakes as an important part of the data architecture
  • Data lake reliability challenges
  • How Delta Lake helps provide reliable data for Spark processing
  • Specific improvements that Delta Lake adds
  • The ease of adopting Delta Lake for powering your data lake

Michael Armbrust, Principal Engineer, Databricks
Denny Lee, Product Marketing, Databricks
