Thursday, 30 January 2020
WeWork BlueOne Square
246, Phase IV, Udyog Vihar
Gurgaon, Haryana, India - 122016

The widespread adoption of Apache Spark™, the first unified analytics engine, has helped data professionals make great strides in data science and machine learning. Yet, their upstream data lakes still face reliability challenges when it comes to building production data pipelines at scale to power these initiatives.

Delta Lake is an open source storage layer that brings reliability to data lakes. It provides ACID transactions, scalable metadata handling, and unified streaming and batch data processing. It also offers DML commands to update, delete, and merge data across your data lifecycle, for example to meet GDPR/CCPA requirements. Delta Lake runs on top of your existing data lake, whether on Azure Data Lake Storage, AWS S3, Hadoop HDFS, or on-premises storage, and is fully compatible with Apache Spark APIs.
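For a flavour of what the demo will show, here is a minimal PySpark sketch (not from the session itself). It assumes a Spark session launched with the Delta Lake package on the classpath; the table path, column names, and sample rows are purely illustrative.

    from pyspark.sql import SparkSession
    from delta.tables import DeltaTable

    # Minimal sketch: assumes Spark was started with the Delta Lake library,
    # e.g. spark-submit --packages io.delta:delta-core_2.11:0.5.0
    spark = SparkSession.builder.appName("delta-lake-sketch").getOrCreate()

    # Write a small DataFrame as a Delta table; each write is an ACID transaction.
    path = "/tmp/events_delta"  # illustrative path
    events = spark.createDataFrame(
        [(1, "alice", "click"), (2, "bob", "view")],
        ["user_id", "user", "action"],
    )
    events.write.format("delta").mode("overwrite").save(path)

    # DML on the same table, e.g. deleting one user's records for a GDPR request.
    DeltaTable.forPath(spark, path).delete("user = 'alice'")

    # Batch (or streaming) readers see a consistent snapshot of the table.
    spark.read.format("delta").load(path).show()

Because Delta tables are just files in your existing data lake plus a transaction log, the same pattern applies whether the path points to cloud object storage or HDFS.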

Join this meetup to learn how Delta Lake can help you build robust production data pipelines at scale. This event will give you the opportunity to:

  • Gain an understanding of the Delta Lake open source project
  • Learn how to build highly scalable and reliable data pipelines using Delta Lake
  • See Delta Lake in action with a demo
  • Ask Databricks experts your most challenging data questions
  • Network and learn from your data engineering and data science peers


Agenda at a glance

5:30 - 6:00pm Registration & Networking
6:00 - 6:30pm Opening Remarks: Delta Lake Overview & Customer Cases
6:30 - 7:15pm Delta Lake Demo
7:15 - 7:45pm Break
7:45 - 8:15pm Productionizing ML with Delta Lake
8:15 - 8:30pm Q&A


Space is limited for this event. Sign up today to reserve your spot!

Please fill out the form to confirm your spot.