Your ML model made it into production, but the job isn't done. The world changes, new data drifts away from the data the model was trained on, and eventually the model needs to be retrained. How can you tell when your model is no longer performing well, and what can you do about it?
During this webinar we will explore how to detect model drift with MLflow and Apache Spark Streaming on Databricks, applying them to IoT sensor data from glassware manufacturing to select products for manual quality inspection. We'll also highlight some subtle problems in online model evaluation, such as joining future ground-truth labels back to new data.
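As a minimal stand-alone sketch of one common drift-detection approach (not the webinar's exact MLflow/Spark pipeline), a two-sample Kolmogorov–Smirnov test can compare a production feature's distribution against its training-time baseline; the data, threshold, and variable names below are illustrative assumptions.

```python
# Minimal drift-check sketch: compare a production feature sample
# against its training baseline with a two-sample KS test.
# (Illustrative only; data and threshold are assumptions, not the
# webinar's exact MLflow/Spark Streaming pipeline.)
import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(42)
baseline = rng.normal(loc=0.0, scale=1.0, size=5000)  # training-time sensor readings
live = rng.normal(loc=0.5, scale=1.0, size=5000)      # shifted production readings

stat, p_value = ks_2samp(baseline, live)
drift_detected = p_value < 0.01  # reject "same distribution" at the 1% level
print(f"KS statistic={stat:.3f}, p={p_value:.2e}, drift={drift_detected}")
```

In a streaming setting this check would run on each incoming micro-batch, with the result logged (for example to MLflow) so that a sustained rejection triggers retraining.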
Specifically, we will cover:
Clemens Mewald, Director of Product Management, Databricks
Joel Thomas, Senior Solutions Architect, Databricks