As a Data Scientist, my common workflow is to train a model locally (in my notebook), log the parameters, log the training time series metrics to Vertex AI TensorBoard, and log the evaluation metrics.
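A minimal sketch of that workflow with the Vertex AI SDK (the project, region, experiment name, and TensorBoard resource name are placeholders, and the metric values are purely illustrative):

```python
from google.cloud import aiplatform

# Placeholder project, region, experiment, and TensorBoard resource name.
aiplatform.init(
    project="my-project",
    location="us-central1",
    experiment="my-experiment",
    experiment_tensorboard=(
        "projects/my-project/locations/us-central1/tensorboards/1234567890"
    ),
)

aiplatform.start_run("run-1")

# Log the hyperparameters for this run.
aiplatform.log_params({"learning_rate": 0.01, "epochs": 10})

# Stream training time series metrics to Vertex AI TensorBoard.
for epoch, loss in enumerate([0.9, 0.5, 0.3]):  # illustrative values
    aiplatform.log_time_series_metrics({"train_loss": loss}, step=epoch)

# Log the final evaluation metrics.
aiplatform.log_metrics({"accuracy": 0.87, "f1": 0.82})

aiplatform.end_run()
```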
As a Data Scientist, I want to reuse data pre-processing code that others within my company have written, to simplify and standardize the complex data wrangling we do. Specifically, I want to:
- Use a Python data pre-processing library to clean up an in-memory dataset (a pandas DataFrame) in a notebook.
- Train a model using Keras, again in a notebook; a sketch of both steps follows this list.
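A minimal sketch of both steps, where the hypothetical `clean` helper stands in for the shared company pre-processing library and is reduced here to plain pandas:

```python
import pandas as pd
from tensorflow import keras

# In-memory dataset with some messy values.
df = pd.DataFrame({
    "amount": [10.0, None, 32.5, 7.8],
    "label": [0, 1, 0, 1],
})

def clean(frame: pd.DataFrame) -> pd.DataFrame:
    """Hypothetical stand-in for a shared company pre-processing helper."""
    return frame.dropna().reset_index(drop=True)

df = clean(df)

# Train a tiny Keras model on the cleaned DataFrame.
X = df[["amount"]].to_numpy(dtype="float32")
y = df["label"].to_numpy(dtype="float32")

model = keras.Sequential([
    keras.Input(shape=(1,)),
    keras.layers.Dense(8, activation="relu"),
    keras.layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
model.fit(X, y, epochs=5, verbose=0)
```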
Notebook: Model experimentation with preprocessed data
In the "Build Vertex AI Experiments lineage for custom training" notebook, you'll learn how to integrate preprocessing code in Vertex AI Experiments. Also, you'll build the experiment lineage that lets you record, analyze, debug, and audit metadata and artifacts produced along your ML journey.
You can view the artifact lineage in the Google Cloud console.