Introduction to Vertex AI Workbench instances

Vertex AI Workbench instances are Jupyter notebook-based development environments for the entire data science workflow. You can interact with Vertex AI and other Google Cloud services from within a Vertex AI Workbench instance's Jupyter notebook.

Vertex AI Workbench integrations and features can make it easier to access your data, process data faster, schedule notebook runs, and more.

Vertex AI Workbench instances are prepackaged with JupyterLab and have a preinstalled suite of deep learning packages, including support for the TensorFlow and PyTorch frameworks. You can configure either CPU-only or GPU-enabled instances.

Vertex AI Workbench instances can sync with a GitHub repository, and they are protected by Google Cloud authentication and authorization.

Access to data

You can access your data without leaving the JupyterLab user interface.

In JupyterLab's navigation menu on a Vertex AI Workbench instance, you can use the Cloud Storage integration to browse data and other files that you have access to. See Access Cloud Storage buckets and files from within JupyterLab.

You can also use the BigQuery integration to browse tables that you have access to, write queries, preview results, and load data into your notebook. See Query data in BigQuery tables from within JupyterLab.
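The same data is also reachable from the JupyterLab terminal. As a sketch, using the `gcloud storage` and `bq` command-line tools that ship with the instance (the bucket name is a placeholder; the BigQuery public dataset is real):

```shell
# List objects in a Cloud Storage bucket you have access to
# (gs://my-example-bucket is a placeholder).
gcloud storage ls gs://my-example-bucket/

# Run a GoogleSQL query against a public BigQuery dataset with the bq CLI.
bq query --use_legacy_sql=false \
  'SELECT name, SUM(number) AS total
   FROM `bigquery-public-data.usa_names.usa_1910_2013`
   GROUP BY name
   ORDER BY total DESC
   LIMIT 5'
```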

Execute notebook runs

Use the executor to run a notebook file as a one-time execution or on a schedule. Choose the specific environment and hardware that you want your execution to run on. Your notebook's code runs on Vertex AI custom training, which can make it easier to do distributed training, optimize hyperparameters, or schedule continuous training jobs.

You can use parameters in your execution to make specific changes to each run. For example, you might specify a different dataset to use, change the learning rate on your model, or change the version of the model.

You can also set a notebook to run on a recurring schedule. Even while your instance is shut down, Vertex AI Workbench runs your notebook file and saves the results for you to review and share with others.

Share insights

Executed notebook runs are stored in a Cloud Storage bucket, so you can share your insights with others by granting access to the results. See the previous section on executing notebook runs.

Secure your instance

You can deploy your Vertex AI Workbench instance with the default Google-managed network, which uses a default VPC network and subnet. Instead of the default network, you can specify a VPC network to use with your instance.

By default, Google Cloud automatically encrypts data when it is at rest using encryption keys managed by Google. If you have specific compliance or regulatory requirements related to the keys that protect your data, you can use customer-managed encryption keys (CMEK) with your Vertex AI Workbench instances. For more information, see Customer-managed encryption keys.
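For illustration, a create command that specifies both a custom VPC network and a CMEK key might look like the following. The flag names are assumptions to verify against `gcloud workbench instances create --help`, and all project, network, and key resource names are placeholders:

```shell
# Create an instance on a custom VPC subnet, encrypted with a
# customer-managed key (all resource names below are placeholders).
gcloud workbench instances create my-instance \
  --location=us-central1-a \
  --network=projects/my-project/global/networks/my-vpc \
  --subnet=projects/my-project/regions/us-central1/subnetworks/my-subnet \
  --kms-key=projects/my-project/locations/us-central1/keyRings/my-ring/cryptoKeys/my-key
```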

Automated shutdown for idle instances

To help manage costs, Vertex AI Workbench instances shut down after being idle for a specific time period by default. You can change the amount of time or turn this feature off. For more information, see Idle shutdown.
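As a sketch, assuming the timeout is controlled through the `idle-timeout-seconds` metadata key (verify the key name and supported values on the Idle shutdown page), you could set a three-hour timeout at creation time:

```shell
# Set the idle timeout to 3 hours (10800 seconds); the metadata key name
# is an assumption -- confirm it in the Idle shutdown documentation.
gcloud workbench instances create my-instance \
  --location=us-central1-a \
  --metadata=idle-timeout-seconds=10800
```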

Add conda environments

Vertex AI Workbench instances use kernels based on conda environments. You can add a conda environment to your Vertex AI Workbench instance, and the environment appears as a kernel in your instance's JupyterLab interface.

Adding conda environments lets you use kernels that aren't available in the default Vertex AI Workbench instance. For example, you can add conda environments for R and Apache Beam. Or you can add conda environments for specific earlier versions of the available frameworks, such as TensorFlow, PyTorch, or Python.
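As a sketch, one common way to add an environment for an earlier TensorFlow version from the JupyterLab terminal is shown below; the environment name and version numbers are placeholders, and the exact steps for your instance are on the linked page:

```shell
# Create a conda environment pinned to an earlier TensorFlow release
# and register it as a Jupyter kernel (names and versions are examples).
conda create -n tf-2.11 python=3.10 -y
conda activate tf-2.11
pip install tensorflow==2.11 ipykernel
python -m ipykernel install --user --name tf-2.11 \
  --display-name "TensorFlow 2.11"
```

After the kernel is registered, it appears in the JupyterLab launcher alongside the default kernels.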

For more information, see Add a conda environment.

Custom containers

You can create a Vertex AI Workbench instance based on a custom container. Start with a Google-provided base container image, and modify it for your needs. Then create an instance based on your custom container.
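A minimal sketch of this workflow, starting from the Google-provided base image and pushing to a placeholder Artifact Registry path (the extra package is also a placeholder):

```shell
# Derive a custom container from the Google-provided base image.
cat > Dockerfile <<'EOF'
FROM gcr.io/deeplearning-platform-release/workbench-container:latest
# Placeholder: install whatever your team needs on top of the base image.
RUN pip install --no-cache-dir my-team-library
EOF

# Build and push to Artifact Registry (repository path is a placeholder).
docker build -t us-central1-docker.pkg.dev/my-project/my-repo/my-workbench:v1 .
docker push us-central1-docker.pkg.dev/my-project/my-repo/my-workbench:v1
```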

For more information, see Create an instance using a custom container.

Dataproc integration

You can process data quickly by running a notebook on a Dataproc cluster. After your cluster is set up, you can run a notebook file on it without leaving the JupyterLab user interface. For more information, see Create a Dataproc-enabled instance.
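For example, a small single-node cluster with the Jupyter optional component and Component Gateway enabled could be created like this (cluster name and region are placeholders; see the linked page for the settings your instance requires):

```shell
# Create a minimal Dataproc cluster that exposes Jupyter kernels.
gcloud dataproc clusters create my-cluster \
  --region=us-central1 \
  --single-node \
  --optional-components=JUPYTER \
  --enable-component-gateway
```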

Create instances with third-party credentials

You can create and manage Vertex AI Workbench instances with third-party credentials provided by Workforce Identity Federation. Workforce Identity Federation uses your external identity provider (IdP) to grant a group of users access to Vertex AI Workbench instances through a proxy.

Access to a Vertex AI Workbench instance is granted by assigning a workforce pool principal to the Vertex AI Workbench instance's service account.
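As a rough sketch of that assignment, you would bind a workforce pool principal set to the instance's service account; the service account, pool name, and especially the role shown here are assumptions, so confirm the required role on the linked page:

```shell
# Grant a workforce pool's identities access via the instance's service
# account. The role (roles/iam.serviceAccountUser) is an assumption --
# check the documentation for the role the proxy actually requires.
gcloud iam service-accounts add-iam-policy-binding \
  my-sa@my-project.iam.gserviceaccount.com \
  --member="principalSet://iam.googleapis.com/locations/global/workforcePools/my-pool/*" \
  --role="roles/iam.serviceAccountUser"
```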

For more information, see Create an instance with third-party credentials.

Tags for Vertex AI Workbench instances

The underlying VM of a Vertex AI Workbench instance is a Compute Engine VM. You can add resource tags to your Vertex AI Workbench instance and manage them through its Compute Engine VM.

When you create a Vertex AI Workbench instance, Vertex AI Workbench attaches the Compute Engine resource tag vertex-workbench-instances:prod=READ_ONLY. This resource tag is only used for internal purposes.

To learn more about managing tags for Compute Engine instances, see Manage tags for resources.
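As a sketch, attaching an existing tag value to the instance's underlying VM follows the Compute Engine tag-binding pattern; the tag value ID, project number, zone, and instance ID below are all placeholders:

```shell
# Bind an existing tag value to the underlying Compute Engine VM.
# All IDs are placeholders; zonal resources require --location.
gcloud resource-manager tags bindings create \
  --tag-value=tagValues/123456789012 \
  --parent=//compute.googleapis.com/projects/111111111111/zones/us-central1-a/instances/222222222222 \
  --location=us-central1-a
```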

Limitations

Consider the following limitations of Vertex AI Workbench instances when planning your project:

  • Third-party JupyterLab extensions aren't supported.

  • When you use Access Context Manager and Chrome Enterprise Premium to protect Vertex AI Workbench instances with context-aware access controls, access is evaluated each time the user authenticates to the instance. For example, access is evaluated the first time the user accesses JupyterLab and whenever they access it thereafter if their web browser's cookie has expired.

  • Using a custom container that isn't derived from the Google-provided base container (gcr.io/deeplearning-platform-release/workbench-container:latest) increases the risk of compatibility issues with our services and isn't supported. Instead, modify the base container to create a custom container that meets your needs, and then create an instance using the custom container.

  • Although the use of custom virtual machine (VM) images or custom Deep Learning VM images with Vertex AI Workbench instances is possible, Vertex AI Workbench doesn't provide any support for unexpected behaviors or malfunctions on custom images.

  • The use of a user-managed notebooks image or managed notebooks image to create a Vertex AI Workbench instance isn't supported.

  • You can't edit the underlying VM of a Vertex AI Workbench instance by using the Google Cloud console or the Compute Engine API. To edit a Vertex AI Workbench instance's underlying VM, use the projects.locations.instances.patch method in the Notebooks API or the gcloud workbench instances update command in the Google Cloud SDK.

  • In instances that use VPC Service Controls, use of the executor isn't supported.

  • Modifying network tags of Vertex AI Workbench instances isn't supported.

  • To use accelerators with Vertex AI Workbench instances, the accelerator type that you want must be available in your instance's zone. To learn about accelerator availability by zone, see GPU regions and zones availability.
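As noted in the limitations above, the underlying VM can be edited only through the Notebooks API or the `gcloud workbench instances update` command. As a sketch, changing the machine type might look like the following; the `--machine-type` flag is an assumption to verify with `gcloud workbench instances update --help`, and the instance typically must be stopped first:

```shell
# Stop the instance, then change its underlying VM's machine type
# (instance name, zone, and machine type are placeholders).
gcloud workbench instances stop my-instance --location=us-central1-a
gcloud workbench instances update my-instance \
  --location=us-central1-a \
  --machine-type=n1-standard-8
```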

What's next