This page shows you how to make a batch prediction request to your trained AutoML classification or regression model using the Google Cloud console or the Vertex AI API.
A batch prediction request is an asynchronous request (as opposed to online prediction, which is a synchronous request). You request batch predictions directly from the model resource without needing to deploy the model to an endpoint. For tabular data, use batch predictions when you don't require an immediate response and want to process accumulated data by using a single request.
To make a batch prediction request, you specify an input source and an output format where Vertex AI stores prediction results.
Before you begin
Before you can make a batch prediction request, you must first train a model.
Input data
The input data for batch prediction requests is the data that your model uses to make predictions. For classification or regression models, you can provide input data in one of two formats:
- BigQuery tables
- CSV objects in Cloud Storage
We recommend that you use the same format for your input data as you used for training the model. For example, if you trained your model using data in BigQuery, it is best to use a BigQuery table as the input for your batch prediction. Because Vertex AI treats all CSV input fields as strings, mixing training and input data formats may cause errors.
Your data source must contain tabular data that includes all of the columns, in any order, that were used to train the model. You can include columns that were not in the training data, or that were in the training data but excluded from use for training. These extra columns are included in the output but don't affect the prediction results.
Input data requirements
BigQuery table
If you choose a BigQuery table as the input, you must ensure the following:
- BigQuery data source tables must be no larger than 100 GB.
- If the table is in a different project, you must grant the BigQuery Data Editor role to the Vertex AI service account in that project.
CSV file
If you choose a CSV object in Cloud Storage as the input, you must ensure the following:
- The data source must begin with a header row with the column names.
- Each data source object must not be larger than 10 GB. You can include multiple files, up to a maximum amount of 100 GB.
- If the Cloud Storage bucket is in a different project, you must grant the Storage Object Creator role to the Vertex AI service account in that project.
- You must enclose all strings in double quotation marks (").
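For example, the following sketch uses Python's standard csv module to produce an input object that meets these requirements; the column names and values are hypothetical placeholders.

import csv

# Hypothetical columns and rows; use the columns your model was trained with.
fieldnames = ["age", "occupation", "city"]
rows = [
    {"age": 42, "occupation": "engineer", "city": "Seattle"},
    {"age": 35, "occupation": "teacher", "city": "Portland"},
]

with open("batch_input.csv", "w", newline="") as f:
    # QUOTE_NONNUMERIC encloses every string field in double quotation marks.
    writer = csv.DictWriter(f, fieldnames=fieldnames, quoting=csv.QUOTE_NONNUMERIC)
    writer.writeheader()  # header row with the column names
    writer.writerows(rows)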
Output format
The output format of your batch prediction request doesn't need to be the same as the format that you used for the input. For example, if you used a BigQuery table as the input, you can output the results to a CSV object in Cloud Storage.
Make a batch prediction request to your model
To make batch prediction requests, you can use the Google Cloud console or the Vertex AI API. The input data source can be CSV objects stored in a Cloud Storage bucket or BigQuery tables. Depending on the amount of data that you submit as input, a batch prediction task can take some time to complete.
Google Cloud console
Use the Google Cloud console to request a batch prediction.
- In the Google Cloud console, in the Vertex AI section, go to the Batch predictions page.
- Click Create to open the New batch prediction window.
- For Define your batch prediction, complete the following steps:
- Enter a name for the batch prediction.
- For Model name, select the name of the model to use for this batch prediction.
- For Version, select the model version to use for this batch prediction.
- For Select source, select whether your source input data is a CSV file on Cloud Storage or a table in BigQuery.
- For CSV files, specify the Cloud Storage location where your CSV input file is located.
- For BigQuery tables, specify the project ID where the table is located, the BigQuery dataset ID, and the BigQuery table or view ID.
- For Output, select CSV or BigQuery.
- For CSV, specify the Cloud Storage bucket where Vertex AI stores your output.
- For BigQuery, you can specify a project ID or an existing dataset:
- To specify the project ID, enter the project ID in the Google Cloud project ID field. Vertex AI creates a new output dataset for you.
- To specify an existing dataset, enter its BigQuery path in the Google Cloud project ID field, such as bq://projectid.datasetid.
- Optional: You can request a prediction with explanations (also called feature attributions) to see how your model arrived at a prediction. The local feature importance values tell you how much each feature contributed to the prediction result. Feature attributions are included in Vertex AI predictions through Vertex Explainable AI.
To enable feature attributions, select Enable feature attributions for this model. This option is available if your output destination is BigQuery or JSONL on Cloud Storage. Feature attributions are not supported for CSV on Cloud Storage.
- Optional: Model Monitoring analysis for batch predictions is available in Preview. See the Prerequisites for adding skew detection configuration to your batch prediction job.
- Click to toggle on Enable model monitoring for this batch prediction.
- Select a Training data source. Enter the data path or location for the training data source that you selected.
- Optional: Under Alert thresholds, specify thresholds at which to trigger alerts.
- For Notification emails, enter one or more comma-separated email addresses to receive alerts when a model exceeds an alerting threshold.
- Optional: For Notification channels, add Cloud Monitoring channels to receive alerts when a model exceeds an alerting threshold. You can select existing Cloud Monitoring channels or create a new one by clicking Manage notification channels. The Console supports PagerDuty, Slack, and Pub/Sub notification channels.
- Click Create.
API: BigQuery
REST
You use the batchPredictionJobs.create method to request a batch prediction.
Before using any of the request data, make the following replacements:
- LOCATION_ID: Region where the model is stored and the batch prediction job is executed. For example, us-central1.
- PROJECT_ID: Your project ID
- BATCH_JOB_NAME: Display name for the batch job
- MODEL_ID: The ID for the model to use for making predictions
- INPUT_URI: Reference to the BigQuery data source, in the form bq://bqprojectId.bqDatasetId.bqTableId
- OUTPUT_URI: Reference to the BigQuery destination where the predictions are written. Specify the project ID and, optionally, an existing dataset ID. If you specify just the project ID, Vertex AI creates a new output dataset for you. Use the following form: bq://bqprojectId.bqDatasetId
- MACHINE_TYPE: The machine resources to be used for this batch prediction job. Learn more.
- STARTING_REPLICA_COUNT: The starting number of nodes for this batch prediction job. The node count can be increased or decreased as required by load, up to the maximum number of nodes, but will never fall below this number.
- MAX_REPLICA_COUNT: The maximum number of nodes for this batch prediction job. The node count can be increased or decreased as required by load, but will never exceed the maximum. Optional, defaults to 10.
- GENERATE_EXPLANATION: You can request a prediction with explanations (also called feature attributions) to see how your model arrived at a prediction. The local feature importance values tell you how much each feature contributed to the prediction result. Feature attributions are included in Vertex AI predictions through Vertex Explainable AI. The default value is false. Set to true to enable feature attributions.
HTTP method and URL:
POST https://LOCATION_ID-aiplatform.googleapis.com/v1/projects/PROJECT_ID/locations/LOCATION_ID/batchPredictionJobs
Request JSON body:
{ "displayName": "BATCH_JOB_NAME", "model": "MODEL_ID", "inputConfig": { "instancesFormat": "bigquery", "bigquerySource": { "inputUri": "INPUT_URI" } }, "outputConfig": { "predictionsFormat": "bigquery", "bigqueryDestination": { "outputUri": "OUTPUT_URI" } }, "dedicatedResources": { "machineSpec": { "machineType": "MACHINE_TYPE", "acceleratorCount": "0" }, "startingReplicaCount": STARTING_REPLICA_COUNT, "maxReplicaCount": MAX_REPLICA_COUNT }, "generateExplanation": GENERATE_EXPLANATION }
To send your request, choose one of these options:
curl
Save the request body in a file named request.json, and execute the following command:
curl -X POST \
-H "Authorization: Bearer $(gcloud auth print-access-token)" \
-H "Content-Type: application/json; charset=utf-8" \
-d @request.json \
"https://LOCATION_ID-aiplatform.googleapis.com/v1/projects/PROJECT_ID/locations/LOCATION_ID/batchPredictionJobs"
PowerShell
Save the request body in a file named request.json, and execute the following command:
$cred = gcloud auth print-access-token
$headers = @{ "Authorization" = "Bearer $cred" }
Invoke-WebRequest `
-Method POST `
-Headers $headers `
-ContentType: "application/json; charset=utf-8" `
-InFile request.json `
-Uri "https://LOCATION_ID-aiplatform.googleapis.com/v1/projects/PROJECT_ID/locations/LOCATION_ID/batchPredictionJobs" | Select-Object -Expand Content
You should receive a JSON response similar to the following:
{ "name": "projects/PROJECT_ID/locations/LOCATION_ID/batchPredictionJobs/67890", "displayName": "batch_job_1 202005291958", "model": "projects/12345/locations/us-central1/models/5678", "state": "JOB_STATE_PENDING", "inputConfig": { "instancesFormat": "bigquery", "bigquerySource": { "inputUri": "INPUT_URI" } }, "outputConfig": { "predictionsFormat": "bigquery", "bigqueryDestination": { "outputUri": bq://12345 } }, "dedicatedResources": { "machineSpec": { "machineType": "n1-standard-32", "acceleratorCount": "0" }, "startingReplicaCount": 2, "maxReplicaCount": 6 }, "manualBatchTuningParameters": { "batchSize": 4 }, "generateExplanation": false, "outputInfo": { "bigqueryOutputDataset": "bq://12345.reg_model_2020_10_02_06_04 } "state": "JOB_STATE_PENDING", "createTime": "2020-09-30T02:58:44.341643Z", "updateTime": "2020-09-30T02:58:44.341643Z", }
Java
Before trying this sample, follow the Java setup instructions in the Vertex AI quickstart using client libraries. For more information, see the Vertex AI Java API reference documentation.
To authenticate to Vertex AI, set up Application Default Credentials. For more information, see Set up authentication for a local development environment.
In the following sample, replace INSTANCES_FORMAT and PREDICTIONS_FORMAT with `bigquery`. To learn how to replace the other placeholders, see the REST tab of this section.

Python
To learn how to install or update the Vertex AI SDK for Python, see Install the Vertex AI SDK for Python. For more information, see the Python API reference documentation.
In the following sample, set the `instances_format` and `predictions_format` parameters to `"bigquery"`. To learn how to set the other parameters, see the REST tab of this section.
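The following is a minimal sketch using the Vertex AI SDK for Python; the project, region, model ID, machine type, and BigQuery URIs are placeholder values that you replace with your own.

from google.cloud import aiplatform

# Placeholder values; replace with your own.
aiplatform.init(project="PROJECT_ID", location="us-central1")

model = aiplatform.Model("MODEL_ID")

# Request batch predictions from a BigQuery table into a BigQuery dataset.
batch_prediction_job = model.batch_predict(
    job_display_name="BATCH_JOB_NAME",
    instances_format="bigquery",
    predictions_format="bigquery",
    bigquery_source="bq://bqprojectId.bqDatasetId.bqTableId",
    bigquery_destination_prefix="bq://bqprojectId.bqDatasetId",
    machine_type="n1-standard-4",
    starting_replica_count=1,
    max_replica_count=10,
    generate_explanation=False,
    sync=True,  # wait for the job to complete
)

print(batch_prediction_job.display_name)
print(batch_prediction_job.state)

API: Cloud Storage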
REST
You use the batchPredictionJobs.create method to request a batch prediction.
Before using any of the request data, make the following replacements:
- LOCATION_ID: Region where the model is stored and the batch prediction job is executed. For example, us-central1.
- PROJECT_ID: Your project ID
- BATCH_JOB_NAME: Display name for the batch job
- MODEL_ID: The ID for the model to use for making predictions
- URI: Paths (URIs) to the Cloud Storage buckets containing the input data. There can be more than one. Each URI has the form: gs://bucketName/pathToFileName
- OUTPUT_URI_PREFIX: Path to a Cloud Storage destination where the predictions are written. Vertex AI writes batch predictions to a timestamped subdirectory of this path. Set this value to a string with the following format: gs://bucketName/pathToOutputDirectory
- MACHINE_TYPE: The machine resources to be used for this batch prediction job. Learn more.
- STARTING_REPLICA_COUNT: The starting number of nodes for this batch prediction job. The node count can be increased or decreased as required by load, up to the maximum number of nodes, but will never fall below this number.
- MAX_REPLICA_COUNT: The maximum number of nodes for this batch prediction job. The node count can be increased or decreased as required by load, but will never exceed the maximum. Optional, defaults to 10.
- GENERATE_EXPLANATION: You can request a prediction with explanations (also called feature attributions) to see how your model arrived at a prediction. The local feature importance values tell you how much each feature contributed to the prediction result. Feature attributions are included in Vertex AI predictions through Vertex Explainable AI. The default value is false. Set to true to enable feature attributions. This option is available only if your output destination is JSONL. Feature attributions are not supported for CSV on Cloud Storage.
HTTP method and URL:
POST https://LOCATION_ID-aiplatform.googleapis.com/v1/projects/PROJECT_ID/locations/LOCATION_ID/batchPredictionJobs
Request JSON body:
{ "displayName": "BATCH_JOB_NAME", "model": "MODEL_ID", "inputConfig": { "instancesFormat": "csv", "gcsSource": { "uris": [ URI1,... ] }, }, "outputConfig": { "predictionsFormat": "csv", "gcsDestination": { "outputUriPrefix": "OUTPUT_URI_PREFIX" } }, "dedicatedResources": { "machineSpec": { "machineType": "MACHINE_TYPE", "acceleratorCount": "0" }, "startingReplicaCount": STARTING_REPLICA_COUNT, "maxReplicaCount": MAX_REPLICA_COUNT }, "generateExplanation": GENERATE_EXPLANATION }
To send your request, choose one of these options:
curl
Save the request body in a file named request.json, and execute the following command:
curl -X POST \
-H "Authorization: Bearer $(gcloud auth print-access-token)" \
-H "Content-Type: application/json; charset=utf-8" \
-d @request.json \
"https://LOCATION_ID-aiplatform.googleapis.com/v1/projects/PROJECT_ID/locations/LOCATION_ID/batchPredictionJobs"
PowerShell
Save the request body in a file named request.json, and execute the following command:
$cred = gcloud auth print-access-token
$headers = @{ "Authorization" = "Bearer $cred" }
Invoke-WebRequest `
-Method POST `
-Headers $headers `
-ContentType: "application/json; charset=utf-8" `
-InFile request.json `
-Uri "https://LOCATION_ID-aiplatform.googleapis.com/v1/projects/PROJECT_ID/locations/LOCATION_ID/batchPredictionJobs" | Select-Object -Expand Content
You should receive a JSON response similar to the following:
{ "name": "projects/PROJECT__ID/locations/LOCATION_ID/batchPredictionJobs/67890", "displayName": "batch_job_1 202005291958", "model": "projects/12345/locations/us-central1/models/5678", "state": "JOB_STATE_PENDING", "inputConfig": { "instancesFormat": "csv", "gcsSource": { "uris": [ "gs://bp_bucket/reg_mode_test" ] } }, "outputConfig": { "predictionsFormat": "csv", "gcsDestination": { "outputUriPrefix": "OUTPUT_URI_PREFIX" } }, "dedicatedResources": { "machineSpec": { "machineType": "n1-standard-32", "acceleratorCount": "0" }, "startingReplicaCount": 2, "maxReplicaCount": 6 }, "manualBatchTuningParameters": { "batchSize": 4 } "outputInfo": { "gcsOutputDataset": "OUTPUT_URI_PREFIX/prediction-batch_job_1 202005291958-2020-09-30T02:58:44.341643Z" } "state": "JOB_STATE_PENDING", "createTime": "2020-09-30T02:58:44.341643Z", "updateTime": "2020-09-30T02:58:44.341643Z", }
Retrieve batch prediction results
Vertex AI sends the output of batch predictions to the destination that you specified, which can be either BigQuery or Cloud Storage.
BigQuery
Output dataset
If you are using BigQuery, the output of batch prediction is stored in an output dataset. If you provided a dataset to Vertex AI, the dataset name (BQ_DATASET_NAME) is the name that you provided earlier. If you did not provide an output dataset, Vertex AI created one for you. You can find its name (BQ_DATASET_NAME) with the following steps:
- In the Google Cloud console, go to the Vertex AI Batch predictions page.
- Select the prediction you created.
- The output dataset is given in Export location. The dataset name is formatted as follows: prediction_MODEL_NAME_TIMESTAMP
The output dataset contains one or more of the following output tables:
- Predictions table

This table contains a row for every row in your input data where a prediction was requested (that is, where TARGET_COLUMN_NAME = null).

- Errors table

This table contains a row for each non-critical error encountered during batch prediction. Each non-critical error corresponds to a row in the input data that Vertex AI could not return a prediction for.
Predictions table
The name of the table (BQ_PREDICTIONS_TABLE_NAME) is formed by appending the timestamp of when the batch prediction job started to `predictions_`: predictions_TIMESTAMP
To retrieve predictions, go to the BigQuery page.
The format of the query depends on your model type.

Classification:

SELECT
  predicted_TARGET_COLUMN_NAME.classes AS classes,
  predicted_TARGET_COLUMN_NAME.scores AS scores
FROM BQ_DATASET_NAME.BQ_PREDICTIONS_TABLE_NAME
`classes` is the list of potential classes, and `scores` are the corresponding confidence scores.
Regression:
SELECT
  predicted_TARGET_COLUMN_NAME.value,
  predicted_TARGET_COLUMN_NAME.lower_bound,
  predicted_TARGET_COLUMN_NAME.upper_bound
FROM BQ_DATASET_NAME.BQ_PREDICTIONS_TABLE_NAME
If you enabled feature attributions, you can find them in the predictions table as well. To access attributions for a feature BQ_FEATURE_NAME, run the following query:
SELECT explanation.attributions[OFFSET(0)].featureAttributions.BQ_FEATURE_NAME
FROM BQ_DATASET_NAME.BQ_PREDICTIONS_TABLE_NAME
Errors table
The name of the table (BQ_ERRORS_TABLE_NAME) is formed by appending the timestamp of when the batch prediction job started to `errors_`: errors_TIMESTAMP
To retrieve the errors validation table:

- In the console, go to the BigQuery page.

- Run the following query:

SELECT * FROM BQ_DATASET_NAME.BQ_ERRORS_TABLE_NAME

Errors are stored in the following columns:

- errors_TARGET_COLUMN_NAME.code
- errors_TARGET_COLUMN_NAME.message
Cloud Storage
If you specified Cloud Storage as your output destination, the results of your batch prediction request are returned as CSV objects in a new folder in the bucket you specified. The name of the folder is the name of your model, prepended with "prediction-" and appended with the timestamp of when the batch prediction job started. You can find the Cloud Storage folder name in the Batch predictions tab for your model.
The Cloud Storage folder contains two kinds of objects:

- Prediction objects

The prediction objects are named `predictions_1.csv`, `predictions_2.csv`, and so on. They contain a header row with the column names, and a row for every prediction returned. In the prediction objects, Vertex AI returns your prediction data and creates one or more new columns for the prediction results based on your model type:

- Classification: For each potential value of your target column, a column named TARGET_COLUMN_NAME_VALUE_score is added to the results. This column contains the score, or confidence estimate, for that value.
- Regression: The predicted value for that row is returned in a column named predicted_TARGET_COLUMN_NAME. The prediction interval is not returned for CSV output.
- Error objects

The error objects are named `errors_1.csv`, `errors_2.csv`, and so on. They contain a header row, and a row for every row in your input data for which Vertex AI could not return a prediction (for example, if a non-nullable feature was null).
Note: If the results are large, they are split into multiple objects.
Feature attributions are not available for batch prediction results returned in Cloud Storage.
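As an illustration, the following sketch downloads the prediction objects for a classification model and reads the score columns; the bucket name, folder prefix, and column names are hypothetical.

import csv
import io

from google.cloud import storage

# Hypothetical bucket and prediction folder; replace with your own.
BUCKET_NAME = "my-output-bucket"
PREFIX = "prediction-my_model-2020-09-30T02:58:44.341643Z/"

client = storage.Client()
rows = []
for blob in client.list_blobs(BUCKET_NAME, prefix=PREFIX):
    # Read only the prediction objects, not the error objects.
    if "predictions_" not in blob.name:
        continue
    reader = csv.DictReader(io.StringIO(blob.download_as_text()))
    rows.extend(reader)

# For a hypothetical target column "will_buy" with values "yes" and "no",
# the score columns are named will_buy_yes_score and will_buy_no_score.
for row in rows[:5]:
    print(row["will_buy_yes_score"], row["will_buy_no_score"])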
Interpret prediction results
Classification
Classification models return a confidence score.
The confidence score communicates how strongly your model associates each class or label with a test item. The higher the number, the higher the model's confidence that the label should be applied to that item. You decide how high the confidence score must be for you to accept the model's results.
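For example, here is a minimal sketch of applying a confidence threshold to a classification result in the classes/scores format described earlier; the data and threshold value are hypothetical.

# Hypothetical classification result in the classes/scores format.
prediction = {"classes": ["no", "yes"], "scores": [0.08, 0.92]}

THRESHOLD = 0.75  # choose this value based on your tolerance for error

# Pick the class with the highest confidence score.
best_class, best_score = max(
    zip(prediction["classes"], prediction["scores"]), key=lambda p: p[1]
)

if best_score >= THRESHOLD:
    print(f"Accept: {best_class} (score {best_score:.2f})")
else:
    print(f"Below threshold ({best_score:.2f}); review manually")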
Regression
Regression models return a prediction value. For BigQuery destinations, they also return a prediction interval. The prediction interval provides a range of values that the model is 95% confident contains the actual result.
Interpret explanation results
If your batch prediction results are stored in BigQuery and you chose to enable feature attributions, you can find the feature attribution values in the predictions table.
To calculate local feature importance, first the baseline prediction score is calculated. Baseline values are computed from the training data, using the median value for numeric features and the mode for categorical features. The prediction generated from the baseline values is the baseline prediction score. Baseline values are calculated once for a model and do not change.
For a specific prediction, the local feature importance for each feature tells you how much that feature added to or subtracted from the result as compared with the baseline prediction score. The sum of all of the feature importance values equals the difference between the baseline prediction score and the prediction result.
For classification models, the score is always between 0.0 and 1.0, inclusive. Therefore, local feature importance values for classification models are always between -1.0 and 1.0 (inclusive).
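To make the relationship concrete, here is a small sketch with hypothetical values for one regression prediction: the prediction result equals the baseline prediction score plus the sum of the local feature importance values.

# Hypothetical values for a single regression prediction.
baseline_prediction_score = 52.0  # computed once from the training data
local_feature_importance = {
    "sq_footage": 14.5,   # pushed the prediction up
    "age_of_roof": -3.2,  # pushed the prediction down
    "zip_code": 6.7,
}
prediction_result = 70.0

# The attributions sum to the difference between the prediction result
# and the baseline prediction score: 18.0 == 70.0 - 52.0.
assert abs(
    sum(local_feature_importance.values())
    - (prediction_result - baseline_prediction_score)
) < 1e-9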
For examples of feature attribution queries and to learn more, see Feature Attributions for Classification and Regression.

What's next
- Learn how to export your model.