Resource: Model
A trained machine learning Model.
name
string
The resource name of the Model.
versionId
string
Output only. Immutable. The version id of the model. A new version is committed when a new model version is uploaded or trained under an existing model id. It is an auto-incrementing decimal number in string representation.
versionAliases[]
string
User-provided version aliases so that a model version can be referenced via alias, i.e. projects/{project}/locations/{location}/models/{modelId}@{version_alias} instead of the auto-generated version ID, i.e. projects/{project}/locations/{location}/models/{modelId}@{versionId}. The format is [a-z][a-zA-Z0-9-]{0,126}[a-z0-9] to distinguish aliases from versionId. A default version alias is created for the first version of the model, and there must be exactly one default version alias for a model.
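The sketch below is a minimal illustration with placeholder project, location, and model identifiers: it builds the @-suffixed resource name for an alias versus a versionId, and checks an alias against the documented format.

```python
import re

# Documented alias format: [a-z][a-zA-Z0-9-]{0,126}[a-z0-9]
ALIAS_PATTERN = re.compile(r"[a-z][a-zA-Z0-9-]{0,126}[a-z0-9]")


def model_version_name(project: str, location: str, model_id: str, version: str) -> str:
    """Build a Model resource name pinned to a version.

    `version` may be a version alias (e.g. "default") or the auto-incrementing
    versionId (e.g. "3"); both use the @-suffix form shown above.
    """
    return f"projects/{project}/locations/{location}/models/{model_id}@{version}"


def is_valid_alias(alias: str) -> bool:
    """Check a user-provided alias against the documented format."""
    return ALIAS_PATTERN.fullmatch(alias) is not None


# Placeholder identifiers, not real resources:
print(model_version_name("my-project", "us-central1", "my-model", "default"))
print(model_version_name("my-project", "us-central1", "my-model", "3"))
print(is_valid_alias("prod-v2"))  # True
print(is_valid_alias("3"))        # False: an alias must not look like a versionId
```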
versionCreateTime
string (Timestamp format)
Output only. Timestamp when this version was created.
A timestamp in RFC3339 UTC "Zulu" format, with nanosecond resolution and up to nine fractional digits. Examples: "2014-10-02T15:01:23Z" and "2014-10-02T15:01:23.045123456Z".
versionUpdateTime
string (Timestamp format)
Output only. Timestamp when this version was most recently updated.
A timestamp in RFC3339 UTC "Zulu" format, with nanosecond resolution and up to nine fractional digits. Examples: "2014-10-02T15:01:23Z" and "2014-10-02T15:01:23.045123456Z".
displayName
string
Required. The display name of the Model. The name can be up to 128 characters long and can consist of any UTF-8 characters.
description
string
The description of the Model.
versionDescription
string
The description of this version.
predictSchemata
object (PredictSchemata)
The schemata that describe the formats of the Model's predictions and explanations as given and returned via PredictionService.Predict and PredictionService.Explain.
metadataSchemaUri
string
Immutable. Points to a YAML file stored on Google Cloud Storage describing additional information about the Model that is specific to it. Unset if the Model does not have any additional information. The schema is defined as an OpenAPI 3.0.2 Schema Object. AutoML Models always have this field populated by Vertex AI; if no additional metadata is needed, this field is set to an empty string. Note: the URI given on output will be immutable and probably different (including the URI scheme) from the one given on input. The output URI will point to a location where the user only has read access.
metadata
value (Value format)
Immutable. Additional information about the Model; the schema of the metadata can be found in metadataSchema. Unset if the Model does not have any additional information.
supportedExportFormats[]
object (ExportFormat)
Output only. The formats in which this Model may be exported. If empty, this Model is not available for export.
trainingPipeline
string
Output only. The resource name of the TrainingPipeline that uploaded this Model, if any.
pipelineJob
string
Optional. This field is populated if the model is produced by a pipeline job.
containerSpec
object (ModelContainerSpec)
Input only. The specification of the container that is to be used when deploying this Model. The specification is ingested upon ModelService.UploadModel, and all binaries it contains are copied and stored internally by Vertex AI. Not required for AutoML Models.
artifactUri
string
Immutable. The path to the directory containing the Model artifact and any of its supporting files. Not required for AutoML Models.
supportedDeploymentResourcesTypes[]
enum (DeploymentResourcesType)
Output only. When this Model is deployed, its prediction resources are described by the prediction_resources field of the Endpoint.deployed_models object. Because not all Models support all resource configuration types, the configuration types this Model supports are listed here. If no configuration types are listed, the Model cannot be deployed to an Endpoint and does not support online predictions (PredictionService.Predict or PredictionService.Explain). Such a Model can serve predictions by using a BatchPredictionJob, if it has at least one entry each in supportedInputStorageFormats and supportedOutputStorageFormats.
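The following minimal sketch, assuming a Model resource already decoded from JSON into a Python dict with the field names used in this reference, applies the rule above to classify whether the Model can serve online predictions, batch predictions, or both.

```python
def serving_capabilities(model: dict) -> dict:
    """Classify how a Model can serve predictions, per the field semantics above.

    Online prediction requires at least one entry in
    supportedDeploymentResourcesTypes; batch prediction requires at least one
    entry each in supportedInputStorageFormats and supportedOutputStorageFormats.
    """
    online = bool(model.get("supportedDeploymentResourcesTypes"))
    batch = bool(model.get("supportedInputStorageFormats")) and bool(
        model.get("supportedOutputStorageFormats")
    )
    return {"online_prediction": online, "batch_prediction": batch}


# Hypothetical Model payload, for illustration only:
example = {
    "supportedDeploymentResourcesTypes": ["DEDICATED_RESOURCES"],
    "supportedInputStorageFormats": ["jsonl", "csv"],
    "supportedOutputStorageFormats": ["jsonl"],
}
print(serving_capabilities(example))  # {'online_prediction': True, 'batch_prediction': True}
```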
supportedInputStorageFormats[]
string
Output only. The formats this Model supports in BatchPredictionJob.input_config. If PredictSchemata.instance_schema_uri exists, the instances should be given as per that schema.
The possible formats are:
jsonl
The JSON Lines format, where each instance is a single line. Uses GcsSource.
csv
The CSV format, where each instance is a single comma-separated line. The first line in the file is the header, containing comma-separated field names. Uses GcsSource.
tf-record
The TFRecord format, where each instance is a single record in tfrecord syntax. Uses GcsSource.
tf-record-gzip
Similar to tf-record, but the file is gzipped. Uses GcsSource.
bigquery
Each instance is a single row in BigQuery. Uses BigQuerySource.
file-list
Each line of the file is the location of an instance to process; uses the gcsSource field of the InputConfig object.
If this Model doesn't support any of these formats, it cannot be used with a BatchPredictionJob. However, if it has supportedDeploymentResourcesTypes, it could serve online predictions by using PredictionService.Predict or PredictionService.Explain.
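As a small illustration of the jsonl input format, the sketch below writes one JSON object per line to a local file. The keys inside each instance are placeholders; real instances must follow the model's instance schema (PredictSchemata.instance_schema_uri), which this reference does not define.

```python
import json

# Placeholder instances; the real keys and values must follow the model's
# instance schema (PredictSchemata.instance_schema_uri).
instances = [
    {"feature_a": 1.0, "feature_b": "red"},
    {"feature_a": 2.5, "feature_b": "blue"},
]

# The jsonl input format: each instance is a single line.
with open("instances.jsonl", "w", encoding="utf-8") as f:
    for instance in instances:
        f.write(json.dumps(instance) + "\n")
```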
supportedOutputStorageFormats[]
string
Output only. The formats this Model supports in BatchPredictionJob.output_config. If both PredictSchemata.instance_schema_uri and PredictSchemata.prediction_schema_uri exist, the predictions are returned together with their instances. In other words, the prediction has the original instance data first, followed by the actual prediction content (as per the schema).
The possible formats are:
jsonl
The JSON Lines format, where each prediction is a single line. Uses GcsDestination.
csv
The CSV format, where each prediction is a single comma-separated line. The first line in the file is the header, containing comma-separated field names. Uses GcsDestination.
bigquery
Each prediction is a single row in a BigQuery table. Uses BigQueryDestination.
If this Model doesn't support any of these formats, it cannot be used with a BatchPredictionJob. However, if it has supportedDeploymentResourcesTypes, it could serve online predictions by using PredictionService.Predict or PredictionService.Explain.
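The helper below is a minimal sketch that relies only on the supportedOutputStorageFormats list documented above: it picks the first output format the caller prefers that the Model also supports, a common step before configuring a BatchPredictionJob.

```python
from typing import Optional, Sequence


def pick_output_format(model: dict, preferred: Sequence[str]) -> Optional[str]:
    """Return the first format in `preferred` that the Model lists in
    supportedOutputStorageFormats, or None if there is no overlap."""
    supported = set(model.get("supportedOutputStorageFormats", []))
    for fmt in preferred:
        if fmt in supported:
            return fmt
    return None


# Example: prefer BigQuery output, fall back to JSON Lines.
model = {"supportedOutputStorageFormats": ["jsonl", "csv"]}
print(pick_output_format(model, ["bigquery", "jsonl"]))  # "jsonl"
```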
createTime
string (Timestamp format)
Output only. Timestamp when this Model was uploaded into Vertex AI.
A timestamp in RFC3339 UTC "Zulu" format, with nanosecond resolution and up to nine fractional digits. Examples: "2014-10-02T15:01:23Z" and "2014-10-02T15:01:23.045123456Z".
updateTime
string (Timestamp format)
Output only. Timestamp when this Model was most recently updated.
A timestamp in RFC3339 UTC "Zulu" format, with nanosecond resolution and up to nine fractional digits. Examples: "2014-10-02T15:01:23Z" and "2014-10-02T15:01:23.045123456Z".
deployedModels[]
object (DeployedModelRef)
Output only. The pointers to DeployedModels created from this Model. Note that a Model could have been deployed to Endpoints in different Locations.
explanationSpec
object (ExplanationSpec)
The default explanation specification for this Model.
The Model can be used for requesting explanation after being deployed if it is populated. The Model can be used for batch explanation if it is populated.
All fields of the explanationSpec can be overridden by the explanationSpec of DeployModelRequest.deployed_model, or the explanationSpec of BatchPredictionJob.
If the default explanation specification is not set for this Model, this Model can still be used for requesting explanation by setting the explanationSpec of DeployModelRequest.deployed_model, and for batch explanation by setting the explanationSpec of BatchPredictionJob.
etag
string
Used to perform consistent read-modify-write updates. If not set, a blind "overwrite" update happens.
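A rough read-modify-write sketch using the etag. It assumes the regional REST endpoint https://{location}-aiplatform.googleapis.com/v1, OAuth 2.0 bearer-token auth, and the updateMask query parameter of the patch method ("Updates a Model" in the Methods list below); none of those details are defined in this section, so treat them as assumptions. Identifiers are placeholders.

```python
import requests

# Assumptions (not defined in this section): regional endpoint, bearer token,
# and the updateMask parameter of models.patch ("Updates a Model" below).
ACCESS_TOKEN = "ya29.placeholder-token"
LOCATION = "us-central1"
MODEL_NAME = f"projects/my-project/locations/{LOCATION}/models/my-model"
BASE = f"https://{LOCATION}-aiplatform.googleapis.com/v1"
HEADERS = {"Authorization": f"Bearer {ACCESS_TOKEN}"}

# 1. Read the Model, including its current etag.
model = requests.get(f"{BASE}/{MODEL_NAME}", headers=HEADERS).json()

# 2. Modify it locally.
new_description = "Updated description"

# 3. Write back with the etag from step 1, so the update fails rather than
#    blindly overwriting a concurrent change.
resp = requests.patch(
    f"{BASE}/{MODEL_NAME}",
    headers=HEADERS,
    params={"updateMask": "description"},
    json={"description": new_description, "etag": model["etag"]},
)
resp.raise_for_status()
```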
labels
map (key: string, value: string)
The labels with user-defined metadata to organize your Models.
Label keys and values can be no longer than 64 characters (Unicode code points), and can contain only lowercase letters, numeric characters, underscores, and dashes. International characters are allowed.
See https://goo.gl/xmQnxf for more information and examples of labels.
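A minimal local sanity check for the label constraints stated above (at most 64 Unicode code points; lowercase letters, digits, underscores, and dashes). It is only a rough approximation of the service-side validation.

```python
def label_issues(labels: dict) -> list:
    """Return a list of problems with the given labels, per the constraints
    stated above. A rough local check, not the service's authoritative one."""
    problems = []
    for key, value in labels.items():
        for kind, text in (("key", key), ("value", value)):
            if len(text) > 64:
                problems.append(f"{kind} {text!r} is longer than 64 characters")
            if any(ch.isupper() for ch in text):
                problems.append(f"{kind} {text!r} contains uppercase characters")
            if any(not (ch.isalnum() or ch in "_-") for ch in text):
                problems.append(f"{kind} {text!r} contains disallowed punctuation")
    return problems


print(label_issues({"env": "prod", "Team": "ml"}))
# ["key 'Team' contains uppercase characters"]
```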
dataStats
object (DataStats)
Stats of data used for training or evaluating the Model. Only populated when the Model is trained by a TrainingPipeline with data_input_config.
encryptionSpec
object (EncryptionSpec)
Customer-managed encryption key spec for a Model. If set, this Model and all sub-resources of this Model will be secured by this key.
modelSourceInfo
object (ModelSourceInfo)
Output only. Source of a model. It can be an AutoML training pipeline, a custom training pipeline, BigQuery ML, or a model saved and tuned from Genie or Model Garden.
originalModelInfo
object (OriginalModelInfo)
Output only. If this Model is a copy of another Model, this contains info about the original.
metadataArtifact
string
Output only. The resource name of the Artifact that was created in MetadataStore when creating the Model. The Artifact resource name pattern is projects/{project}/locations/{location}/metadataStores/{metadataStore}/artifacts/{artifact}.
baseModelSource
object (BaseModelSource)
Optional. User input field to specify the base model source. Currently it only supports specifying Model Garden models and Genie models.
satisfiesPzs
boolean
Output only. Reserved for future use.
satisfiesPzi
boolean
Output only. Reserved for future use.
JSON representation:
{
  "name": string,
  "versionId": string,
  "versionAliases": [ string ],
  "versionCreateTime": string,
  "versionUpdateTime": string,
  "displayName": string,
  "description": string,
  "versionDescription": string,
  "predictSchemata": { object (PredictSchemata) },
  "metadataSchemaUri": string,
  "metadata": value,
  "supportedExportFormats": [ { object (ExportFormat) } ],
  "trainingPipeline": string,
  "pipelineJob": string,
  "containerSpec": { object (ModelContainerSpec) },
  "artifactUri": string,
  "supportedDeploymentResourcesTypes": [ enum (DeploymentResourcesType) ],
  "supportedInputStorageFormats": [ string ],
  "supportedOutputStorageFormats": [ string ],
  "createTime": string,
  "updateTime": string,
  "deployedModels": [ { object (DeployedModelRef) } ],
  "explanationSpec": { object (ExplanationSpec) },
  "etag": string,
  "labels": { string: string, ... },
  "dataStats": { object (DataStats) },
  "encryptionSpec": { object (EncryptionSpec) },
  "modelSourceInfo": { object (ModelSourceInfo) },
  "originalModelInfo": { object (OriginalModelInfo) },
  "metadataArtifact": string,
  "baseModelSource": { object (BaseModelSource) },
  "satisfiesPzs": boolean,
  "satisfiesPzi": boolean
}
ExportFormat
Represents an export format supported by the Model. All formats export to Google Cloud Storage.
id
string
Output only. The id of the export format. The possible format IDs are:
tflite
Used for Android mobile devices.
edgetpu-tflite
Used for Edge TPU devices.
tf-saved-model
A TensorFlow model in SavedModel format.
tf-js
A TensorFlow.js model that can be used in the browser and in Node.js using JavaScript.
core-ml
Used for iOS mobile devices.
custom-trained
A Model that was uploaded or trained by custom code.
exportableContents[]
enum (ExportableContent)
Output only. The content of this Model that may be exported.
JSON representation:
{
  "id": string,
  "exportableContents": [ enum (ExportableContent) ]
}
ExportableContent
The Model content that can be exported.
| Enums | |
|---|---|
| EXPORTABLE_CONTENT_UNSPECIFIED | Should not be used. |
| ARTIFACT | Model artifact and any of its supported files. Will be exported to the location specified by the artifactDestination field of the ExportModelRequest.output_config object. |
| IMAGE | The container image that is to be used when deploying this Model. Will be exported to the location specified by the imageDestination field of the ExportModelRequest.output_config object. |
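As a hedged sketch of how the ExportableContent values might drive an export request, the helper below builds an output config from an ExportFormat entry. The artifactDestination and imageDestination field names come from the descriptions above; the exportFormatId key and the inner outputUriPrefix / outputUri shapes are assumptions about ExportModelRequest that this section does not define.

```python
# Hypothetical helper: build the outputConfig of an ExportModelRequest from an
# ExportFormat entry. exportFormatId, outputUriPrefix, and outputUri are
# assumptions not defined in this section.
def build_export_output_config(export_format: dict, gcs_prefix: str, image_uri: str) -> dict:
    config = {"exportFormatId": export_format["id"]}
    contents = export_format.get("exportableContents", [])
    if "ARTIFACT" in contents:
        # Exported to the location given by artifactDestination.
        config["artifactDestination"] = {"outputUriPrefix": gcs_prefix}
    if "IMAGE" in contents:
        # Exported to the location given by imageDestination.
        config["imageDestination"] = {"outputUri": image_uri}
    return {"outputConfig": config}


fmt = {"id": "tf-saved-model", "exportableContents": ["ARTIFACT"]}
print(build_export_output_config(fmt, "gs://my-bucket/exports/", "gcr.io/my-project/my-model"))
```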
DeploymentResourcesType
Identifies a type of Model's prediction resources.
| Enums | |
|---|---|
| DEPLOYMENT_RESOURCES_TYPE_UNSPECIFIED | Should not be used. |
| DEDICATED_RESOURCES | Resources that are dedicated to the DeployedModel and that need a higher degree of manual configuration. |
| AUTOMATIC_RESOURCES | Resources that are, to a large degree, decided by Vertex AI and require only modest additional configuration. |
| SHARED_RESOURCES | Resources that can be shared by multiple DeployedModels. A pre-configured DeploymentResourcePool is required. |
DeployedModelRef
Points to a DeployedModel.
endpoint
string
Immutable. A resource name of an Endpoint.
deployedModelId
string
Immutable. An id of a DeployedModel in the above Endpoint.
JSON representation:
{
  "endpoint": string,
  "deployedModelId": string
}
DataStats
Stats of data used to train or evaluate the Model.
trainingDataItemsCount
string (int64 format)
Number of DataItems that were used for training this Model.
validationDataItemsCount
string (int64 format)
Number of DataItems that were used for validating this Model during training.
testDataItemsCount
string (int64 format)
Number of DataItems that were used for evaluating this Model. If the Model is evaluated multiple times, this will be the number of test DataItems used by the first evaluation. If the Model is not evaluated, the number is 0.
trainingAnnotationsCount
string (int64 format)
Number of Annotations that are used for training this Model.
validationAnnotationsCount
string (int64 format)
Number of Annotations that are used for validating this Model during training.
testAnnotationsCount
string (int64 format)
Number of Annotations that are used for evaluating this Model. If the Model is evaluated multiple times, this will be the number of test Annotations used by the first evaluation. If the Model is not evaluated, the number is 0.
JSON representation:
{
  "trainingDataItemsCount": string,
  "validationDataItemsCount": string,
  "testDataItemsCount": string,
  "trainingAnnotationsCount": string,
  "validationAnnotationsCount": string,
  "testAnnotationsCount": string
}
ModelSourceInfo
Detailed description of the source information of the model.
sourceType
enum (ModelSourceType)
Type of the model source.
JSON representation:
{
  "sourceType": enum (ModelSourceType)
}
ModelSourceType
Source of the model. Different from the objective field, this ModelSourceType enum indicates the source from which the model was accessed or obtained, whereas the objective indicates the overall aim or function of the model.
| Enums | |
|---|---|
| MODEL_SOURCE_TYPE_UNSPECIFIED | Should not be used. |
| AUTOML | The Model is uploaded by the AutoML training pipeline. |
| CUSTOM | The Model is uploaded by the user or by a custom training pipeline. |
| BQML | The Model is registered and synced from BigQuery ML. |
| MODEL_GARDEN | The Model is saved or tuned from Model Garden. |
| CUSTOM_TEXT_EMBEDDING | The Model is uploaded by the text embedding fine-tuning pipeline. |
| MARKETPLACE | The Model is saved or tuned from Marketplace. |
OriginalModelInfo
Contains information about the original Model if this Model is a copy.
model
string
Output only. The resource name of the Model this Model is a copy of, including the revision. Format: projects/{project}/locations/{location}/models/{modelId}@{versionId}
JSON representation:
{
  "model": string
}
BaseModelSource
User input field to specify the base model source. Currently it only supports specifying Model Garden models and Genie models.
source
Union type
source can be only one of the following:
modelGardenSource
object (ModelGardenSource)
Source information of Model Garden models.
genieSource
object (GenieSource)
Information about the base model of Genie models.
JSON representation:
{
  // Union type source: exactly one of the following.
  "modelGardenSource": { object (ModelGardenSource) },
  "genieSource": { object (GenieSource) }
}
ModelGardenSource
Contains information about the source of the models generated from Model Garden.
publicModelName
string
Required. The Model Garden source model resource name.
JSON representation:
{
  "publicModelName": string
}
GenieSource
Contains information about the source of the models generated from Generative AI Studio.
baseModelUri
string
Required. The public base model URI.
JSON representation:
{
  "baseModelUri": string
}
| Methods | |
|---|---|
| copy | Copies an already existing Vertex AI Model into the specified Location. |
| delete | Deletes a Model. |
| deleteVersion | Deletes a Model version. |
| export | Exports a trained, exportable Model to a location specified by the user. |
| get | Gets a Model. |
| getIamPolicy | Gets the access control policy for a resource. |
| list | Lists Models in a Location. |
| listVersions | Lists versions of the specified model. |
| mergeVersionAliases | Merges a set of aliases for a Model version. |
| patch | Updates a Model. |
| setIamPolicy | Sets the access control policy on the specified resource. |
| testIamPermissions | Returns permissions that a caller has on the specified resource. |
| updateExplanationDataset | Incrementally updates the dataset used for an examples model. |
| upload | Uploads a Model artifact into Vertex AI. |
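To tie the resource fields to the methods above, here is a hedged sketch of calling the list ("Lists Models in a Location") and get ("Gets a Model") methods over REST. The regional endpoint URL, bearer-token auth, and the "models" key of the list response are assumptions about the Vertex AI REST API rather than details stated in this section; identifiers are placeholders.

```python
import requests

# Assumptions (not stated in this section): the regional REST endpoint
# https://{location}-aiplatform.googleapis.com/v1 and OAuth 2.0 bearer auth.
ACCESS_TOKEN = "ya29.placeholder-token"
PROJECT, LOCATION = "my-project", "us-central1"
BASE = f"https://{LOCATION}-aiplatform.googleapis.com/v1"
HEADERS = {"Authorization": f"Bearer {ACCESS_TOKEN}"}

# "Lists Models in a Location."
parent = f"projects/{PROJECT}/locations/{LOCATION}"
listing = requests.get(f"{BASE}/{parent}/models", headers=HEADERS).json()
for m in listing.get("models", []):  # the "models" response key is an assumption
    print(m["name"], m.get("displayName"))

# "Gets a Model", optionally pinned to a version via @alias or @versionId.
name = f"{parent}/models/my-model@default"
model = requests.get(f"{BASE}/{name}", headers=HEADERS).json()
print(model.get("versionId"), model.get("supportedDeploymentResourcesTypes"))
```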