Deployed a gemma-7b-it model on Vertex AI Model Garden using the "Deploy" button from the Gemma card. No additional tuning was done.
I have an instance running on a g2-standard-12 machine with an L4 GPU. It is visible in the Online Prediction section of my Cloud Console.
I am able to reach the endpoint without any issues.
Unable to find any good documentation on what needs to be sent to the model and what comes back, I used the "Model Garden Gemma Deployment on Vertex" notebook to get an idea. It does provide an example of what to send as the prompt:
However, it does not indicate what to expect in the reply. It wasn't clear that the reply echoes the original prompt as part of the response string, and that this needs to be parsed out:
{
"predictions": [
"Prompt:\nWhat is a car?\nOutput:\nA car is a motor vehicle that is propelled by gasoline. It has four wheels, a steering wheel, and a seat."
],
"deployedModelId": "xxx",
"model": "projects/111/locations/us-central1/models/gemma-7b-it-google",
"modelDisplayName": "gemma-7b-it-google",
"modelVersionId": "1"
}
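For anyone running into the same issue, here is a minimal sketch of stripping the echoed prompt from a prediction string. The `Prompt:`/`Output:` markers are taken from the example response above; the helper name `extract_output` is just for illustration:

```python
def extract_output(prediction: str) -> str:
    """Strip the echoed 'Prompt:\\n...\\nOutput:\\n' prefix from a prediction string."""
    marker = "\nOutput:\n"
    idx = prediction.find(marker)
    if idx == -1:
        # No marker present: the prediction is already the raw model output.
        return prediction
    return prediction[idx + len(marker):]

prediction = (
    "Prompt:\nWhat is a car?\nOutput:\n"
    "A car is a motor vehicle that is propelled by gasoline."
)
print(extract_output(prediction))
```

Note the `find`-based split keeps any later occurrences of the marker inside the model's answer intact, since only the first occurrence is treated as the template boundary.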
The documentation should make clear what the output will be.
Thank you @afirstenberg for this feedback. We will work on clarifying the prediction response format. The example response above includes the original prompt ("Prompt: ... Output: ..."). You can control whether the response goes through this extra formatting by setting raw_response to True or False in the request.
We have updated the notebook to clarify the prediction response format with the following instructions: "Set raw_response to True to obtain the raw model output. Set raw_response to False to apply additional formatting in the structure of Prompt:\n{prompt.strip()}\nOutput:\n{output}."
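The two response shapes can be illustrated with a small sketch that mimics the formatting described above; the function name `format_prediction` is hypothetical, and only the template string is taken from the notebook's instructions:

```python
def format_prediction(prompt: str, output: str, raw_response: bool) -> str:
    """Mimic the server-side behavior described in the notebook:
    raw_response=True returns the model output as-is;
    raw_response=False wraps it in the Prompt:/Output: template."""
    if raw_response:
        return output
    return f"Prompt:\n{prompt.strip()}\nOutput:\n{output}"

# With raw_response=False, the client must parse the answer back out;
# with raw_response=True, the prediction string is the answer itself.
print(format_prediction("What is a car?", "A car is a motor vehicle.", raw_response=False))
```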
Marking the issue as closed since the question has been addressed in the notebook. Please reopen if there are any further questions. Thank you!
Referenced notebook:
vertex-ai-samples/notebooks/community/model_garden/model_garden_gemma_deployment_on_vertex.ipynb
Line 621 in b37ed6e