Google - Professional Cloud Architect.v2022 03 05.q87


Google.Professional-Cloud-Architect.v2022-03-05.q87

Exam Code: Professional-Cloud-Architect


Exam Name: Google Certified Professional - Cloud Architect (GCP)
Certification Provider: Google
Free Question Number: 87
Version: v2022-03-05
https://www.freecram.net/torrent/Google.Professional-Cloud-Architect.v2022-03-05.q87.html

NEW QUESTION: 1
For this question, refer to the Helicopter Racing League (HRL) case study. Recently HRL started a new
regional racing league in Cape Town, South Africa. In an effort to give customers in Cape Town a better
user experience, HRL has partnered with the Content Delivery Network provider, Fastly. HRL needs to
allow traffic coming from all of the Fastly IP address ranges into their Virtual Private Cloud network (VPC
network). You are a member of the HRL security team and you need to configure the update that will
allow only the Fastly IP address ranges through the External HTTP(S) load balancer. Which command
should you use?
A. gcloud compute firewall-rules update hlr-policy \
--priority 1000 \
--target-tags sourceiplist-fastly \
--allow tcp:443
B. gcloud compute security-policies rules update 1000 \
--security-policy hlr-policy \
--expression "evaluatePreconfiguredExpr('sourceiplist-fastly')" \
--action "allow"
C. gcloud compute firewall-rules update sourceiplist-fastly \
--priority 1000 \
--allow tcp:443
D. gcloud compute priority-policies rules update 1000 \
--security-policy from-fastly \
--src-ip-ranges \
--action "allow"
Answer: (SHOW ANSWER)

NEW QUESTION: 2
Your development teams release new versions of games running on Google Kubernetes Engine (GKE)
daily.
You want to create service level indicators (SLIs) to evaluate the quality of the new versions from the
user's perspective. What should you do?
A. Create Request Latency and Error Rate as service level indicators.
B. Create GKE CPU Utilization and Memory Utilization as service level indicators.
C. Create CPU Utilization and Request Latency as service level indicators.
D. Create Server Uptime and Error Rate as service level indicators.
Answer: A

NEW QUESTION: 3
For this question, refer to the Helicopter Racing League (HRL) case study. HRL is looking for a cost-
effective approach for storing their race data such as telemetry. They want to keep all historical records,
train models using only the previous season's data, and plan for data growth in terms of volume and
information collected.
You need to propose a data solution. Considering HRL business requirements and the goals expressed
by CEO S. Hawke, what should you do?
A. Use Cloud SQL for its ability to automatically manage storage increases and compatibility with
MySQL. Use separate database instances for each season.
B. Use Cloud Spanner for its scalability and ability to version schemas with zero downtime. Split race
data using season as a primary key.
C. Use BigQuery for its scalability and ability to add columns to a schema. Partition race data based on
season.
D. Use Firestore for its scalable and flexible document-based database. Use collections to aggregate
race data by season and event.
Answer: (SHOW ANSWER)

NEW QUESTION: 4
You need to design a solution for global load balancing based on the URL path being requested. You
need to ensure operations reliability and end-to-end in-transit encryption based on Google best
practices.
What should you do?
A. Create a cross-region load balancer with URL Maps.
B. Create an HTTPS load balancer with URL maps.
C. Create appropriate instance groups and instances. Configure SSL proxy load balancing.
D. Create a global forwarding rule. Configure SSL proxy balancing.
Answer: (SHOW ANSWER)
Reference https://cloud.google.com/load-balancing/docs/https/url-map
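A URL map routes each request to a backend service based on its path. The following toy sketch (backend names are hypothetical, and this is plain Python, not the load balancer itself) shows the longest-prefix matching idea behind URL maps.

```python
# Illustrative sketch of what an HTTPS load balancer's URL map does:
# pick a backend service from the URL path prefix.
URL_MAP = {
    "/video/": "video-backend",
    "/static/": "static-backend",
}
DEFAULT_BACKEND = "web-backend"

def route(path: str) -> str:
    """Return the backend service for a request path (longest prefix wins)."""
    matches = [p for p in URL_MAP if path.startswith(p)]
    if not matches:
        return DEFAULT_BACKEND
    return URL_MAP[max(matches, key=len)]

print(route("/video/hd/clip.mp4"))  # video-backend
print(route("/index.html"))         # web-backend
```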

NEW QUESTION: 5
For this question, refer to the Dress4Win case study.
Dress4Win has configured a new uptime check with Google Stackdriver for several of their legacy
services. The Stackdriver dashboard is not reporting the services as healthy. What should they do?
A. In the Cloud Platform Console download the list of the uptime servers' IP addresses and create an
inbound firewall rule
B. Configure their load balancer to pass through the User-Agent HTTP header when the value matches
GoogleStackdriverMonitoring-UptimeChecks (https://cloud.google.com/monitoring)
C. Install the Stackdriver agent on all of the legacy web servers.
D. Configure their legacy web servers to allow requests that contain a User-Agent HTTP header when the
value matches GoogleStackdriverMonitoring-UptimeChecks (https://cloud.google.com/monitoring)
Answer: (SHOW ANSWER)

NEW QUESTION: 6
You are deploying an application on App Engine that needs to integrate with an on-premises database.
For security purposes, your on-premises database must not be accessible through the public Internet.
What should you do?
A. Deploy your application on App Engine standard environment and use App Engine firewall rules to
limit access to the open on-premises database.
B. Deploy your application on App Engine standard environment and use Cloud VPN to limit access to
the on-premises database.
C. Deploy your application on App Engine flexible environment and use App Engine firewall rules to limit
access to the on-premises database.
D. Deploy your application on App Engine flexible environment and use Cloud VPN to limit access to the
on-premises database.
Answer: (SHOW ANSWER)
https://cloud.google.com/appengine/docs/flexible/python/using-third-party-databases

NEW QUESTION: 7
Your web application must comply with the requirements of the European Union's General Data
Protection Regulation (GDPR). You are responsible for the technical architecture of your web
application. What should you do?
A. Ensure that your web application only uses native features and services of Google Cloud Platform,
because Google already has various certifications and provides "pass-on" compliance when you use
native features.
B. Enable the relevant GDPR compliance setting within the GCP Console for each of the services in use
within your application.
C. Ensure that Cloud Security Scanner is part of your test planning strategy in order to pick up any
compliance gaps.
D. Define a design for the security of data in your web application that meets GDPR requirements.
Answer: (SHOW ANSWER)
https://cloud.google.com/security/gdpr/?tab=tab4

NEW QUESTION: 8
For this question, refer to the JencoMart case study.
JencoMart wants to move their User Profiles database to Google Cloud Platform. Which Google
Database should they use?
A. Cloud Spanner
B. Google BigQuery
C. Google Cloud SQL
D. Google Cloud Datastore
Answer: (SHOW ANSWER)
https://cloud.google.com/datastore/docs/concepts/overview
Common workloads for Google Cloud Datastore:
User profiles
Product catalogs
Game state
References: https://cloud.google.com/storage-options/
https://cloud.google.com/datastore/docs/concepts/overview

NEW QUESTION: 9
You want your Google Kubernetes Engine cluster to automatically add or remove nodes based on
CPU load. What should you do?
A. Create a deployment and set the maxUnavailable and maxSurge properties. Enable the Cluster
Autoscaler using the gcloud command.
B. Configure a HorizontalPodAutoscaler with a target CPU usage. Enable autoscaling on the managed
instance group for the cluster using the gcloud command.
C. Create a deployment and set the maxUnavailable and maxSurge properties. Enable autoscaling on
the cluster managed instance group from the GCP Console.
D. Configure a HorizontalPodAutoscaler with a target CPU usage. Enable the Cluster Autoscaler from
the GCP Console.
Answer: (SHOW ANSWER)
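The HorizontalPodAutoscaler mentioned in the options is configured declaratively. A minimal sketch of an HPA that targets CPU usage (names such as game-server are hypothetical):

```yaml
apiVersion: autoscaling/v1
kind: HorizontalPodAutoscaler
metadata:
  name: game-server-hpa   # hypothetical name
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: game-server     # hypothetical Deployment
  minReplicas: 2
  maxReplicas: 10
  targetCPUUtilizationPercentage: 60
```

The node-level half, the Cluster Autoscaler, is then enabled on the cluster itself, for example with `gcloud container clusters update CLUSTER --enable-autoscaling --min-nodes 1 --max-nodes 5`, so that nodes are added or removed as pod demand changes.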

NEW QUESTION: 10
A. Create a tokenizer service and store only tokenized data.
B. Create separate projects that only process credit card data.
C. Create separate subnetworks and isolate the components that process credit card data.
D. Streamline the audit discovery phase by labeling all of the virtual machines (VMs) that process PCI
data.
E. Enable Logging export to Google BigQuery and use ACLs and views to scope the data shared with
the auditor.
Answer: A
https://cloud.google.com/solutions/pci-dss-compliance-in-gcp
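The tokenization idea behind answer A can be sketched in a few lines: the card number is swapped for an opaque token at the edge, so downstream systems never handle raw PAN data. This is an illustration only; a real tokenizer service would keep the mapping in a hardened, audited vault, not an in-memory dict.

```python
# Sketch of a tokenizer service: only the vault ever sees raw card data.
import secrets

_vault = {}  # token -> card number; the only place raw data lives

def tokenize(card_number: str) -> str:
    """Issue an opaque token that stands in for the card number."""
    token = "tok_" + secrets.token_hex(8)
    _vault[token] = card_number
    return token

def detokenize(token: str) -> str:
    """Recover the card number; only the tokenizer service may call this."""
    return _vault[token]

t = tokenize("4111111111111111")
print(t.startswith("tok_"))          # True
print(detokenize(t) == "4111111111111111")  # True
```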

NEW QUESTION: 11
You need to deploy an application on Google Cloud that must run on a Debian Linux environment. The
application requires extensive configuration in order to operate correctly. You want to ensure that you
can install Debian distribution updates with minimal manual intervention whenever they become
available. What should you do?
A. Create a Compute Engine instance template using the most recent Debian image. Create an instance
from this template, and install and configure the application as part of the startup script. Repeat this
process whenever a new Google-managed Debian image becomes available.
B. Create a Debian-based Compute Engine instance, install and configure the application, and use OS
patch management to install available updates.
C. Create an instance with the latest available Debian image. Connect to the instance via SSH, and
install and configure the application on the instance. Repeat this process whenever a new Google-
managed Debian image becomes available.
D. Create a Docker container with Debian as the base image. Install and configure the application as
part of the Docker image creation process. Host the container on Google Kubernetes Engine and restart
the container whenever a new update is available.
Answer: (SHOW ANSWER)

NEW QUESTION: 12
The development team has provided you with a Kubernetes Deployment file. You have no infrastructure
yet and need to deploy the application. What should you do?
A. Use gcloud to create a Kubernetes cluster. Use Deployment Manager to create the deployment.
B. Use gcloud to create a Kubernetes cluster. Use kubectl to create the deployment.
C. Use kubectl to create a Kubernetes cluster. Use Deployment Manager to create the deployment.
D. Use kubectl to create a Kubernetes cluster. Use kubectl to create the deployment.
Answer: (SHOW ANSWER)
https://cloud.google.com/kubernetes-engine/docs/how-to/creating-a-cluster

NEW QUESTION: 13
You are developing your microservices application on Google Kubernetes Engine. During testing, you
want to validate the behavior of your application in case a specific microservice should suddenly crash.
What should you do?
A. Add a taint to one of the nodes of the Kubernetes cluster. For the specific microservice, configure a
pod anti-affinity label that has the name of the tainted node as a value.
B. Use Istio's fault injection on the particular microservice whose faulty behavior you want to simulate.
C. Destroy one of the nodes of the Kubernetes cluster to observe the behavior.
D. Configure Istio's traffic management features to steer the traffic away from a crashing microservice.
Answer: (SHOW ANSWER)
A microservice runs as Pods, and Pods are scheduled across the cluster's nodes (virtual machines).
Because replicas of a microservice are typically spread over several nodes, destroying a single node
does not reliably mimic that microservice crashing; Istio's fault injection targets the service itself.
link: https://istio.io/latest/docs/tasks/traffic-management/fault-injection/
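Istio's fault injection is configured declaratively on a VirtualService. A sketch that aborts all requests to one microservice with HTTP 500 (the `ratings` host is the example service used in the Istio docs; substitute your own):

```yaml
apiVersion: networking.istio.io/v1beta1
kind: VirtualService
metadata:
  name: ratings-abort   # hypothetical name
spec:
  hosts:
  - ratings             # the microservice whose crash you want to simulate
  http:
  - fault:
      abort:
        percentage:
          value: 100.0  # fail every request
        httpStatus: 500
    route:
    - destination:
        host: ratings
```

Lowering `percentage.value` lets you simulate partial failures instead of a full crash.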

NEW QUESTION: 14
For this question, refer to the Mountkirk Games case study.
Mountkirk Games wants to set up a real-time analytics platform for their new game. The new platform
must meet their technical requirements. Which combination of Google technologies will meet all of their
requirements?
A. Container Engine, Cloud Pub/Sub, and Cloud SQL
B. Cloud Dataflow, Cloud Storage, Cloud Pub/Sub, and BigQuery
C. Cloud SQL, Cloud Storage, Cloud Pub/Sub, and Cloud Dataflow
D. Cloud Dataproc, Cloud Pub/Sub, Cloud SQL, and Cloud Dataflow
E. Cloud Pub/Sub, Compute Engine, Cloud Storage, and Cloud Dataproc
Answer: (SHOW ANSWER)
A real-time platform requires a streaming/messaging service for ingestion (Cloud Pub/Sub) and BigQuery for analytics.
Ingest millions of streaming events per second from anywhere in the world with Cloud Pub/Sub, powered
by Google's unique, high-speed private network. Process the streams with Cloud Dataflow to ensure
reliable, exactly-once, low-latency data transformation. Stream the transformed data into BigQuery, the
cloud-native data warehousing service, for immediate analysis via SQL or popular visualization tools.
From scenario: They plan to deploy the game's backend on Google Compute Engine so they can
capture streaming metrics, run intensive analytics.
Requirements for Game Analytics Platform
Dynamically scale up or down based on game activity
Process incoming data on the fly directly from the game servers
Process data that arrives late because of slow mobile networks
Allow SQL queries to access at least 10 TB of historical data
Process files that are regularly uploaded by users' mobile devices
Use only fully managed services
References: https://cloud.google.com/solutions/big-data/stream-analytics/

Company Overview
TerramEarth manufactures heavy equipment for the mining and agricultural industries: About 80% of
their business is from mining and 20% from agriculture. They currently have over 500 dealers and
service centers in 100 countries. Their mission is to build products that make their customers more
productive.
Company Background
TerramEarth formed in 1946, when several small, family owned companies combined to retool after
World War II. The company cares about their employees and customers and considers them to be
extended members of their family.
TerramEarth is proud of their ability to innovate on their core products and find new markets as their
customers' needs change. For the past 20 years trends in the industry have been largely toward
increasing productivity by using larger vehicles with a human operator.

NEW QUESTION: 15
Your operations team has asked you to help diagnose a performance issue in a production application
that runs on Compute Engine. The application is dropping requests that reach it when under heavy load.
The process list for affected instances shows a single application process that is consuming all available
CPU, and autoscaling has reached the upper limit of instances. There is no abnormal load on any other
related systems, including the database. You want to allow production traffic to be served again as
quickly as possible. Which action should you recommend?
A. Increase the maximum number of instances in the autoscaling group.
B. Restart the affected instances on a staggered schedule.
C. SSH to each instance and restart the application process.
D. Change the autoscaling metric to agent.googleapis.com/memory/percent_used.
Answer: (SHOW ANSWER)

NEW QUESTION: 16
Your company wants you to build a highly reliable web application with a few public APIs as the
backend. You don't expect a lot of user traffic, but traffic could spike occasionally. You want to leverage
Cloud Load Balancing, and the solution must be cost-effective for users. What should you do?
A. Store static content such as HTML and images in Cloud CDN. Host the APIs on App Engine and store
the user data in Cloud SQL.
B. Store static content such as HTML and images in a Cloud Storage bucket. Host the APIs on a zonal
Google Kubernetes Engine cluster with worker nodes in multiple zones, and save the user data in Cloud
Spanner.
C. Store static content such as HTML and images in Cloud CDN. Use Cloud Run to host the APIs and
save the user data in Cloud SQL.
D. Store static content such as HTML and images in a Cloud Storage bucket. Use Cloud Functions to
host the APIs and save the user data in Firestore.
Answer: (SHOW ANSWER)
https://cloud.google.com/load-balancing/docs/https/setting-up-https-serverless#gcloud:-cloud-functions
https://cloud.google.com/blog/products/networking/better-load-balancing-for-app-engine-cloud-run-and-functions

NEW QUESTION: 17
Your customer is moving their corporate applications to Google Cloud Platform. The security team wants
detailed visibility of all projects in the organization. You provision the Google Cloud Resource Manager
and set up yourself as the org admin. What Google Cloud Identity and Access Management (Cloud IAM)
roles should you give to the security team?
A. Org viewer, project owner
B. Org viewer, project viewer
C. Org admin, project browser
D. Project owner, network admin
Answer: (SHOW ANSWER)
https://cloud.google.com/iam/docs/using-iam-securely

NEW QUESTION: 18
Your company wants to try out the cloud with low risk. They want to archive approximately 100 TB of
their log data to the cloud and test the analytics features available to them there, while also retaining that
data as a long-term disaster recovery backup. Which two steps should they take? Choose 2 answers
A. Upload log files into Google Cloud Storage.
B. Load logs into Google Cloud SQL.
C. Load logs into Google BigQuery.
D. Insert logs into Google Cloud Bigtable.
E. Import logs into Google Stackdriver.
Answer: A,C

NEW QUESTION: 19
Your company pushes batches of sensitive transaction data from its application server VMs to Cloud
Pub/Sub for processing and storage. What is the Google-recommended way for your application to
authenticate to the required Google Cloud services?
A. Ensure that VM service accounts are granted the appropriate Cloud Pub/Sub IAM roles.
B. Create a gateway to Cloud Pub/Sub using a Cloud Function, and grant the Cloud Function service
account the appropriate Cloud Pub/Sub IAM roles.
C. Ensure that VM service accounts do not have access to Cloud Pub/Sub, and use VM access scopes
to grant the appropriate Cloud Pub/Sub IAM roles.
D. Generate an OAuth2 access token for accessing Cloud Pub/Sub, encrypt it, and store it in Cloud
Storage for access from each VM.
Answer: (SHOW ANSWER)

NEW QUESTION: 20
A. Create a distribution list of all customers to inform them of an upcoming backward-incompatible
change at least one month before replacing the old API with the new API.
B. Create an automated process to generate API documentation, and update the public API
documentation as part of the CI/CD process when deploying an update to the API.
C. Use a versioning strategy for the APIs that increases the version number on every backward-
incompatible change.
D. Use a versioning strategy for the APIs that adds the suffix "DEPRECATED" to the current API version
number on every backward-incompatible change. Use the current version number for the new API.
Answer: C
https://cloud.google.com/apis/design/versioning
All Google API interfaces must provide a major version number, which is encoded at the end of the
protobuf package, and included as the first part of the URI path for REST APIs. If an API introduces a
breaking change, such as removing or renaming a field, it must increment its API version number to
ensure that existing user code does not suddenly break.
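The versioning strategy in answer C can be made concrete with a tiny sketch: the major version lives in the URI path, so a backward-incompatible change ships as /v2/ while /v1/ keeps serving old clients unchanged. Handler names and fields here are hypothetical.

```python
# Sketch of URI-path API versioning: a breaking change gets a new
# major version instead of silently altering the old one.
HANDLERS = {
    ("v1", "user"): lambda uid: {"id": uid, "name": "Alice"},
    # v2 renames "name" -> "full_name": a breaking change, so it is
    # published under a new major version.
    ("v2", "user"): lambda uid: {"id": uid, "full_name": "Alice"},
}

def handle(path: str, uid: int):
    """Dispatch a request to the handler for its major version."""
    version, resource = path.strip("/").split("/")[:2]
    return HANDLERS[(version, resource)](uid)

print(handle("/v1/user", 7))  # {'id': 7, 'name': 'Alice'}
```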

NEW QUESTION: 21
Your company has a support ticketing solution that uses App Engine Standard. The project that contains
the App Engine application already has a Virtual Private Cloud (VPC) network fully connected to the
company's on-premises environment through a Cloud VPN tunnel. You want to enable the App Engine
application to communicate with a database that is running in the company's on-premises environment.
What should you do?
A. Configure private services access
B. Configure private Google access for on-premises hosts only
C. Configure serverless VPC access
D. Configure private Google access
Answer: (SHOW ANSWER)
https://cloud.google.com/appengine/docs/standard/python3/connecting-vpc
https://cloud.google.com/appengine/docs/flexible/python/using-third-party-databases#on_premises

NEW QUESTION: 22
You are moving an application that uses MySQL from on-premises to Google Cloud. The application will
run on Compute Engine and will use Cloud SQL. You want to cut over to the Compute Engine
deployment of the application with minimal downtime and no data loss to your customers. You want to
migrate the application with minimal modification. You also need to determine the cutover strategy. What
should you do?
A. 1. Set up Cloud VPN to provide private network connectivity between the Compute Engine application
and the on-premises MySQL server.
2. Stop the on-premises application.
3. Create a mysqldump of the on-premises MySQL server.
4. Upload the dump to a Cloud Storage bucket.
5. Import the dump into Cloud SQL.
6. Modify the source code of the application to write queries to both databases and read from its local
database.
7. Start the Compute Engine application.
8. Stop the on-premises application.
B. 1. Set up Cloud SQL proxy and MySQL proxy.
2. Create a mysqldump of the on-premises MySQL server.
3. Upload the dump to a Cloud Storage bucket.
4. Import the dump into Cloud SQL.
5. Stop the on-premises application.
6. Start the Compute Engine application.
C. 1. Set up Cloud VPN to provide private network connectivity between the Compute Engine application
and the on-premises MySQL server.
2. Stop the on-premises application.
3. Start the Compute Engine application, configured to read and write to the on-premises MySQL server.
4. Create the replication configuration in Cloud SQL.
5. Configure the source database server to accept connections from the Cloud SQL replica.
6. Finalize the Cloud SQL replica configuration.
7. When replication has been completed, stop the Compute Engine application.
8. Promote the Cloud SQL replica to a standalone instance.
9. Restart the Compute Engine application, configured to read and write to the Cloud SQL standalone
instance.
D. 1. Stop the on-premises application.
2. Create a mysqldump of the on-premises MySQL server.
3. Upload the dump to a Cloud Storage bucket.
4. Import the dump into Cloud SQL.
5. Start the application on Compute Engine.
Answer: (SHOW ANSWER)
External replica promotion migration: In the migration strategy of external replica promotion, you create
an external database replica and synchronize the existing data to that replica. This can happen with
minimal downtime to the existing database. When you have a replica database, the two databases have
different roles that are referred to in this document as primary and replica. After the data is synchronized,
you promote the replica to be the primary in order to move the management layer with minimal impact to
database uptime. In Cloud SQL, an easy way to accomplish the external replica promotion is to use the
automated migration workflow. This process automates many of the steps that are needed for this type
of migration.
https://cloud.google.com/architecture/migrating-mysql-to-cloudsql-concept
- The best option for migrating your MySQL database is to use an external replica promotion. In this
strategy, you create a replica database and set your existing database as the primary. You wait until the
two databases are in sync, and you then promote your MySQL replica database to be the primary. This
process minimizes database downtime related to the database migration. -
https://cloud.google.com/architecture/migrating-mysql-to-cloudsql-concept#external_replica_promotion_migration

NEW QUESTION: 23
Your team will start developing a new application using microservices architecture on Kubernetes
Engine. As part of the development lifecycle, any code change that has been pushed to the remote
develop branch on your GitHub repository should be built and tested automatically. When the build and
test are successful, the relevant microservice will be deployed automatically in the development
environment. You want to ensure that all code deployed in the development environment follows this
process. What should you do?
A. Have each developer install a pre-commit hook on their workstation that tests the code and builds the
container when committing on the development branch. After a successful commit, have the developer
deploy the newly built container image on the development cluster.
B. Install a post-commit hook on the remote git repository that tests the code and builds the container
when code is pushed to the development branch. After a successful commit, have the developer deploy
the newly built container image on the development cluster.
C. Create a Cloud Build trigger based on the development branch that tests the code, builds the
container, and stores it in Container Registry. Create a deployment pipeline that watches for new images
and deploys the new image on the development cluster. Ensure only the deployment tool has access to
deploy new versions.
D. Create a Cloud Build trigger based on the development branch to build a new container image and
store it in Container Registry. Rely on Vulnerability Scanning to ensure the code tests succeed. As the
final step of the Cloud Build process, deploy the new container image on the development cluster.
Ensure only Cloud Build has access to deploy new versions.
Answer: (SHOW ANSWER)
https://cloud.google.com/container-registry/docs/overview
Create a Cloud Build trigger based on the development branch that tests the code, builds the container,
and stores it in Container Registry. Create a deployment pipeline that watches for new images and
deploys the new image on the development cluster. Ensure only the deployment tool has access to
deploy new versions.
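The Cloud Build trigger described above is driven by a build config checked into the repository. A hypothetical cloudbuild.yaml sketch (image name and test command are placeholders; `$PROJECT_ID` and `$SHORT_SHA` are standard Cloud Build substitutions):

```yaml
# Hypothetical cloudbuild.yaml: test the code, build the container,
# and push it to Container Registry. A separate deployment pipeline
# watches the registry and rolls the image out to the dev cluster.
steps:
- name: 'gcr.io/cloud-builders/docker'
  args: ['build', '-t', 'gcr.io/$PROJECT_ID/my-service:$SHORT_SHA', '.']
- name: 'gcr.io/cloud-builders/docker'
  args: ['run', 'gcr.io/$PROJECT_ID/my-service:$SHORT_SHA', 'npm', 'test']
images:
- 'gcr.io/$PROJECT_ID/my-service:$SHORT_SHA'
```

Keeping deployment out of the build config (and restricting deploy rights to the deployment tool) is what guarantees every image in the development cluster went through this process.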

NEW QUESTION: 24
Your solution is producing performance bugs in production that you did not see in staging and test
environments. You want to adjust your test and deployment procedures to avoid this problem in the
future. What should you do?
A. Deploy smaller changes to production.
B. Deploy changes to a small subset of users before rolling out to production.
C. Increase the load on your test and staging environments.
D. Deploy fewer changes to production.
Answer: (SHOW ANSWER)

NEW QUESTION: 25
For this question, refer to the Dress4Win case study.
Dress4Win has asked you for advice on how to migrate their on-premises MySQL deployment to the
cloud. They want to minimize downtime and performance impact to their on-premises solution during the
migration. Which approach should you recommend?
A. Create a new MySQL cluster in the cloud, configure applications to begin writing to both on-premises
and cloud MySQL masters, and destroy the original cluster at cutover.
B. Create a dump of the on-premises MySQL master server, and then shut it down, upload it to the cloud
environment, and load into a new MySQL cluster.
C. Setup a MySQL replica server/slave in the cloud environment, and configure it for asynchronous
replication from the MySQL master server on-premises until cutover.
D. Create a dump of the MySQL replica server into the cloud environment, load it into: Google Cloud
Datastore, and configure applications to read/write to Cloud Datastore at cutover.
Answer: (SHOW ANSWER)

NEW QUESTION: 26
You deploy your custom Java application to Google App Engine.
It fails to deploy and gives you the following stack trace:

A. Recompile the CloakedServlet class using an MD5 hash instead of SHA1
B. Digitally sign all of your JAR files and redeploy your application.
C. Upload missing JAR files and redeploy your application
Answer: B

NEW QUESTION: 27
You need to evaluate your team's readiness for a new GCP project. You must perform the evaluation and
create a skills gap plan that incorporates the business goal of cost optimization. Your team has deployed two
GCP projects successfully to date. What should you do?
A. Allocate budget for team training. Set a deadline for the new GCP project.
B. Allocate budget for team training. Create a roadmap for your team to achieve Google Cloud
certification based on job role.
C. Allocate budget to hire skilled external consultants. Set a deadline for the new GCP project.
D. Allocate budget to hire skilled external consultants. Create a roadmap for your team to achieve
Google Cloud certification based on job role.
Answer: (SHOW ANSWER)
https://services.google.com/fh/files/misc/cloud_center_of_excellence.pdf

NEW QUESTION: 28
Your company has just acquired another company, and you have been asked to integrate their existing
Google Cloud environment into your company's data center. Upon investigation, you discover that some
of the RFC 1918 IP ranges being used in the new company's Virtual Private Cloud (VPC) overlap with
your data center IP space. What should you do to enable connectivity and make sure that there are no
routing conflicts when connectivity is established?
A. Create a Cloud VPN connection from the new VPC to the data center, create a Cloud Router, and
apply new IP addresses so there is no overlapping IP space.
B. Create a Cloud VPN connection from the new VPC to the data center, and create a Cloud NAT
instance to perform NAT on the overlapping IP space.
C. Create a Cloud VPN connection from the new VPC to the data center, create a Cloud Router, and
apply a custom route advertisement to block the overlapping IP space.
D. Create a Cloud VPN connection from the new VPC to the data center, and apply a firewall rule that
blocks the overlapping IP space.
Answer: (SHOW ANSWER)
To connect two networks together we need (1) either VPN or interconnect and (2) peering. When there is
peering, you cannot have conflicting IP addresses. You can use either Cloud VPN or Cloud Interconnect
to securely connect your on-premises network to your VPC network.
(https://cloud.google.com/vpc/docs/vpc-peering#transit-network) At the time of peering, Google Cloud
checks to see if there are any subnet IP ranges that overlap subnet IP ranges in the other network. If
there is any overlap, peering is not established.
(https://cloud.google.com/vpc/docs/vpc-peering#considerations) NAT is used to translate private to
public IPs and vice versa; however, because we are connecting two networks together, both sides use
private IPs, so NAT is not applicable here.
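Before attempting the VPN setup, the RFC 1918 overlap described above can be detected with the standard library alone. The ranges below are hypothetical examples, not from the case study.

```python
# Detect overlapping CIDR ranges between two networks using only the
# standard library's ipaddress module.
import ipaddress

datacenter_ranges = ["10.0.0.0/16", "172.16.0.0/20"]
acquired_vpc_ranges = ["10.0.128.0/18", "192.168.0.0/24"]

def find_overlaps(ranges_a, ranges_b):
    """Return (a, b) pairs of CIDR strings whose address space overlaps."""
    overlaps = []
    for a in map(ipaddress.ip_network, ranges_a):
        for b in map(ipaddress.ip_network, ranges_b):
            if a.overlaps(b):
                overlaps.append((str(a), str(b)))
    return overlaps

print(find_overlaps(datacenter_ranges, acquired_vpc_ranges))
# [('10.0.0.0/16', '10.0.128.0/18')]
```

Any pair reported here would have to be re-addressed before VPN plus peering could be established.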

NEW QUESTION: 29
For this question refer to the TerramEarth case study
Operational parameters such as oil pressure are adjustable on each of TerramEarth's vehicles to
increase their efficiency, depending on their environmental conditions. Your primary goal is to increase
the operating efficiency of all 20 million cellular and unconnected vehicles in the field. How can you
accomplish this goal?
A. Have your engineers inspect the data for patterns, and then create an algorithm with rules that make
operational adjustments automatically.
B. Capture all operating data, train machine learning models that identify ideal operations, and run locally
to make operational adjustments automatically.
C. Implement a Google Cloud Dataflow streaming job with a sliding window, and use Google Cloud
Messaging (GCM) to make operational adjustments automatically.
D. Capture all operating data, train machine learning models that identify ideal operations, and host in
Google Cloud Machine Learning (ML) Platform to make operational adjustments automatically.
Answer: (SHOW ANSWER)

NEW QUESTION: 30
A. Export Cloud Machine Learning Engine performance metrics from Stackdriver to BigQuery, to be used
to analyze the efficiency of the model.
B. Build a roadmap to move the machine learning model training from Cloud GPUs to Cloud TPUs,
which offer better results.
C. Monitor Compute Engine announcements for availability of newer CPU architectures, and deploy the
model to them as soon as they are available for additional performance.
D. Save a history of recommendations and results of the recommendations in BigQuery, to be used as
training data.
Answer: (SHOW ANSWER)
https://cloud.google.com/solutions/building-a-serverless-ml-model

NEW QUESTION: 31
You have broken down a legacy monolithic application into a few containerized RESTful microservices.
You want to run those microservices on Cloud Run. You also want to make sure the services are highly
available with low latency to your customers. What should you do?
A. Deploy Cloud Run services to multiple availability zones. Create Cloud Endpoints that point to the
services. Create a global HTTP(S) Load Balancing instance and attach the Cloud Endpoints to its
backend.
B. Deploy Cloud Run services to multiple regions. Create serverless network endpoint groups pointing to
the services. Add the serverless NEGs to a backend service that is used by a global HTTP(S) Load
Balancing instance.
C. Deploy Cloud Run services to multiple regions. In Cloud DNS, create a latency-based DNS name that
points to the services.
D. Deploy Cloud Run services to multiple availability zones. Create a TCP/IP global load balancer. Add
the Cloud Run Endpoints to its backend service.
Answer: (SHOW ANSWER)
https://cloud.google.com/run/docs/multiple-regions

Valid Professional-Cloud-Architect Dumps shared by Fast2test.com for Helping Passing


Professional-Cloud-Architect Exam! Fast2test.com now offer the newest Professional-Cloud-
Architect exam dumps, the Fast2test.com Professional-Cloud-Architect exam questions have been
updated and answers have been corrected get the newest Fast2test.com Professional-Cloud-
Architect dumps with Test Engine here: https://www.fast2test.com/Professional-Cloud-Architect-
premium-file.html (251 Q&As Dumps, 30%OFF Special Discount: freecram)

NEW QUESTION: 32
Your company has a project in Google Cloud with three Virtual Private Clouds (VPCs). There is a
Compute Engine instance on each VPC. Network subnets do not overlap and must remain separated.
The network configuration is shown below.

Instance #1 is an exception and must communicate directly with both Instance #2 and Instance #3 via
internal IPs. How should you accomplish this?
A. Create a cloud router to advertise subnet #2 and subnet #3 to subnet #1.
B. Add two additional NICs to Instance #1 with the following configuration:
* NIC1
* VPC: VPC #2
* SUBNETWORK: subnet #2
* NIC2
* VPC: VPC #3
* SUBNETWORK: subnet #3
Update firewall rules to enable traffic between instances.
C. Create two VPN tunnels via CloudVPN:
* 1 between VPC #1 and VPC #2.
* 1 between VPC #2 and VPC #3.
Update firewall rules to enable traffic between the instances.
D. Peer all three VPCs:
* Peer VPC #1 with VPC #2.
* Peer VPC #2 with VPC #3.
Update firewall rules to enable traffic between the instances.
Answer: (SHOW ANSWER)
As per GCP documentation: "By default, every instance in a VPC network has a single network interface.
Use these instructions to create additional network interfaces. Each interface is attached to a different
VPC network, giving that instance access to different VPC networks in Google Cloud. You cannot attach
multiple network interfaces to the same VPC network." Refer to:
https://cloud.google.com/vpc/docs/create-use-multiple-interfaces
https://cloud.google.com/vpc/docs/create-use-multiple-interfaces#i_am_not_able_to_connect_to_secondary_interfaces_internal_ip
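Option B in practice: network interfaces are defined when the instance is created (they cannot be added to an existing VM), so a sketch of this setup recreates Instance #1 with three NICs. All names below are hypothetical:

```shell
# Recreate Instance #1 with one NIC per VPC; each NIC gets an internal IP
# from its subnet, so Instance #1 can reach #2 and #3 over internal IPs
gcloud compute instances create instance-1 \
    --zone=us-central1-a \
    --network-interface network=vpc-1,subnet=subnet-1 \
    --network-interface network=vpc-2,subnet=subnet-2 \
    --network-interface network=vpc-3,subnet=subnet-3
```

Firewall rules in VPC #2 and VPC #3 must still allow the traffic, as the answer notes.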

NEW QUESTION: 33
Your company is planning to upload several important files to Cloud Storage. After the upload is
completed, they want to verify that the upload content is identical to what they have on- premises. You
want to minimize the cost and effort of performing this check. What should you do?
A. 1) Use gsutil -m to upload all the files to Cloud Storage.
2) Use gsutil cp to download the uploaded files
3) Use Linux diff to compare the content of the files
B. 1) Use gsutil -m to upload all the files to Cloud Storage.
2) Develop a custom Java application that computes CRC32C hashes
3) Use gsutil ls -L gs://[YOUR_BUCKET_NAME] to collect CRC32C hashes of the uploaded files
4) Compare the hashes
C. 1) Use Linux shasum to compute a digest of files you want to upload
2) Use gsutil -m to upload all the files to the Cloud Storage
3) Use gsutil cp to download the uploaded files
4) Use Linux shasum to compute a digest of the downloaded files
5) Compare the hashes
D. 1) Use gsutil -m to upload all the files to Cloud Storage.
2) Use gsutil hash -c FILE_NAME to generate CRC32C hashes of all on-premises files
3) Use gsutil ls -L gs://[YOUR_BUCKET_NAME] to collect CRC32C hashes of the uploaded files
4) Compare the hashes
Answer: (SHOW ANSWER)
https://cloud.google.com/storage/docs/gsutil/commands/hash
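The verification flow in option D can be sketched as follows; the bucket and file names are placeholders:

```shell
# Upload all files in parallel (-m) to Cloud Storage
gsutil -m cp ./data/* gs://my-bucket/

# Compute CRC32C hashes of the local (on-premises) copies
gsutil hash -c ./data/report.csv

# List the server-side CRC32C hashes of the uploaded objects
gsutil ls -L gs://my-bucket/ | grep -i crc32c
```

Comparing server-side CRC32C hashes avoids re-downloading the files, which is what keeps cost and transfer time minimal.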
NEW QUESTION: 34
Your organization wants to control IAM policies for different departments independently, but centrally.
Which approach should you take?
A. Multiple Organizations with multiple Folders
B. Multiple Organizations, one for each department
C. A single Organization with Folder for each department
D. A single Organization with multiple projects, each with a central owner
Answer: (SHOW ANSWER)
Folders are nodes in the Cloud Platform Resource Hierarchy. A folder can contain projects, other folders,
or a combination of both. You can use folders to group projects under an organization in a hierarchy. For
example, your organization might contain multiple departments, each with its own set of GCP resources.
Folders allow you to group these resources on a per-department basis. Folders are used to group
resources that share common IAM policies. While a folder can contain multiple folders or resources, a
given folder or resource can have exactly one parent.
References: https://cloud.google.com/resource-manager/docs/creating-managing-folders
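A minimal sketch of the folder-per-department pattern; the organization ID, folder ID, group, and role below are hypothetical:

```shell
# Create one folder per department under the organization
gcloud resource-manager folders create \
    --display-name="Finance" \
    --organization=123456789012

# Grant a department-scoped IAM policy on that folder only;
# projects created inside the folder inherit it
gcloud resource-manager folders add-iam-policy-binding 987654321098 \
    --member="group:finance-admins@example.com" \
    --role="roles/editor"
```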

NEW QUESTION: 35
Mountkirk Games wants you to secure the connectivity from the new gaming application platform to
Google Cloud. You want to streamline the process and follow Google-recommended practices. What
should you do?
A. Configure HashiCorp Vault on Compute Engine, and use customer managed encryption keys and
Cloud Key Management Service (Cloud KMS) to manage the encryption keys. Configure these Secrets
to be used by the application platform.
B. Configure Workload Identity and service accounts to be used by the application platform.
C. Configure Kubernetes Secrets to store the secret, enable Application-Layer Secrets Encryption, and
use Cloud Key Management Service (Cloud KMS) to manage the encryption keys. Configure these
Secrets to be used by the application platform.
D. Use Kubernetes Secrets, which are obfuscated by default. Configure these Secrets to be used by the
application platform.
Answer: (SHOW ANSWER)

NEW QUESTION: 36
You have deployed several instances on Compute Engine. As a security requirement, instances cannot
have a public IP address. There is no VPN connection between Google Cloud and your office, and you
need to connect via SSH into a specific machine without violating the security requirements. What should
you do?
A. Configure Cloud NAT on the subnet where the instance is hosted. Create an SSH connection to the
Cloud NAT IP address to reach the instance.
B. Add all instances to an unmanaged instance group. Configure TCP Proxy Load Balancing with the
instance group as a backend. Connect to the instance using the TCP Proxy IP.
C. Configure Identity-Aware Proxy (IAP) for the instance and ensure that you have the role of IAP-
secured Tunnel User. Use the gcloud command line tool to ssh into the instance.
D. Create a bastion host in the network to SSH into the bastion host from your office location. From the
bastion host, SSH into the desired instance.
Answer: (SHOW ANSWER)
https://cloud.google.com/iap/docs/using-tcp-forwarding#tunneling_with_ssh Leveraging the BeyondCorp
security model. "This January, we enhanced context-aware access capabilities in Cloud Identity-Aware
Proxy (IAP) to help you protect SSH and RDP access to your virtual machines (VMs)-without needing to
provide your VMs with public IP addresses, and without having to set up bastion hosts. "
https://cloud.google.com/blog/products/identity-security/cloud-iap-enables-context-aware-access-to-vms-via-ssh-and-rdp-without-bastion-hosts
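Option C in command form; the project, user, instance, and zone names are placeholders:

```shell
# Grant yourself the IAP-secured Tunnel User role (project-level example)
gcloud projects add-iam-policy-binding my-project \
    --member="user:admin@example.com" \
    --role="roles/iap.tunnelResourceAccessor"

# SSH to the instance; with no external IP, gcloud tunnels the
# connection through Identity-Aware Proxy
gcloud compute ssh my-instance --zone=us-central1-a --tunnel-through-iap
```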

NEW QUESTION: 37
You want to create a private connection between your instances on Compute Engine and your on-
premises data center. You require a connection of at least 20 Gbps. You want to follow Google-
recommended practices.
How should you set up the connection?
A. Create a Cloud Content Delivery Network (Cloud CDN) and connect it to your on-premises data
center using Dedicated Interconnect.
B. Create a Cloud Content Delivery Network (Cloud CDN) and connect it to your on-premises datacenter
using a single Cloud VPN.
C. Create a VPC and connect it to your on-premises data center using Dedicated Interconnect.
D. Create a VPC and connect it to your on-premises data center using a single Cloud VPN.
Answer: (SHOW ANSWER)

NEW QUESTION: 38
A lead engineer wrote a custom tool that deploys virtual machines in the legacy data center. He wants to
migrate the custom tool to the new cloud environment. You want to advocate for the adoption of Google
Cloud Deployment Manager. What are two business risks of migrating to Cloud Deployment Manager?
Choose 2 answers
A. Cloud Deployment Manager uses Python.
B. Cloud Deployment Manager APIs could be deprecated in the future.
C. Cloud Deployment Manager is unfamiliar to the company's engineers.
D. Cloud Deployment Manager requires a Google APIs service account to run.
E. Cloud Deployment Manager can be used to permanently delete cloud resources.
F. Cloud Deployment Manager only supports automation of Google Cloud resources.
Answer: (SHOW ANSWER)
https://cloud.google.com/deployment-manager/docs/deployments/deleting-deployments

NEW QUESTION: 39
For this question, refer to the TerramEarth case study.
To speed up data retrieval, more vehicles will be upgraded to cellular connections and be able to
transmit data to the ETL process. The current FTP process is error-prone and restarts the data transfer
from the start of the file when connections fail, which happens often. You want to improve the reliability of
the solution and minimize data transfer time on the cellular connections. What should you do?
A. Use one Google Container Engine cluster of FTP servers. Save the data to a Multi-Regional bucket.
Run the ETL process using data in the bucket.
B. Use multiple Google Container Engine clusters running FTP servers located in different regions. Save
the data to Multi-Regional buckets in us, eu, and asia. Run the ETL process using the data in the bucket.
C. Directly transfer the files to different Google Cloud Multi-Regional Storage bucket locations in us, eu,
and asia using Google APIs over HTTP(S). Run the ETL process using the data in the bucket.
D. Directly transfer the files to a different Google Cloud Regional Storage bucket location in us, eu, and
asia using Google APIs over HTTP(S). Run the ETL process to retrieve the data from each Regional
bucket.
Answer: (SHOW ANSWER)
https://cloud.google.com/storage/docs/locations

NEW QUESTION: 40
A. Enable Virtual Private Cloud (VPC) flow logging.
B. Enable Firewall Rules Logging for the firewall rules you want to monitor.
C. Verify that your user account is assigned the compute.networkAdmin Identity and Access
Management (IAM) role.
D. Install the Google Cloud SDK, and verify that there are no Firewall logs in the command line output.
Answer: (SHOW ANSWER)

NEW QUESTION: 41
Your organization has decided to restrict the use of external IP addresses on instances to only approved
instances. You want to enforce this requirement across all of your Virtual Private Clouds (VPCs). What
should you do?
A. Remove the default route on all VPCs. Move all approved instances into a new subnet that has a
default route to an internet gateway.
B. Create a new VPC in custom mode. Create a new subnet for the approved instances, and set a
default route to the internet gateway on this new subnet.
C. Implement a Cloud NAT solution to remove the need for external IP addresses entirely.
D. Set an Organization Policy with a constraint on constraints/compute.vmExternalIpAccess. List the
approved instances in the allowedValues list.
Answer: D
Reference:
https://cloud.google.com/compute/docs/ip-addresses/reserve-static-external-ip-address#disableexternalip
You might want to restrict external IP addresses so that only specific VM instances can use them. This
option can help to prevent data exfiltration or maintain network isolation. Using an Organization Policy,
you can restrict external IP addresses to specific VM instances with constraints that control the use of
external IP addresses for your VM instances within an organization or a project.
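The constraint in option D can be applied with a list policy like this sketch; the organization ID and instance URI are hypothetical:

```shell
# allowed.yaml: allow external IPs only on the approved instances
cat > allowed.yaml <<'EOF'
constraint: constraints/compute.vmExternalIpAccess
listPolicy:
  allowedValues:
    - projects/my-project/zones/us-central1-a/instances/approved-vm-1
EOF

# Apply the policy at the organization level so it covers all VPCs
gcloud resource-manager org-policies set-policy allowed.yaml \
    --organization=123456789012
```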

NEW QUESTION: 42
Your customer is moving an existing corporate application to Google Cloud Platform from an on-
premises data center. The business owners require minimal user disruption. There are strict security
team requirements for storing passwords. What authentication strategy should they use?
A. Use G Suite Password Sync to replicate passwords into Google.
B. Federate authentication via SAML 2.0 to the existing Identity Provider.
C. Provision users in Google using the Google Cloud Directory Sync tool.
D. Ask users to set their Google password to match their corporate password.
Answer: (SHOW ANSWER)
https://cloud.google.com/solutions/authenticating-corporate-users-in-a-hybrid-environment

NEW QUESTION: 43
You have deployed an application to Kubernetes Engine, and are using the Cloud SQL proxy container
to make the Cloud SQL database available to the services running on Kubernetes. You are notified that
the application is reporting database connection issues. Your company policies require a post-mortem.
What should you do?
A. In the GCP Console, navigate to Stackdriver Logging. Consult logs for Kubernetes Engine and Cloud
SQL.
B. Validate that the Service Account used by the Cloud SQL proxy container still has the Cloud Build
Editor role.
C. In the GCP Console, navigate to Cloud SQL. Restore the latest backup. Use kubect1 to restart all
pods.
D. Use gcloud sql instances restart.
Answer: (SHOW ANSWER)

NEW QUESTION: 44
You need to optimize batch file transfers into Cloud Storage for Mountkirk Games' new Google Cloud
solution.
The batch files contain game statistics that need to be staged in Cloud Storage and be processed by an
extract transform load (ETL) tool. What should you do?
A. Use gsutil to batch copy the files in parallel.
B. Use gsutil to extract the files as the first part of ETL.
C. Use gsutil to batch move files in sequence.
D. Use gsutil to load the files as the last part of ETL.
Answer: (SHOW ANSWER)
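Option A amounts to a single parallel copy; the paths below are placeholders:

```shell
# -m parallelizes the transfer across multiple threads/processes,
# which is the recommended way to stage large batches of files
gsutil -m cp ./stats/*.csv gs://mountkirk-staging/
```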

NEW QUESTION: 45
You are developing an application using different microservices that should remain internal to the cluster.
You want to be able to configure each microservice with a specific number of replicas. You also want to
be able to address a specific microservice from any other microservice in a uniform way, regardless of
the number of replicas the microservice scales to. You need to implement this solution on Google
Kubernetes Engine. What should you do?
A. Deploy each microservice as a Deployment. Expose the Deployment in the cluster using a Service,
and use the Service DNS name to address it from other microservices within the cluster.
B. Deploy each microservice as a Deployment. Expose the Deployment in the cluster using an Ingress,
and use the Ingress IP address to address the Deployment from other microservices within the cluster.
C. Deploy each microservice as a Pod. Expose the Pod in the cluster using a Service, and use the
Service DNS name to address the microservice from other microservices within the cluster.
D. Deploy each microservice as a Pod. Expose the Pod in the cluster using an Ingress, and use the
Ingress IP address name to address the Pod from other microservices within the cluster.
Answer: (SHOW ANSWER)
https://kubernetes.io/docs/concepts/services-networking/ingress/
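Option A with kubectl, using hypothetical names:

```shell
# Deployment with a configurable number of replicas
kubectl create deployment game-profile --image=gcr.io/my-proj/profile:v1
kubectl scale deployment game-profile --replicas=3

# ClusterIP Service: internal-only, with a stable DNS name that is
# independent of how many replicas back it
kubectl expose deployment game-profile --port=80 --target-port=8080

# Other microservices in the same namespace address it as:
#   http://game-profile  (or game-profile.<namespace>.svc.cluster.local)
```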

NEW QUESTION: 46
You are analyzing and defining business processes to support your startup's trial usage of GCP, and you
don't yet know what consumer demand for your product will be. Your manager requires you to minimize
GCP service costs and adhere to Google best practices. What should you do?
A. Utilize free tier and sustained use discounts. Provision a staff position for service cost management.
B. Utilize free tier and sustained use discounts. Provide training to the team about service cost
management.
C. Utilize free tier and committed use discounts. Provision a staff position for service cost management.
D. Utilize free tier and committed use discounts. Provide training to the team about service cost
management.
Answer: D
https://cloud.google.com/docs/enterprise/best-practices-for-enterprise-organizations#billing_and_management


NEW QUESTION: 47
You have an application that runs in Google Kubernetes Engine (GKE). Over the last 2 weeks,
customers have reported that a specific part of the application returns errors very frequently. You
currently have no logging or monitoring solution enabled on your GKE cluster. You want to diagnose the
problem, but you have not been able to replicate the issue. You want to cause minimal disruption to the
application. What should you do?
A. 1. Create a new GKE cluster with Cloud Operations for GKE enabled.
2. Migrate the affected Pods to the new cluster, and redirect traffic for those Pods to the new cluster.
3. Use the GKE Monitoring dashboard to investigate logs from affected Pods.
B. 1. Update your GKE cluster to use Cloud Operations for GKE, and deploy Prometheus.
2. Set an alert to trigger whenever the application returns an error.
C. 1. Create a new GKE cluster with Cloud Operations for GKE enabled, and deploy Prometheus.
2. Migrate the affected Pods to the new cluster, and redirect traffic for those Pods to the new cluster.
3. Set an alert to trigger whenever the application returns an error.
D. 1. Update your GKE cluster to use Cloud Operations for GKE.
2. Use the GKE Monitoring dashboard to investigate logs from affected Pods.
Answer: (SHOW ANSWER)

NEW QUESTION: 48
Your customer is receiving reports that their recently updated Google App Engine application is taking
approximately 30 seconds to load for some of their users. This behavior was not reported before the
update. What strategy should you take?
A. Work with your ISP to diagnose the problem.
B. Open a support ticket to ask for network capture and flow data to diagnose the problem, then roll back
your application.
C. Roll back to an earlier known good release initially, then use Stackdriver Trace and logging to
diagnose the problem in a development/test/staging environment.
D. Roll back to an earlier known good release, then push the release again at a quieter period to
investigate. Then use Stackdriver Trace and logging to diagnose the problem.
Answer: (SHOW ANSWER)
Stackdriver Logging allows you to store, search, analyze, monitor, and alert on log data and events from
Google Cloud Platform and Amazon Web Services (AWS). Our API also allows ingestion of any custom
log data from any source. Stackdriver Logging is a fully managed service that performs at scale and can
ingest application and system log data from thousands of VMs. Even better, you can analyze all that log
data in real time.
References: https://cloud.google.com/logging/

NEW QUESTION: 49
Your company sends all Google Cloud logs to Cloud Logging. Your security team wants to monitor the
logs. You want to ensure that the security team can react quickly if an anomaly such as an unwanted
firewall change or server breach is detected. You want to follow Google-recommended practices. What
should you do?
A. Schedule a cron job with Cloud Scheduler. The scheduled job queries the logs every minute for the
relevant events.
B. Export logs to BigQuery, and trigger a query in BigQuery to process the log data for the relevant
events.
C. Export logs to a Pub/Sub topic, and trigger Cloud Function with the relevant log events.
D. Export logs to a Cloud Storage bucket, and trigger Cloud Run with the relevant log events.
Answer: (SHOW ANSWER)
https://cloud.google.com/blog/products/management-tools/automate-your-response-to-a-cloud-logging-event
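Option C wires Cloud Logging to Pub/Sub and a Cloud Function; a sketch with hypothetical names and filter:

```shell
# Route matching log entries to a Pub/Sub topic
gcloud pubsub topics create security-events

gcloud logging sinks create security-sink \
    pubsub.googleapis.com/projects/my-project/topics/security-events \
    --log-filter='resource.type="gce_firewall_rule"'

# A Cloud Function subscribed to the topic reacts in near real time
gcloud functions deploy notify-security-team \
    --runtime=python310 --trigger-topic=security-events --entry-point=handler
```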

NEW QUESTION: 50
A. Upgrade the bandwidth on the Dedicated Interconnect connection to 100 G.
B. Add a new Carrier Peering connection.
C. Add a new Dedicated Interconnect connection.
D. Add three new Cloud VPN connections.
Answer: (SHOW ANSWER)

NEW QUESTION: 51
For this question, refer to the TerramEarth case study. A new architecture that writes all incoming data to
BigQuery has been introduced. You notice that the data is dirty, and want to ensure data quality on an
automated daily basis while managing cost.
What should you do?
A. Create a SQL statement on the data in BigQuery, and save it as a view. Run the view daily, and save
the result to a new table.
B. Set up a streaming Cloud Dataflow job, receiving data by the ingestion process. Clean the data in a
Cloud Dataflow pipeline.
C. Use Cloud Dataprep and configure the BigQuery tables as the source. Schedule a daily job to clean
the data.
D. Create a Cloud Function that reads data from BigQuery and cleans it. Trigger it. Trigger the Cloud
Function from a Compute Engine instance.
Answer: (SHOW ANSWER)

NEW QUESTION: 52
You want to automate the creation of a managed instance group and a startup script to install the OS
package dependencies. You want to minimize the startup time for VMs in the instance group.
What should you do?
A. Use Terraform to create the managed instance group and a startup script to install the OS package
dependencies.
B. Create a custom VM image with all OS package dependencies. Use Deployment Manager to create
the managed instance group with the VM image.
C. Use Puppet to create the managed instance group and install the OS package dependencies.
D. Use Deployment Manager to create the managed instance group and Ansible to install the OS
package dependencies.
Answer: (SHOW ANSWER)
"Custom images are more deterministic and start more quickly than instances with startup scripts.
However, startup scripts are more flexible and let you update the apps and settings in your instances
more easily." https://cloud.google.com/compute/docs/instance-templates/create-instance-templates#using_custom_or_public_images_in_your_instance_templates
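Option B names Deployment Manager; the equivalent gcloud commands sketch the same flow (the image, template, and group names are hypothetical):

```shell
# Bake the OS package dependencies into a custom image once
gcloud compute images create app-baked-v1 \
    --source-disk=build-vm --source-disk-zone=us-central1-a

# Instance template referencing the baked image:
# no startup-time package installs, so VMs boot fast
gcloud compute instance-templates create app-template \
    --image=app-baked-v1

# Managed instance group created from the template
gcloud compute instance-groups managed create app-mig \
    --template=app-template --size=3 --zone=us-central1-a
```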

NEW QUESTION: 53
A lead software engineer tells you that his new application design uses websockets and HTTP sessions
that are not distributed across the web servers. You want to help him ensure his application will run
properly on Google Cloud Platform. What should you do?
A. Help the engineer to convert his websocket code to use HTTP streaming.
B. Review the encryption requirements for websocket connections with the security team.
C. Meet with the cloud operations team and the engineer to discuss load balancer options.
D. Help the engineer redesign the application to use a distributed user session service that does not rely
on websockets and HTTP sessions.
Answer: (SHOW ANSWER)
Google Cloud Platform (GCP) HTTP(S) load balancing provides global load balancing for HTTP(S)
requests destined for your instances.
The HTTP(S) load balancer has native support for the WebSocket protocol.
Incorrect Answers:
A: HTTP server push, also known as HTTP streaming, is a client-server communication pattern that
sends information from an HTTP server to a client asynchronously, without a client request. A server
push architecture is especially effective for highly interactive web or mobile applications, where one or
more clients need to receive continuous information from the server.
References: https://cloud.google.com/compute/docs/load-balancing/http/

NEW QUESTION: 54
You are deploying a PHP App Engine Standard service with SQL as the backend. You want to minimize
the number of queries to the database.
What should you do?
A. Set the memcache service level to dedicated. Create a key from the hash of the query, and return
database values from memcache before issuing a query to Cloud SQL.
B. Set the memcache service level to dedicated. Create a cron task that runs every minute to populate
the cache with keys containing query results.
C. Set the memcache service level to shared. Create a cron task that runs every minute to save all
expected queries to a key called "cached-queries".
D. Set the memcache service level to shared. Create a key called "cached-queries", and return database
values from the key before using a query to Cloud SQL.
Answer: (SHOW ANSWER)
https://cloud.google.com/appengine/docs/standard/php/memcache/using

NEW QUESTION: 55
Your company is using Google Cloud. You have two folders under the Organization: Finance and
Shopping. The members of the development team are in a Google Group. The development team group
has been assigned the Project Owner role on the Organization. You want to prevent the development
team from creating resources in projects in the Finance folder. What should you do?
A. Assign the development team group the Project Viewer role on the Finance folder, and assign the
development team group the Project Owner role on the Shopping folder.
B. Assign the development team group only the Project Viewer role on the Finance folder.
C. Assign the development team group the Project Owner role on the Shopping folder, and remove the
development team group Project Owner role from the Organization.
D. Assign the development team group only the Project Owner role on the Shopping folder.
Answer: (SHOW ANSWER)
https://cloud.google.com/resource-manager/docs/cloud-platform-resource-hierarchy
"Roles are always inherited, and there is no way to explicitly remove a permission for a lower-level
resource that is granted at a higher level in the resource hierarchy. Given the above example, even if you
were to remove the Project Editor role from Bob on the "Test GCP Project", he would still inherit that role
from the "Dept Y" folder, so he would still have the permissions for that role on "Test GCP Project"."

NEW QUESTION: 56
Your company places a high value on being responsive and meeting customer needs quickly. Their
primary business objectives are release speed and agility. You want to reduce the chance of security
errors being accidentally introduced. Which two actions can you take? Choose 2 answers
A. Ensure every code check-in is peer reviewed by a security SME.
B. Use source code security analyzers as part of the CI/CD pipeline.
C. Ensure you have stubs to unit test all interfaces between components.
D. Enable code signing and a trusted binary repository integrated with your CI/CD pipeline.
E. Run a vulnerability security scanner as part of your continuous-integration/continuous-delivery
(CI/CD) pipeline.
Answer: (SHOW ANSWER)
https://docs.microsoft.com/en-us/vsts/articles/security-validation-cicd-pipeline?view=vsts

NEW QUESTION: 57
Your organization has a 3-tier web application deployed in the same network on Google Cloud Platform.
Each tier (web, API, and database) scales independently of the others. Network traffic should flow
through the web tier to the API tier and then on to the database tier. Traffic should not flow between the
web and the database tier. How should you configure the network?
A. Add each tier to a different subnetwork.
B. Set up software based firewalls on individual VMs.
C. Add tags to each tier and set up routes to allow the desired traffic flow.
D. Add tags to each tier and set up firewall rules to allow the desired traffic flow.
Answer: (SHOW ANSWER)
Google Cloud Platform (GCP) enforces firewall rules through rules and tags. GCP rules and tags can be
defined once and used across all regions.
References: https://cloud.google.com/docs/compare/openstack/
https://aws.amazon.com/blogs/aws/building-three-tier-architectures-with-security-groups/
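Option D translates into tag-based firewall rules such as this sketch; the network, tags, and ports are hypothetical:

```shell
# Allow web -> API only
gcloud compute firewall-rules create allow-web-to-api \
    --network=app-net --allow=tcp:8080 \
    --source-tags=web --target-tags=api

# Allow API -> database only; with no web->db rule, that path stays blocked
gcloud compute firewall-rules create allow-api-to-db \
    --network=app-net --allow=tcp:3306 \
    --source-tags=api --target-tags=db
```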

NEW QUESTION: 58
For this question, refer to the TerramEarth case study.
You start to build a new application that uses a few Cloud Functions for the backend. One use case
requires a Cloud Function func_display to invoke another Cloud Function func_query. You want
func_query only to accept invocations from func_display. You also want to follow Google's
recommended best practices. What should you do?
A. Create a token and pass it in as an environment variable to func_display. When invoking func_query,
include the token in the request. Pass the same token to func_query and reject the invocation if the
tokens are different.
B. Make func_query 'Require authentication.' Create a unique service account and associate it to
func_display. Grant the service account invoker role for func_query. Create an id token in func_display
and include the token to the request when invoking func_query.
C. Make func_query 'Require authentication' and only accept internal traffic. Create those two functions
in the same VPC. Create an ingress firewall rule for func_query to only allow traffic from func_display.
D. Create those two functions in the same project and VPC. Make func_query only accept internal traffic.
Create an ingress firewall for func_query to only allow traffic from func_display. Also, make sure both
functions use the same service account.
Answer: (SHOW ANSWER)
https://cloud.google.com/functions/docs/securing/authenticating#authenticating_function_to_function_calls
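The IAM side of option B can be sketched as follows; the project and service account names are hypothetical:

```shell
# Dedicated identity for the calling function
gcloud iam service-accounts create func-display-sa

# Deploy the caller with that identity; func_query is deployed
# without --allow-unauthenticated, so it requires authentication
gcloud functions deploy func_display \
    --runtime=python310 --trigger-http \
    --service-account=func-display-sa@my-project.iam.gserviceaccount.com

# Allow only that identity to invoke func_query
gcloud functions add-iam-policy-binding func_query \
    --member="serviceAccount:func-display-sa@my-project.iam.gserviceaccount.com" \
    --role="roles/cloudfunctions.invoker"
```

At runtime, func_display fetches an ID token for func_query's URL and sends it in the Authorization header, as the linked documentation describes.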

NEW QUESTION: 59
You need to ensure reliability for your application and operations by supporting reliable task scheduling
for compute on GCP. Leveraging Google best practices, what should you do?
A. Using the Cron service provided by App Engine, publishing messages directly to a message-
processing utility service running on Compute Engine instances.
B. Using the Cron service provided by App Engine, publish messages to a Cloud Pub/Sub topic.
Subscribe to that topic using a message-processing utility service running on Compute Engine instances.
C. Using the Cron service provided by Google Kubernetes Engine (GKE), publish messages directly to a
message-processing utility service running on Compute Engine instances.
D. Using the Cron service provided by GKE, publish messages to a Cloud Pub/Sub topic. Subscribe to
that topic using a message-processing utility service running on Compute Engine instances.
Answer: (SHOW ANSWER)
https://cloud.google.com/solutions/reliable-task-scheduling-compute-engine
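Option B pairs App Engine cron with Pub/Sub; a minimal sketch, where the topic name and handler URL are hypothetical:

```shell
# Topic that decouples the scheduler from the workers
gcloud pubsub topics create task-schedule

# cron.yaml: App Engine hits this handler on schedule; the handler
# publishes to the topic, and Compute Engine workers subscribe to it
cat > cron.yaml <<'EOF'
cron:
- description: "enqueue scheduled tasks"
  url: /publish-task
  schedule: every 5 minutes
EOF

gcloud app deploy cron.yaml
```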
NEW QUESTION: 60
A. Port the application code to run on Google App Engine.
B. Integrate Cloud Dataflow into the application to capture real-time metrics.
C. Instrument the application with a monitoring tool like Stackdriver Debugger.
D. Select an automation framework to reliably provision the cloud infrastructure.
E. Deploy a continuous integration tool with automated testing in a staging environment.
F. Migrate from MySQL to a managed NoSQL database like Google Cloud Datastore or Bigtable.
Answer: (SHOW ANSWER)
References: https://cloud.google.com/appengine/docs/standard/java/tools/uploadinganapp
https://cloud.google.com/appengine/docs/standard/java/building-app/cloud-sql

NEW QUESTION: 61
For this question, refer to the TerramEarth case study. You are asked to design a new architecture for
the ingestion of the data of the 200,000 vehicles that are connected to a cellular network. You want to
follow Google-recommended practices.
Considering the technical requirements, which components should you use for the ingestion of the data?
A. Google Kubernetes Engine with an SSL Ingress
B. Cloud IoT Core with public/private key pairs
C. Compute Engine with project-wide SSH keys
D. Compute Engine with specific SSH keys
Answer: (SHOW ANSWER)
https://cloud.google.com/solutions/iot-overview
https://cloud.google.com/iot/quotas
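At the time of this exam, option B looked roughly like the following (Cloud IoT Core has since been retired by Google, so treat this as a historical sketch; registry, device, and file names are illustrative):

```shell
# Topic that Cloud IoT Core forwards device telemetry to.
gcloud pubsub topics create vehicle-telemetry

# Device registry tied to that topic.
gcloud iot registries create vehicle-registry \
    --region=us-central1 \
    --event-notification-config=topic=vehicle-telemetry

# Register one vehicle with its RSA public key; the matching private
# key stays on the vehicle and signs the JWTs it authenticates with.
gcloud iot devices create vehicle-001 \
    --region=us-central1 \
    --registry=vehicle-registry \
    --public-key=path=rsa_cert.pem,type=rsa-x509-pem
```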

Valid Professional-Cloud-Architect Dumps shared by Fast2test.com for Helping Passing


Professional-Cloud-Architect Exam! Fast2test.com now offer the newest Professional-Cloud-
Architect exam dumps, the Fast2test.com Professional-Cloud-Architect exam questions have been
updated and answers have been corrected get the newest Fast2test.com Professional-Cloud-
Architect dumps with Test Engine here: https://www.fast2test.com/Professional-Cloud-Architect-
premium-file.html (251 Q&As Dumps, 30%OFF Special Discount: freecram)

NEW QUESTION: 62
For this question, refer to the Dress4Win case study.
The Dress4Win security team has disabled external SSH access into production virtual machines (VMs)
on Google Cloud Platform (GCP). The operations team needs to remotely manage the VMs, build and
push Docker containers, and manage Google Cloud Storage objects. What can they do?
A. Develop a new access request process that grants temporary SSH access to cloud VMs when an
operations engineer needs to perform a task.
B. Grant the operations engineers access to use Google Cloud Shell.
C. Configure a VPN connection to GCP to allow SSH access to the cloud VMs.
D. Have the development team build an API service that allows the operations team to execute specific
remote procedure calls to accomplish their tasks.
Answer: (SHOW ANSWER)

NEW QUESTION: 63
You are developing a globally scaled frontend for a legacy streaming backend data API. This API
expects events in strict chronological order with no repeat data for proper processing.
Which products should you deploy to ensure guaranteed-once FIFO (first-in, first-out) delivery of data?
A. Cloud Pub/Sub alone
B. Cloud Pub/Sub to Cloud DataFlow
C. Cloud Pub/Sub to Stackdriver
D. Cloud Pub/Sub to Cloud SQL
Answer: (SHOW ANSWER)
Reference https://cloud.google.com/pubsub/docs/ordering
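Pub/Sub by itself delivers at-least-once, so ordering must be enabled explicitly and Dataflow is used downstream to deduplicate, which is why option B beats option A. A sketch of the Pub/Sub side, with illustrative names (publishers must also set an ordering key on each message):

```shell
# Topic the frontend publishes events to.
gcloud pubsub topics create events

# Ordered delivery is opt-in per subscription; combined with ordering
# keys on publish, messages with the same key arrive in order.
gcloud pubsub subscriptions create events-sub \
    --topic=events \
    --enable-message-ordering
```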

NEW QUESTION: 64
For this question, refer to the Dress4Win case study.
As part of Dress4Win's plans to migrate to the cloud, they want to be able to set up a managed logging
and monitoring system so they can handle spikes in their traffic load. They want to ensure that:
* The infrastructure can be notified when it needs to scale up and down to handle the ebb and flow of
usage throughout the day
* Their administrators are notified automatically when their application reports errors.
* They can filter their aggregated logs down in order to debug one piece of the application across many
hosts
Which Google StackDriver features should they use?
A. Monitoring, Logging, Debug, Error Report
B. Monitoring, Logging, Alerts, Error Reporting
C. Logging, Alerts, Insights, Debug
D. Monitoring, Trace, Debug, Logging
Answer: (SHOW ANSWER)
Topic 2, TerramEarth
Solution Concept
There are 20 million TerramEarth vehicles in operation that collect 120 fields of data per second. Data is
stored locally on the vehicle and can be accessed for analysis when a vehicle is serviced. The data is
downloaded via a maintenance port. This same port can be used to adjust operational parameters,
allowing the vehicles to be upgraded in the field with new computing modules.
Approximately 200,000 vehicles are connected to a cellular network, allowing TerramEarth to collect data
directly. At a rate of 120 fields of data per second, with 22 hours of operation per day, TerramEarth
collects a total of about 9 TB/day from these connected vehicles.
Existing Technical Environment
TerramEarth's existing architecture is composed of Linux-based systems that reside in a data center.
These systems gzip CSV files from the field and upload via FTP, transform and aggregate them, and
place the data in their data warehouse. Because this process takes time, aggregated reports are based
on data that is 3 weeks old.
With this data, TerramEarth has been able to preemptively stock replacement parts and reduce
unplanned downtime of their vehicles by 60%. However, because the data is stale, some customers are
without their vehicles for up to 4 weeks while they wait for replacement parts.
Business Requirements
* Decrease unplanned vehicle downtime to less than 1 week, without increasing the cost of carrying
surplus inventory
* Support the dealer network with more data on how their customers use their equipment to better
position new products and services.
* Have the ability to partner with different companies, especially with seed and fertilizer suppliers in the
fast-growing agricultural business, to create compelling joint offerings for their customers.
CEO Statement
We have been successful in capitalizing on the trend toward larger vehicles to increase the productivity
of our customers. Technological change is occurring rapidly and TerramEarth has taken advantage of
connected devices technology to provide our customers with better services, such as our intelligent
farming equipment. With this technology, we have been able to increase farmers' yields by 25%, by
using past trends to adjust how our vehicles operate. These advances have led to the rapid growth of our
agricultural product line, which we expect will generate 50% of our revenues by 2020.
CTO Statement
Our competitive advantage has always been in the manufacturing process with our ability to build better
vehicles for lower cost than our competitors. However, new products with different approaches are
constantly being developed, and I'm concerned that we lack the skills to undergo the next wave of
transformations in our industry. Unfortunately, our CEO doesn't take technology obsolescence seriously
and he considers the many new companies in our industry to be niche players. My goals are to build our
skills while addressing immediate market needs through incremental innovations.

NEW QUESTION: 65
During a high traffic portion of the day, one of your relational databases crashes, but the replica is never
promoted to a master. You want to avoid this in the future. What should you do?
A. Use a different database.
B. Choose larger instances for your database.
C. Create snapshots of your database more regularly.
D. Implement routinely scheduled failovers of your databases.
Answer: D (LEAVE A REPLY)
https://cloud.google.com/solutions/dr-scenarios-planning-guide

NEW QUESTION: 66
You are creating an App Engine application that uses Cloud Datastore as its persistence layer. You need
to retrieve several root entities for which you have the identifiers. You want to minimize the overhead in
operations performed by Cloud Datastore. What should you do?
A. Create the Key object for each Entity and run a batch get operation
B. Create the Key object for each Entity and run multiple get operations, one operation for each entity
C. Use the identifiers to create a query filter and run a batch query operation
D. Use the identifiers to create a query filter and run multiple query operations, one operation for each
entity
Answer: (SHOW ANSWER)
https://cloud.google.com/datastore/docs/concepts/entities#datastore-datastore-batch-upsert-nodejs

NEW QUESTION: 67
Your company operates nationally and plans to use GCP for multiple batch workloads, including some
that are not time-critical. You also need to use GCP services that are HIPAA-certified and manage
service costs.
How should you design to meet Google best practices?
A. Provisioning preemptible VMs to reduce cost. Discontinue use of all GCP services and APIs that are
not HIPAA-compliant.
B. Provisioning preemptible VMs to reduce cost. Disable and then discontinue use of all GCP and APIs
that are not HIPAA-compliant.
C. Provision standard VMs in the same region to reduce cost. Discontinue use of all GCP services and
APIs that are not HIPAA-compliant.
D. Provision standard VMs to the same region to reduce cost. Disable and then discontinue use of all
GCP services and APIs that are not HIPAA-compliant.
Answer: B (LEAVE A REPLY)
https://cloud.google.com/security/compliance/hipaa/

NEW QUESTION: 68
You want to enable your running Google Kubernetes Engine cluster to scale as demand for your
application changes.
What should you do?
A. Add additional nodes to your Kubernetes Engine cluster using the following command:
gcloud container clusters resize CLUSTER_NAME --size 10
B. Add a tag to the instances in the cluster with the following command:
gcloud compute instances add-tags INSTANCE --tags enable-autoscaling,max-nodes-10
C. Update the existing Kubernetes Engine cluster with the following command:
gcloud alpha container clusters update mycluster --enable-autoscaling --min-nodes=1 --max-nodes=10
D. Create a new Kubernetes Engine cluster with the following command:
gcloud alpha container clusters create mycluster --enable-autoscaling --min-nodes=1 --max-nodes=10
and redeploy your application
Answer: (SHOW ANSWER)
https://cloud.google.com/kubernetes-engine/docs/concepts/cluster-autoscaler To enable autoscaling for
an existing node pool, run the following command:
gcloud container clusters update [CLUSTER_NAME] --enable-autoscaling \
    --min-nodes 1 --max-nodes 10 --zone [COMPUTE_ZONE] --node-pool default-pool

NEW QUESTION: 69
You are helping the QA team to roll out a new load-testing tool to test the scalability of your primary
cloud services that run on Google Compute Engine with Cloud Bigtable. Which three requirements
should they include? Choose 3 answers
A. Ensure that the load tests validate the performance of Cloud Bigtable.
B. Create a separate Google Cloud project to use for the load-testing environment.
C. Ensure all third-party systems your services use are capable of handling high load.
D. Instrument the load-testing tool and the target services with detailed logging and metrics collection.
E. Instrument the production services to record every transaction for replay by the load-testing tool.
F. Schedule the load-testing tool to regularly run against the production environment.
Answer: A,B,D (LEAVE A REPLY)

NEW QUESTION: 70
A. Use the --no-auto-delete flag on all persistent disks and stop the VM.
B. Use the --auto-delete flag on all persistent disks and terminate the VM.
C. Apply VM CPU utilization label and include it in the BigQuery billing export.
D. Use Google BigQuery billing export and labels to associate cost to groups.
E. Store all state into local SSD, snapshot the persistent disks, and terminate the VM.
F. Store all state in Google Cloud Storage, snapshot the persistent disks, and terminate the VM.
Answer: (SHOW ANSWER)
https://cloud.google.com/billing/docs/how-to/export-data-bigquery

NEW QUESTION: 71
You are migrating your on-premises solution to Google Cloud in several phases. You will use Cloud VPN
to maintain a connection between your on-premises systems and Google Cloud until the migration is
completed.
You want to make sure all your on-premises systems remain reachable during this period. How should
you organize your networking in Google Cloud?
A. Use an IP range on Google Cloud that does not overlap with the range you use on-premises for your
primary IP range and use a secondary range with the same IP range as you use on-premises
B. Use the same IP range on Google Cloud as you use on-premises
C. Use the same IP range on Google Cloud as you use on-premises for your primary IP range and use a
secondary range that does not overlap with the range you use on-premises
D. Use an IP range on Google Cloud that does not overlap with the range you use on-premises
Answer: (SHOW ANSWER)

NEW QUESTION: 72
Your company is migrating its on-premises data center into the cloud. As part of the migration, you want
to integrate Kubernetes Engine for workload orchestration. Parts of your architecture must also be PCI
DSS compliant.
Which of the following is most accurate?
A. App Engine is the only compute platform on GCP that is certified for PCI DSS hosting.
B. Kubernetes Engine cannot be used under PCI DSS because it is considered shared hosting.
C. Kubernetes Engine and GCP provide the tools you need to build a PCI DSS-compliant environment.
D. All Google Cloud services are usable because Google Cloud Platform is certified PCI-compliant.
Answer: (SHOW ANSWER)
https://cloud.google.com/security/compliance/pci-dss

NEW QUESTION: 73
You have an application that will run on Compute Engine. You need to design an architecture that takes
into account a disaster recovery plan that requires your application to fail over to another region in case
of a regional outage. What should you do?
A. Deploy the application on a Compute Engine instance. Use the instance to serve traffic, and use the
HTTP load balancing service to fail over to an instance on your premises in case of a disaster.
B. Deploy the application on two Compute Engine instances in the same project but in a different region.
Use the first instance to serve traffic, and use the HTTP load balancing service to fail over to the standby
instance in case of a disaster.
C. Deploy the application on two Compute Engine instance groups, each in the same project but in a
different region. Use the first instance group to serve traffic, and use the HTTP load balancing service to
fail over to the standby instance group in case of a disaster.
D. Deploy the application on two Compute Engine instance groups, each in separate project and a
different region. Use the first instance group to serve traffic, and use the HTTP load balancing service to
fail over to the standby instance in case of a disaster.
Answer: (SHOW ANSWER)
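Option C (two regional managed instance groups behind one global load balancer) can be sketched as follows; the template, health check, and group names are illustrative:

```shell
# Two regional managed instance groups built from the same template.
gcloud compute instance-groups managed create app-mig-us \
    --template=app-template --size=3 --region=us-central1
gcloud compute instance-groups managed create app-mig-eu \
    --template=app-template --size=3 --region=europe-west1

# One global backend service; the HTTP(S) load balancer health-checks
# both groups and shifts traffic away from a region that goes down.
gcloud compute backend-services create app-backend \
    --protocol=HTTP --health-checks=app-hc --global
gcloud compute backend-services add-backend app-backend \
    --instance-group=app-mig-us --instance-group-region=us-central1 --global
gcloud compute backend-services add-backend app-backend \
    --instance-group=app-mig-eu --instance-group-region=europe-west1 --global
```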

NEW QUESTION: 74
For this question, refer to the Dress4Win case study.
Dress4Win has end-to-end tests covering 100% of their endpoints. They want to ensure that the move to
the cloud does not introduce any new bugs. Which additional testing methods should the developers
employ to prevent an outage?
A. They should add additional unit tests and production scale load tests on their cloud staging
environment.
B. They should enable Google Stackdriver Debugger on the application code to show errors in the code.
C. They should add canary tests so developers can measure how much of an impact the new release
causes to latency.
D. They should run the end-to-end tests in the cloud staging environment to determine if the code is
working as intended.
Answer: A (LEAVE A REPLY)

NEW QUESTION: 75
You are building a continuous deployment pipeline for a project stored in a Git source repository and
want to ensure that code changes can be verified deploying to production. What should you do?
A. Use Spinnaker to deploy builds to production using the red/black deployment strategy so that changes
can easily be rolled back.
B. Use Spinnaker to deploy builds to production and run tests on production deployments.
C. Use Jenkins to build the staging branches and the master branch. Build and deploy changes to
production for 10% of users before doing a complete rollout.
D. Use Jenkins to monitor tags in the repository. Deploy staging tags to a staging environment for
testing. After testing, tag the repository for production and deploy that to the production environment.
Answer: (SHOW ANSWER)

NEW QUESTION: 76
For this question, refer to the TerramEarth case study. Considering the technical requirements, how
should you reduce the unplanned vehicle downtime in GCP?
A. Use Cloud Dataproc Hive as the data warehouse. Upload gzip files to a MultiRegional Cloud Storage
bucket. Upload this data into BigQuery using gcloud. Use Google data Studio for analysis and reporting.
B. Use Cloud Dataproc Hive as the data warehouse. Directly stream data into partitioned Hive tables. Use
Pig scripts to analyze data.
C. Use BigQuery as the data warehouse. Connect all vehicles to the network and stream data into
BigQuery using Cloud Pub/Sub and Cloud Dataflow. Use Google Data Studio for analysis and reporting.
D. Use BigQuery as the data warehouse. Connect all vehicles to the network and upload gzip files to a
Multi-Regional Cloud Storage bucket using gcloud. Use Google Data Studio for analysis and reporting.
Answer: C (LEAVE A REPLY)
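The streaming path in option C can be assembled from a Google-provided Dataflow template; a hedged sketch, where PROJECT_ID, the topic, and the table spec are placeholders:

```shell
# Topic the connected vehicles stream telemetry into.
gcloud pubsub topics create vehicle-telemetry

# Google-provided streaming template that reads the topic and writes
# each message as a row into a BigQuery table.
gcloud dataflow jobs run telemetry-ingest \
    --gcs-location=gs://dataflow-templates/latest/PubSub_to_BigQuery \
    --region=us-central1 \
    --parameters=inputTopic=projects/PROJECT_ID/topics/vehicle-telemetry,outputTableSpec=PROJECT_ID:telemetry.raw_events
```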


NEW QUESTION: 77
Your company wants to track whether someone is present in a meeting room reserved for a scheduled
meeting. There are 1000 meeting rooms across 5 offices on 3 continents. Each room is equipped with a
motion sensor that reports its status every second. The data from the motion detector includes only a
sensor ID and several different discrete items of information. Analysts will use this data, together with
information about account owners and office locations. Which database type should you use?
A. Flat file
B. NoSQL
C. Relational
D. Blobstore
Answer: (SHOW ANSWER)
Relational databases were not designed to cope with the scale and agility challenges that face modern
applications, nor were they built to take advantage of the commodity storage and processing power
available today.
NoSQL fits well for:
Incorrect Answers:
D: The Blobstore API allows your application to serve data objects, called blobs, that are much larger
than the size allowed for objects in the Datastore service. Blobs are useful for serving large files, such as
video or image files, and for allowing users to upload large data files.
References: https://www.mongodb.com/nosql-explained

NEW QUESTION: 78
Your company's test suite is a custom C++ application that runs tests throughout each day on Linux
virtual machines. The full test suite takes several hours to complete, running on a limited number of on
premises servers reserved for testing. Your company wants to move the testing infrastructure to the
cloud, to reduce the amount of time it takes to fully test a change to the system, while changing the tests
as little as possible. Which cloud infrastructure should you recommend?
A. Google Compute Engine unmanaged instance groups and Network Load Balancer
B. Google Compute Engine managed instance groups with auto-scaling
C. Google Cloud Dataproc to run Apache Hadoop jobs to process each test
D. Google App Engine with Google Stackdriver for logging
Answer: (SHOW ANSWER)
https://cloud.google.com/compute/docs/instance-groups/
Google Compute Engine enables users to launch virtual machines (VMs) on demand. VMs can be
launched from the standard images or custom images created by users.
Managed instance groups offer autoscaling capabilities that allow you to automatically add or remove
instances from a managed instance group based on increases or decreases in load. Autoscaling helps
your applications gracefully handle increases in traffic and reduces cost when the need for resources is
lower.

NEW QUESTION: 79
Your marketing department wants to send out a promotional email campaign. The development team
wants to minimize direct operation management. They project a wide range of possible customer
responses, from 100 to 500,000 click-throughs per day. The link leads to a simple website that explains
the promotion and collects user information and preferences. Which infrastructure should you
recommend? (CHOOSE TWO)
A. Use Google App Engine to serve the website and Google Cloud Datastore to store user data.
B. Use a Google Container Engine cluster to serve the website and store data to persistent disk.
C. Use a managed instance group to serve the website and Google Cloud Bigtable to store user data.
D. Use a single compute Engine virtual machine (VM) to host a web server, backed by Google Cloud
SQL.
Answer: (SHOW ANSWER)
References: https://cloud.google.com/storage-options/

NEW QUESTION: 80
A. Have the vehicle's computer compress the data in hourly snapshots, and store it in a Google Cloud
Storage (GCS) Nearline bucket.
B. Push the telemetry data in Real-time to a streaming dataflow job that compresses the data, and store
it in Google BigQuery.
C. Push the telemetry data in real-time to a streaming dataflow job that compresses the data, and store it
in Cloud Bigtable.
D. Have the vehicle's computer compress the data in hourly snapshots, and store it in a GCS Coldline
bucket.
Answer: (SHOW ANSWER)
Coldline Storage is the best choice for data that you plan to access at most once a year, due to its
slightly lower availability, 90-day minimum storage duration, costs for data access, and higher
per-operation costs. For example:
Cold Data Storage - Infrequently accessed data, such as data stored for legal or regulatory reasons, can
be stored at low cost as Coldline Storage, and be available when you need it.
Disaster recovery - In the event of a disaster recovery event, recovery time is key. Cloud Storage
provides low latency access to data stored as Coldline Storage.
References: https://cloud.google.com/storage/docs/storage-classes

NEW QUESTION: 81
Your company has a stateless web API that performs scientific calculations. The web API runs on a
single Google Kubernetes Engine (GKE) cluster. The cluster is currently deployed in us-central1. Your
company has expanded to offer your API to customers in Asia. You want to reduce the latency for the
users in Asia. What should you do?
A. Use a global HTTP(s) load balancer with Cloud CDN enabled
B. Create a second GKE cluster in asia-southeast1, and expose both APIs using a Service of type
LoadBalancer. Add the public IPs to the Cloud DNS zone
C. Increase the memory and CPU allocated to the application in the cluster
D. Create a second GKE cluster in asia-southeast1, and use kubemci to create a global HTTP(s) load
balancer
Answer: (SHOW ANSWER)
https://cloud.google.com/kubernetes-engine/docs/concepts/multi-cluster-ingress#how_works
https://github.com/GoogleCloudPlatform/k8s-multicluster-ingress
https://cloud.google.com/blog/products/gcp/how-to-deploy-geographically-distributed-services-on-kubernetes-engine-with-kubemci
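The kubemci flow (option D) can be sketched roughly as below; the flags follow the kubemci quickstart, and the IP name, ingress file, and cluster contexts are illustrative:

```shell
# Reserve a global static IP for the multi-cluster ingress.
gcloud compute addresses create api-ip --global

# kubemci stitches the clusters' ingresses into one global HTTP(S)
# load balancer, so users in Asia are served from the nearby cluster.
kubemci create api-mci \
    --ingress=ingress.yaml \
    --gke-cluster-contexts=us-central1-cluster,asia-southeast1-cluster \
    --static-ip=api-ip
```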

NEW QUESTION: 82
You are migrating third-party applications from optimized on-premises virtual machines to Google Cloud.
You are unsure about the optimum CPU and memory options. The application has consistent usage
patterns across multiple weeks. You want to optimize resource usage for the lowest cost. What should
you do?
A. Create a Compute engine instance with CPU and Memory options similar to your application's current
on-premises virtual machine. Install the cloud monitoring agent, and deploy the third party application.
Run a load with normal traffic levels on third party application and follow the Rightsizing
Recommendations in the Cloud Console
B. Create an App Engine flexible environment, and deploy the third party application using a Docker file
and a custom runtime. Set CPU and memory options similar to your application's current on-premises
virtual machine in the app.yaml file.
C. Create an instance template with the smallest available machine type, and use an image of the third
party application taken from the current on-premises virtual machine. Create a managed instance group
that uses average CPU to autoscale the number of instances in the group. Modify the average CPU
utilization threshold to optimize the number of instances running.
D. Create multiple Compute Engine instances with varying CPU and memory options. Install the cloud
monitoring agent and deploy the third-party application on each of them. Run a load test with high traffic
levels on the application and use the results to determine the optimal settings.
Answer: (SHOW ANSWER)
https://cloud.google.com/migrate/compute-engine/docs/4.9/concepts/planning-a-migration/cloud-instance-rightsizing?hl=en
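After the instance has run under normal traffic for a while, the rightsizing suggestions can be read from the Recommender API as well as the Cloud Console; a sketch, with PROJECT_ID and the zone as placeholders:

```shell
# List the machine-type recommendations that the collected monitoring
# data produced for VMs in this zone.
gcloud recommender recommendations list \
    --project=PROJECT_ID \
    --location=us-central1-a \
    --recommender=google.compute.instance.MachineTypeRecommender
```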

NEW QUESTION: 83
You created a pipeline that can deploy your source code changes to your infrastructure in instance
groups for self-healing. One of the changes negatively affects your key performance indicator. You are
not sure how to fix it, and investigation could take up to a week. What should you do?
A. Revert the source code change and rerun the deployment pipeline
B. Log into the servers with the bad code change, and swap in the previous code
C. Log in to a server, and iterate a fix locally
D. Change the instance group template to the previous one, and delete all instances.
Answer: (SHOW ANSWER)

NEW QUESTION: 84
For this question, refer to the Helicopter Racing League (HRL) case study. The HRL development team
releases a new version of their predictive capability application every Tuesday evening at 3 a.m. UTC to
a repository. The security team at HRL has developed an in-house penetration test Cloud Function
called Airwolf.
The security team wants to run Airwolf against the predictive capability application as soon as it is
released every Tuesday. You need to set up Airwolf to run at the recurring weekly cadence. What should
you do?
A. Set up a Cloud Logging sink and a Cloud Storage bucket that triggers a Cloud Function.
B. Set up Identity and Access Management (IAM) and Confidential Computing to trigger a Cloud
Function.
C. Set up Cloud Tasks and a Cloud Storage bucket that triggers a Cloud Function.
D. Configure the deployment job to notify a Pub/Sub queue that triggers a Cloud Function.
Answer: (SHOW ANSWER)
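Option D needs only a topic and a Pub/Sub-triggered function; the release pipeline publishes to the topic as its last step. A sketch with illustrative names (the entry point and runtime are hypothetical):

```shell
# Topic the deployment job notifies when the Tuesday release lands.
gcloud pubsub topics create predictive-app-releases

# Deploy the security team's function so every release event runs it.
gcloud functions deploy airwolf \
    --runtime=python39 \
    --trigger-topic=predictive-app-releases \
    --entry-point=run_pentest
```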

NEW QUESTION: 85
You have an outage in your Compute Engine managed instance group: all instances keep restarting after
5 seconds. You have a health check configured, but autoscaling is disabled. Your colleague, who is a
Linux expert, offered to look into the issue. You need to make sure that he can access the VMs. What
should you do?
A. Grant your colleague the IAM role of project Viewer
B. Perform a rolling restart on the instance group
C. Disable the health check for the instance group. Add his SSH key to the project-wide SSH keys
D. Disable autoscaling for the instance group. Add his SSH key to the project-wide SSH Keys
Answer: (SHOW ANSWER)
https://cloud.google.com/compute/docs/instance-groups/autohealing-instances-in-migs Health checks
used for autohealing should be conservative so they don't preemptively delete and recreate your
instances. When an autohealer health check is too aggressive, the autohealer might mistake busy
instances for failed instances and unnecessarily restart them, reducing availability
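The two halves of option C can be sketched as follows; the group name, zone, and key file are illustrative, and note that the ssh-keys file must contain the full desired list of "USERNAME:KEY" entries because the metadata value is replaced as a whole:

```shell
# The autohealer is recreating the VMs faster than anyone can log in,
# so pause autohealing on the group first.
gcloud compute instance-groups managed update my-mig \
    --zone=us-central1-a \
    --clear-autohealing

# Then publish the colleague's public key project-wide.
gcloud compute project-info add-metadata \
    --metadata-from-file=ssh-keys=ssh-keys.txt
```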

NEW QUESTION: 86
For this question, refer to the Dress4Win case study.
As part of Dress4Win's plans to migrate to the cloud, they want to be able to set up a managed logging
and monitoring system so they can handle spikes in their traffic load. They want to ensure that:
* The infrastructure can be notified when it needs to scale up and down to handle the ebb and flow of
usage throughout the day
* Their administrators are notified automatically when their application reports errors.
* They can filter their aggregated logs down in order to debug one piece of the application across many
hosts
Which Google StackDriver features should they use?
A. Monitoring, Trace, Debug, Logging
B. Monitoring, Logging, Alerts, Error Reporting
C. Monitoring, Logging, Debug, Error Report
D. Logging, Alerts, Insights, Debug
Answer: (SHOW ANSWER)

NEW QUESTION: 87
Your team needs to create a Google Kubernetes Engine (GKE) cluster to host a newly built application
that requires access to third-party services on the internet. Your company does not allow any Compute
Engine instance to have a public IP address on Google Cloud. You need to create a deployment strategy
that adheres to these guidelines. What should you do?
A. Create a Compute Engine instance, and install a NAT Proxy on the instance. Configure all workloads
on GKE to pass through this proxy to access third-party services on the Internet
B. Configure the GKE cluster as a private cluster, and configure Cloud NAT Gateway for the cluster
subnet
C. Configure the GKE cluster as a route-based cluster. Configure Private Google Access on the Virtual
Private Cloud (VPC)
D. Configure the GKE cluster as a private cluster. Configure Private Google Access on the Virtual Private
Cloud (VPC)
Answer: (SHOW ANSWER)
A Cloud NAT gateway can perform NAT for nodes and Pods in a private cluster, which is a type of
VPC-native cluster. The Cloud NAT gateway must be configured to apply to at least the following subnet IP
address ranges for the subnet that your cluster uses:
Subnet primary IP address range (used by nodes)
Subnet secondary IP address range used for Pods in the cluster
Subnet secondary IP address range used for Services in the cluster
The simplest way to provide NAT for an entire private cluster is to configure a Cloud NAT gateway to
apply to all of the cluster's subnet's IP address ranges.
https://cloud.google.com/nat/docs/overview
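Option B can be sketched end to end as below; the cluster name, network, region, and CIDR are illustrative:

```shell
# Private cluster: nodes get internal IPs only, so no workload has a
# public address.
gcloud container clusters create private-cluster \
    --region=us-central1 \
    --enable-ip-alias \
    --enable-private-nodes \
    --master-ipv4-cidr=172.16.0.0/28

# Cloud Router plus a Cloud NAT gateway gives those nodes (and Pods)
# outbound internet access to the third-party services.
gcloud compute routers create nat-router \
    --network=default --region=us-central1
gcloud compute routers nats create nat-config \
    --router=nat-router --region=us-central1 \
    --nat-all-subnet-ip-ranges \
    --auto-allocate-nat-external-ips
```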

