Braindumps 2023-Jul-01 by Bancroft 316q Vce
https://www.exambible.com/SAA-C03-exam/ (316 Q&As)
Amazon-Web-Services
Exam Questions SAA-C03
AWS Certified Solutions Architect - Associate (SAA-C03)
About Exambible
Founded in 1998
Exambible is a company specializing in high-quality IT exam practice study materials, especially Cisco CCNA, CCDA,
CCNP, CCIE, Checkpoint CCSE, CompTIA A+, Network+ certification practice exams, and so on. We guarantee that
candidates will not only pass any IT exam on the first attempt but also gain a profound understanding of the certifications
they have earned. There are many similar companies in this industry; however, Exambible has unique advantages that
other companies cannot match.
Our Advantages
* 99.9% Uptime
All examinations will be up to date.
* 24/7 Quality Support
We will provide service round the clock.
* 100% Pass Rate
We guarantee that you will pass the exam.
* Unique Guarantee
If you do not pass the exam on the first attempt, we will not only arrange a FULL REFUND for you but also provide you with
another exam of your choice, ABSOLUTELY FREE!
NEW QUESTION 1
A company has two applications: a sender application that sends messages with payloads to be processed, and a processing application intended to receive the
messages with payloads. The company wants to implement an AWS service to handle messages between the two applications. The sender application can send
about 1,000 messages each hour. The messages may take up to 2 days to be processed. If the messages fail to process, they must be retained so that they do
not impact the processing of any remaining messages.
Which solution meets these requirements and is the MOST operationally efficient?
Answer: C
Explanation:
https://aws.amazon.com/blogs/compute/building-loosely-coupled-scalable-c-applications-with-amazon-sqs-and-
https://docs.aws.amazon.com/AWSSimpleQueueService/latest/SQSDeveloperGuide/sqs-dead-letter-queues.html
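The linked pages pair Amazon SQS with a dead-letter queue so that messages that fail to process are set aside instead of blocking the rest. A minimal boto3 sketch under those assumptions (queue names and the maxReceiveCount of 3 are hypothetical):

import json
import boto3

sqs = boto3.client("sqs")

# Dead-letter queue that retains failed messages for 14 days (the maximum).
dlq_url = sqs.create_queue(
    QueueName="payloads-dlq",
    Attributes={"MessageRetentionPeriod": "1209600"},  # 14 days, in seconds
)["QueueUrl"]
dlq_arn = sqs.get_queue_attributes(
    QueueUrl=dlq_url, AttributeNames=["QueueArn"]
)["Attributes"]["QueueArn"]

# Main queue: 4-day retention comfortably exceeds the 2-day processing
# window, and the redrive policy moves a message to the DLQ after 3 failed
# receives so it cannot hold up the remaining messages.
sqs.create_queue(
    QueueName="payloads",
    Attributes={
        "MessageRetentionPeriod": "345600",  # 4 days, in seconds
        "RedrivePolicy": json.dumps(
            {"deadLetterTargetArn": dlq_arn, "maxReceiveCount": "3"}
        ),
    },
)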
NEW QUESTION 2
A company runs an on-premises application that is powered by a MySQL database. The company is migrating the application to AWS to increase the application's
elasticity and availability.
The current architecture shows heavy read activity on the database during times of normal operation. Every 4 hours, the company's development team pulls a full
export of the production database to populate a database in the staging environment. During this period, users experience unacceptable application latency. The
development team is unable to use the staging environment until the procedure completes.
A solutions architect must recommend a replacement architecture that alleviates the application latency issue. The replacement architecture also must give the
development team the ability to continue using the staging environment without delay.
Which solution meets these requirements?
A. Use Amazon Aurora MySQL with Multi-AZ Aurora Replicas for production. Populate the staging database by implementing a backup and restore process that uses the mysqldump utility.
B. Use Amazon Aurora MySQL with Multi-AZ Aurora Replicas for production. Use database cloning to create the staging database on-demand.
C. Use Amazon RDS for MySQL with a Multi-AZ deployment and read replicas for production. Use the standby instance for the staging database.
D. Use Amazon RDS for MySQL with a Multi-AZ deployment and read replicas for production. Populate the staging database by implementing a backup and restore process that uses the mysqldump utility.
Answer: B
NEW QUESTION 3
A company has created an image analysis application in which users can upload photos and add photo frames to their images. The users upload images and
metadata to indicate which photo frames they want to add to their images. The application uses a single Amazon EC2 instance and Amazon DynamoDB to store
the metadata.
The application is becoming more popular, and the number of users is increasing. The company expects the number of concurrent users to vary significantly
depending on the time of day and day of week. The company must ensure that the application can scale to meet the needs of the growing user base.
Which solution meets these requirements?
Answer: A
NEW QUESTION 4
An application runs on an Amazon EC2 instance in a VPC. The application processes logs that are stored in an Amazon S3 bucket. The EC2 instance needs to
access the S3 bucket without connectivity to the internet.
Which solution will provide private network connectivity to Amazon S3?
Answer: A
NEW QUESTION 5
A company is migrating applications to AWS. The applications are deployed in different accounts. The company manages the accounts centrally by using AWS
Organizations. The company's security team needs a single sign-on (SSO) solution across all the company's accounts. The company must continue managing the
users and groups in its on-premises self-managed Microsoft Active Directory.
Which solution will meet these requirements?
A. Enable AWS Single Sign-On (AWS SSO) from the AWS SSO console. Create a one-way forest trust or a one-way domain trust to connect the company's self-managed Microsoft Active Directory with AWS SSO by using AWS Directory Service for Microsoft Active Directory.
B. Enable AWS Single Sign-On (AWS SSO) from the AWS SSO console. Create a two-way forest trust to connect the company's self-managed Microsoft Active Directory with AWS SSO by using AWS Directory Service for Microsoft Active Directory.
C. Use AWS Directory Service. Create a two-way trust relationship with the company's self-managed Microsoft Active Directory.
D. Deploy an identity provider (IdP) on premises. Enable AWS Single Sign-On (AWS SSO) from the AWS SSO console.
Answer: A
NEW QUESTION 6
A company runs its two-tier ecommerce website on AWS. The web tier consists of a load balancer that sends traffic to Amazon EC2 instances. The database tier
uses an Amazon RDS DB instance. The EC2 instances and the RDS DB instance should not be exposed to the public internet. The EC2 instances require internet
access to complete payment processing of orders through a third-party web service. The application must be highly available.
Which combination of configuration options will meet these requirements? (Choose two.)
A. Use an Auto Scaling group to launch the EC2 instances in private subnets. Deploy an RDS Multi-AZ DB instance in private subnets.
B. Configure a VPC with two private subnets and two NAT gateways across two Availability Zones. Deploy an Application Load Balancer in the private subnets.
C. Use an Auto Scaling group to launch the EC2 instances in public subnets across two Availability Zones. Deploy an RDS Multi-AZ DB instance in private subnets.
D. Configure a VPC with one public subnet, one private subnet, and two NAT gateways across two Availability Zones. Deploy an Application Load Balancer in the public subnet.
E. Configure a VPC with two public subnets, two private subnets, and two NAT gateways across two Availability Zones. Deploy an Application Load Balancer in the public subnets.
Answer: AE
Explanation:
Before you begin: Decide which two Availability Zones you will use for your EC2 instances. Configure your
virtual private cloud (VPC) with at least one public subnet in each of these Availability Zones. These public subnets are used to configure the load balancer. You
can launch your EC2 instances in other subnets of these Availability Zones instead.
NEW QUESTION 7
A company maintains a searchable repository of items on its website. The data is stored in an Amazon RDS for MySQL database table that contains more than 10
million rows. The database has 2 TB of General Purpose SSD storage. There are millions of updates against this data every day through the company's website.
The company has noticed that some insert operations are taking 10 seconds or longer. The company has determined that the database storage performance is the
problem.
Which solution addresses this performance issue?
Answer: A
Explanation:
https://aws.amazon.com/ebs/features/
"Provisioned IOPS volumes are backed by solid-state drives (SSDs) and are the highest performance EBS volumes designed for your critical, I/O intensive
database applications. These volumes are ideal for both IOPS-intensive and throughput-intensive workloads that require extremely low latency."
NEW QUESTION 8
A company wants to migrate its on-premises data center to AWS. According to the company's compliance requirements, the company can use only the ap-
northeast-3 Region. Company administrators are not permitted to connect VPCs to the internet.
Which solutions will meet these requirements? (Choose two.)
A. Use AWS Control Tower to implement data residency guardrails to deny internet access and deny access to all AWS Regions except ap-northeast-3.
B. Use rules in AWS WAF to prevent internet access. Deny access to all AWS Regions except ap-northeast-3 in the AWS account settings.
C. Use AWS Organizations to configure service control policies (SCPs) that prevent VPCs from gaining internet access. Deny access to all AWS Regions except ap-northeast-3.
D. Create an outbound rule for the network ACL in each VPC to deny all traffic from 0.0.0.0/0. Create an IAM policy for each user to prevent the use of any AWS Region other than ap-northeast-3.
E. Use AWS Config to activate managed rules to detect and alert for internet gateways and to detect and alert for new resources deployed outside of ap-northeast-3.
Answer: AC
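For illustration, a service control policy along the lines of option C could be created as in the following hedged sketch (the policy name is hypothetical, and a production SCP would normally exempt global services such as IAM):

import json
import boto3

org = boto3.client("organizations")

scp = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "DenyAllOutsideApNortheast3",
            "Effect": "Deny",
            "Action": "*",
            "Resource": "*",
            "Condition": {
                "StringNotEquals": {"aws:RequestedRegion": "ap-northeast-3"}
            },
        }
    ],
}

org.create_policy(
    Name="restrict-to-ap-northeast-3",  # hypothetical policy name
    Description="Deny use of all Regions except ap-northeast-3",
    Type="SERVICE_CONTROL_POLICY",
    Content=json.dumps(scp),
)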
NEW QUESTION 9
A company hosts a marketing website in an on-premises data center. The website consists of static documents and runs on a single server. An administrator
updates the website content infrequently and uses an SFTP client to upload new documents.
The company decides to host its website on AWS and to use Amazon CloudFront. The company's solutions architect creates a CloudFront distribution. The
solutions architect must design the most cost-effective and resilient architecture for website hosting to serve as the CloudFront origin.
Which solution will meet these requirements?
Answer: D
NEW QUESTION 10
A company wants to manage Amazon Machine Images (AMIs). The company currently copies AMIs to the same AWS Region where the AMIs were created. The
company needs to design an application that captures AWS API calls and sends alerts whenever the Amazon EC2 CreateImage API operation is called within the
company's account.
Which solution will meet these requirements with the LEAST operational overhead?
A. Create an AWS Lambda function to query AWS CloudTrail logs and to send an alert when a CreateImage API call is detected.
B. Configure AWS CloudTrail with an Amazon Simple Notification Service (Amazon SNS) notification that occurs when updated logs are sent to Amazon S3. Use Amazon Athena to create a new table and to query on CreateImage when an API call is detected.
C. Create an Amazon EventBridge (Amazon CloudWatch Events) rule for the CreateImage API call. Configure the target as an Amazon Simple Notification Service (Amazon SNS) topic to send an alert when a CreateImage API call is detected.
D. Configure an Amazon Simple Queue Service (Amazon SQS) FIFO queue as a target for AWS CloudTrail logs. Create an AWS Lambda function to send an alert to an Amazon Simple Notification Service (Amazon SNS) topic when a CreateImage API call is detected.
Answer: B
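As a hedged sketch of the EventBridge approach described in option C (the rule name and SNS topic ARN are hypothetical; the topic's resource policy must also allow events.amazonaws.com to publish):

import json
import boto3

events = boto3.client("events")

# Match the EC2 CreateImage call as recorded by CloudTrail.
events.put_rule(
    Name="alert-on-create-image",  # hypothetical rule name
    EventPattern=json.dumps({
        "source": ["aws.ec2"],
        "detail-type": ["AWS API Call via CloudTrail"],
        "detail": {
            "eventSource": ["ec2.amazonaws.com"],
            "eventName": ["CreateImage"],
        },
    }),
)

# Send matching events to an SNS topic that alerts the administrators.
events.put_targets(
    Rule="alert-on-create-image",
    Targets=[{"Id": "sns-alert",
              "Arn": "arn:aws:sns:us-east-1:111122223333:ami-alerts"}],
)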
NEW QUESTION 10
A company is implementing a shared storage solution for a media application that is hosted in the AWS Cloud. The company needs the ability to use SMB clients to
access data. The solution must be fully managed.
Which AWS solution meets these requirements?
Answer: D
NEW QUESTION 15
A company collects temperature, humidity, and atmospheric pressure data in cities across multiple
continents. The average volume of data collected per site each day is 500 GB. Each site has a high-speed
internet connection. The company's weather forecasting applications are based in a single Region and analyze the data daily.
What is the FASTEST way to aggregate data from all of these global sites?
Answer: A
Explanation:
You might want to use Transfer Acceleration on a bucket for various reasons, including the following:
You have customers that upload to a centralized bucket from all over the world.
You transfer gigabytes to terabytes of data on a regular basis across continents.
You are unable to utilize all of your available bandwidth over the Internet when uploading to Amazon
S3.
https://docs.aws.amazon.com/AmazonS3/latest/dev/transfer-acceleration.html
https://aws.amazon.com/s3/transfer-acceleration/
"Amazon S3 Transfer Acceleration can speed up content transfers to and from Amazon S3 by as much
as 50-500% for long-distance transfer of larger objects. Customers who have either web or mobile
applications with widespread users or applications hosted far away from their S3 bucket can experience long and variable upload and download speeds over the
Internet"
https://docs.aws.amazon.com/AmazonS3/latest/userguide/mpuoverview.html
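A minimal boto3 sketch of enabling Transfer Acceleration and uploading through the accelerated endpoint (bucket and file names are hypothetical; upload_file also switches to multipart upload automatically for large objects, which is what the last link above describes):

import boto3
from botocore.config import Config

s3 = boto3.client("s3")

# One-time: enable Transfer Acceleration on the central bucket.
s3.put_bucket_accelerate_configuration(
    Bucket="weather-aggregation",  # hypothetical bucket name
    AccelerateConfiguration={"Status": "Enabled"},
)

# Each site then uploads through the accelerated endpoint.
s3_accelerated = boto3.client(
    "s3", config=Config(s3={"use_accelerate_endpoint": True})
)
s3_accelerated.upload_file("site-data.bin", "weather-aggregation", "site-1/data.bin")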
NEW QUESTION 19
The management account has an Amazon S3 bucket that contains project reports. The company
wants to limit access to this S3 bucket to only users of accounts within the organization in AWS
Organizations.
Which solution meets these requirements with the LEAST amount of operational overhead?
A. Add the aws:PrincipalOrgID global condition key with a reference to the organization ID to the S3 bucket policy.
B. Create an organizational unit (OU) for each department. Add the aws:PrincipalOrgPaths global condition key to the S3 bucket policy.
C. Use AWS CloudTrail to monitor the CreateAccount, InviteAccountToOrganization, LeaveOrganization, and RemoveAccountFromOrganization events. Update the S3 bucket policy accordingly.
D. Tag each user that needs access to the S3 bucket. Add the aws:PrincipalTag global condition key to the S3 bucket policy.
Answer: A
Explanation:
https://aws.amazon.com/blogs/security/control-access-to-aws-resources-by-using-the-aws-organization-of-iam-principals/
The aws:PrincipalOrgID global key provides an alternative to listing all the account IDs for all AWS
accounts in an organization. For example, the following Amazon S3 bucket policy allows members of
any account in the XXX organization to add an object into the examtopics bucket.
{"Version": "2020-09-10",
"Statement": {
"Sid": "AllowPutObject",
"Effect": "Allow",
"Principal": "*",
"Action": "s3:PutObject",
"Resource": "arn:aws:s3:::examtopics/*",
"Condition": {"StringEquals":
{"aws:PrincipalOrgID":["XXX"]}}}}
https://docs.aws.amazon.com/IAM/latest/UserGuide/reference_policies_condition-keys.html
NEW QUESTION 22
A company is hosting a web application on AWS using a single Amazon EC2 instance that stores user-uploaded documents in an Amazon EBS volume. For better
scalability and availability, the company duplicated the architecture and created a second EC2 instance and EBS volume in another Availability Zone, placing both
behind an Application Load Balancer. After completing this change, users reported that, each time they refreshed the website, they could see one subset of their
documents or the other, but never all of the documents at the same time.
What should a solutions architect propose to ensure users see all of their documents at once?
A. Copy the data so both EBS volumes contain all the documents.
B. Configure the Application Load Balancer to direct a user to the server with the documents.
C. Copy the data from both EBS volumes to Amazon EFS. Modify the application to save new documents to Amazon EFS.
D. Configure the Application Load Balancer to send the request to both servers. Return each document from the correct server.
Answer: C
Explanation:
Amazon EFS provides file storage in the AWS Cloud. With Amazon EFS, you can create a file system,
mount the file system on an Amazon EC2 instance, and then read and write data to and from your file
system. You can mount an Amazon EFS file system in your VPC, through the Network File System
versions 4.0 and 4.1 (NFSv4) protocol. We recommend using a current generation Linux NFSv4.1 client, such as those found in the latest Amazon Linux, Red Hat,
and Ubuntu AMIs, in conjunction with the Amazon EFS mount helper. For instructions, see Using the amazon-efs-utils
Tools.
For a list of Amazon EC2 Linux Amazon Machine Images (AMIs) that support this protocol, see NFS
Support. For some AMIs, you'll need to install an NFS client to mount your file system on your
Amazon EC2 instance. For instructions, see Installing the NFS Client.
You can access your Amazon EFS file system concurrently from multiple NFS clients, so applications
that scale beyond a single connection can access a file system. Amazon EC2 instances running in
multiple Availability Zones within the same AWS Region can access the file system, so that many
users can access and share a common data source.
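A minimal boto3 sketch of that shared-EFS setup (subnet and security group IDs are hypothetical):

import boto3

efs = boto3.client("efs")

fs_id = efs.create_file_system(
    CreationToken="shared-documents",  # hypothetical idempotency token
    PerformanceMode="generalPurpose",
)["FileSystemId"]

# One mount target per Availability Zone used by the EC2 instances; the
# security group must allow NFS (TCP 2049) from the instances.
for subnet_id in ["subnet-aaaa1111", "subnet-bbbb2222"]:
    efs.create_mount_target(
        FileSystemId=fs_id,
        SubnetId=subnet_id,
        SecurityGroups=["sg-0123456789abcdef0"],
    )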
NEW QUESTION 23
A company observes an increase in Amazon EC2 costs in its most recent bill.
The billing team notices unwanted vertical scaling of instance types for a couple of EC2 instances.
A solutions architect needs to create a graph comparing the last 2 months of EC2 costs and perform an in-depth analysis to identify the root cause of the vertical
scaling.
How should the solutions architect generate the information with the LEAST operational overhead?
A. Use AWS Budgets to create a budget report and compare EC2 costs based on instance types
B. Use Cost Explorer's granular filtering feature to perform an in-depth analysis of EC2 costs based on instance types
C. Use graphs from the AWS Billing and Cost Management dashboard to compare EC2 costs based on instance types for the last 2 months
D. Use AWS Cost and Usage Reports to create a report and send it to an Amazon S3 bucket. Use Amazon QuickSight with Amazon S3 as a source to generate an in-depth analysis based on instance types.
Answer: B
Explanation:
AWS Cost Explorer is a tool that enables you to view and analyze your costs and usage. You can explore your usage and costs using the main graph, the Cost
Explorer cost and usage reports, or the Cost Explorer RI reports. You can view data for up to the last 12 months, forecast how much you're likely to spend for the
next 12 months, and get recommendations for what Reserved Instances to purchase. You can use Cost Explorer to identify areas that need further inquiry and see
trends that you can use to understand your costs. https://docs.aws.amazon.com/cost-management/latest/userguide/ce-what-is.html
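For comparison, a hedged sketch of the same analysis through the Cost Explorer API (the dates are illustrative; in practice the Cost Explorer console's filtering is what keeps the operational overhead low, as option B notes):

import boto3

ce = boto3.client("ce")

response = ce.get_cost_and_usage(
    TimePeriod={"Start": "2023-05-01", "End": "2023-07-01"},
    Granularity="MONTHLY",
    Metrics=["UnblendedCost"],
    Filter={"Dimensions": {
        "Key": "SERVICE",
        "Values": ["Amazon Elastic Compute Cloud - Compute"],
    }},
    GroupBy=[{"Type": "DIMENSION", "Key": "INSTANCE_TYPE"}],
)

# Print month, instance type, and cost for the last 2 months.
for period in response["ResultsByTime"]:
    for group in period["Groups"]:
        print(period["TimePeriod"]["Start"], group["Keys"],
              group["Metrics"]["UnblendedCost"]["Amount"])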
NEW QUESTION 27
A company is designing an application. The application uses an AWS Lambda function to receive information through Amazon API Gateway and to store the
information in an Amazon Aurora PostgreSQL database.
During the proof-of-concept stage, the company has to increase the Lambda quotas significantly to handle the high volumes of data that the company needs to
load into the database. A solutions architect must recommend a new design to improve scalability and minimize the configuration effort.
Which solution will meet these requirements?
A. Refactor the Lambda function code to Apache Tomcat code that runs on Amazon EC2 instances. Connect the database by using native Java Database Connectivity (JDBC) drivers.
B. Change the platform from Aurora to Amazon DynamoDB. Provision a DynamoDB Accelerator (DAX) cluster. Use the DAX client SDK to point the existing DynamoDB API calls at the DAX cluster.
C. Set up two Lambda functions. Configure one function to receive the information. Configure the other function to load the information into the database. Integrate the Lambda functions by using Amazon Simple Notification Service (Amazon SNS).
D. Set up two Lambda functions. Configure one function to receive the information. Configure the other function to load the information into the database. Integrate the Lambda functions by using an Amazon Simple Queue Service (Amazon SQS) queue.
Answer: D
Explanation:
Bottlenecks can be avoided with a queue (Amazon SQS): the first function accepts requests quickly, and the second function drains the queue into the database at a sustainable rate.
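A hedged sketch of option D's decoupling (the queue URL, queue ARN, and function name are hypothetical):

import json
import boto3

sqs = boto3.client("sqs")
lambda_client = boto3.client("lambda")

# Inside the receiving Lambda function: forward each payload to the queue.
def handler(event, context):
    sqs.send_message(
        QueueUrl="https://sqs.us-east-1.amazonaws.com/111122223333/ingest",
        MessageBody=json.dumps(event),
    )

# One-time setup: let the loader function poll the queue in small batches,
# which throttles the writes into Aurora.
lambda_client.create_event_source_mapping(
    EventSourceArn="arn:aws:sqs:us-east-1:111122223333:ingest",
    FunctionName="load-into-aurora",
    BatchSize=10,
)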
NEW QUESTION 31
A company needs to review its AWS Cloud deployment to ensure that its Amazon S3 buckets do not have unauthorized configuration changes.
What should a solutions architect do to accomplish this goal?
Answer: A
NEW QUESTION 34
A company is launching a new application and will display application metrics on an Amazon CloudWatch dashboard. The company’s product manager needs to
access this dashboard periodically. The product manager does not have an AWS account. A solution architect must provide access to the product manager by
following the principle of least privilege.
Which solution will meet these requirements?
Answer: A
NEW QUESTION 35
A company provides a Voice over Internet Protocol (VoIP) service that uses UDP connections. The service consists of Amazon EC2 instances that run in an Auto
Scaling group. The company has deployments across multiple AWS Regions.
The company needs to route users to the Region with the lowest latency. The company also needs automated failover between Regions.
Which solution will meet these requirements?
Answer: C
NEW QUESTION 38
A development team needs to host a website that will be accessed by other teams. The website contents consist of HTML, CSS, client-side JavaScript, and
images. Which method is the MOST cost-effective for hosting the website?
Answer: B
Explanation:
In Static Websites, Web pages are returned by the server which are prebuilt.
They use simple languages such as HTML, CSS, or JavaScript.
There is no processing of content on the server in Static Websites. Web pages are returned by the server unchanged; therefore, static
websites are fast.
There is no interaction with databases.
Also, they are less costly as the host does not need to support server-side processing with different languages.
============
In Dynamic Websites, web pages are processed by the server at runtime: they are not prebuilt web pages but are built at runtime according to the
user's demand.
These use server-side scripting languages such as PHP, Node.js, ASP.NET and many more supported by the server.
So, they are slower than static websites but updates and interaction with databases are possible.
NEW QUESTION 40
A company runs an online marketplace web application on AWS. The application serves hundreds of thousands of users during peak hours. The company needs a
scalable, near-real-time solution to share the details of millions of financial transactions with several other internal applications. Transactions also need to be
processed to remove sensitive data before being stored in a document database for low-latency retrieval.
What should a solutions architect recommend to meet these requirements?
A. Store the transactions data into Amazon DynamoDB. Set up a rule in DynamoDB to remove sensitive data from every transaction upon write. Use DynamoDB Streams to share the transactions data with other applications.
B. Stream the transactions data into Amazon Kinesis Data Firehose to store data in Amazon DynamoDB and Amazon S3. Use AWS Lambda integration with Kinesis Data Firehose to remove sensitive data. Other applications can consume the data stored in Amazon S3.
C. Stream the transactions data into Amazon Kinesis Data Streams. Use AWS Lambda integration to remove sensitive data from every transaction and then store the transactions data in Amazon DynamoDB. Other applications can consume the transactions data off the Kinesis data stream.
D. Store the batched transactions data in Amazon S3 as files. Use AWS Lambda to process every file and remove sensitive data before updating the files in Amazon S3. The Lambda function then stores the data in Amazon DynamoDB. Other applications can consume transaction files stored in Amazon S3.
Answer: C
Explanation:
Kinesis Data Firehose can send data records to various destinations, including Amazon Simple
Storage Service (Amazon S3), Amazon Redshift, Amazon OpenSearch Service,
and any HTTP endpoint that is owned by you or any of your third-party service providers. The following are the supported destinations:
* Amazon OpenSearch Service
* Amazon S3
* Datadog
* Dynatrace
* Honeycomb
* HTTP Endpoint
* Logic Monitor
* MongoDB Cloud
* New Relic
* Splunk
* Sumo Logic
https://docs.aws.amazon.com/firehose/latest/dev/create-name.html
https://aws.amazon.com/kinesis/data-streams/
Amazon Kinesis Data Streams (KDS) is a massively scalable and durable real-time data streaming service. KDS can continuously capture gigabytes of data per
second from hundreds of thousands of sources such as website clickstreams, database event streams, financial transactions, social media feeds, IT logs, and
location-tracking events.
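For illustration, a producer-side sketch of option C (the stream name and record fields are hypothetical; a Lambda consumer would remove the sensitive fields before writing to DynamoDB):

import json
import boto3

kinesis = boto3.client("kinesis")

transaction = {"transaction_id": "tx-12345", "amount": "42.50", "card_number": "XXXX"}

# Partitioning by transaction ID preserves per-transaction ordering.
kinesis.put_record(
    StreamName="transactions",
    Data=json.dumps(transaction).encode("utf-8"),
    PartitionKey=transaction["transaction_id"],
)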
NEW QUESTION 41
A company is preparing to launch a public-facing web application in the AWS Cloud. The architecture consists of Amazon EC2 instances within a VPC behind an
Elastic Load Balancer (ELB). A third-party service is used for the DNS. The company's solutions architect must recommend a solution to detect and protect against
large-scale DDoS attacks.
Which solution meets these requirements?
Answer: D
NEW QUESTION 45
A company runs a highly available image-processing application on Amazon EC2 instances in a single VPC. The EC2 instances run inside several subnets across
multiple Availability Zones. The EC2 instances do not communicate with each other. However, the EC2 instances download images from Amazon S3 and upload
images to Amazon S3 through a single NAT gateway. The company is concerned about data transfer charges.
What is the MOST cost-effective way for the company to avoid Regional data transfer charges?
Answer: C
NEW QUESTION 48
A company has an on-premises application that generates a large amount of time-sensitive data that is backed up to Amazon S3. The application has grown, and
there are user complaints about internet bandwidth limitations. A solutions architect needs to design a long-term solution that allows for both timely backups to
Amazon S3 and minimal impact on internet connectivity for internal users.
Which solution meets these requirements?
A. Establish AWS VPN connections and proxy all traffic through a VPC gateway endpoint.
B. Establish a new AWS Direct Connect connection and direct backup traffic through this new connection.
C. Order daily AWS Snowball devices. Load the data onto the Snowball devices and return the devices to AWS each day.
D. Submit a support ticket through the AWS Management Console. Request the removal of S3 service limits from the account.
Answer: B
NEW QUESTION 51
A company has an Amazon S3 bucket that contains critical data. The company must protect the data from accidental deletion.
Which combination of steps should a solutions architect take to meet these requirements? (Choose two.)
Answer: AB
NEW QUESTION 52
A company has a data ingestion workflow that consists of the following:
An Amazon Simple Notification Service (Amazon SNS) topic for notifications about new data deliveries
An AWS Lambda function to process the data and record metadata
The company observes that the ingestion workflow fails occasionally because of network connectivity issues. When such a failure occurs, the Lambda
function does not ingest the corresponding data unless the company manually reruns the job.
Which combination of actions should a solutions architect take to ensure that the Lambda function ingests all data in the future? (Select TWO.)
Answer: BE
NEW QUESTION 54
A company has an application that provides marketing services to stores. The services are based on previous purchases by store customers. The stores upload
transaction data to the company through SFTP, and the data is processed and analyzed to generate new marketing offers. Some of the files can exceed 200 GB in
size.
Recently, the company discovered that some of the stores have uploaded files that contain personally identifiable information (PII) that should not have been
included. The company wants administrators to be alerted if PII is shared again.
The company also wants to automate remediation.
What should a solutions architect do to meet these requirements with the LEAST development effort?
Answer: B
NEW QUESTION 59
A company has a production workload that runs on 1,000 Amazon EC2 Linux instances. The workload is powered by third-party software. The company needs to
patch the third-party software on all EC2 instances as quickly as possible to remediate a critical security vulnerability.
What should a solutions architect do to meet these requirements?
A. Create an AWS Lambda function to apply the patch to all EC2 instances.
B. Configure AWS Systems Manager Patch Manager to apply the patch to all EC2 instances.
C. Schedule an AWS Systems Manager maintenance window to apply the patch to all EC2 instances.
D. Use AWS Systems Manager Run Command to run a custom command that applies the patch to all EC2 instances.
Answer: D
NEW QUESTION 62
A company wants to migrate its on-premises application to AWS. The application produces output files that vary in size from tens of gigabytes to hundreds of
terabytes. The application data must be stored in a standard file system structure.
The company wants a solution that scales automatically, is highly available, and requires minimum operational overhead.
Which solution will meet these requirements?
A. Migrate the application to run as containers on Amazon Elastic Container Service (Amazon ECS). Use Amazon S3 for storage.
B. Migrate the application to run as containers on Amazon Elastic Kubernetes Service (Amazon EKS). Use Amazon Elastic Block Store (Amazon EBS) for storage.
C. Migrate the application to Amazon EC2 instances in a Multi-AZ Auto Scaling group. Use Amazon Elastic File System (Amazon EFS) for storage.
D. Migrate the application to Amazon EC2 instances in a Multi-AZ Auto Scaling group. Use Amazon Elastic Block Store (Amazon EBS) for storage.
Answer: C
NEW QUESTION 66
A company is preparing to store confidential data in Amazon S3. For compliance reasons, the data must be encrypted at rest. Encryption key usage must be logged
for auditing purposes. Keys must be rotated every year.
Which solution meets these requirements and is the MOST operationally efficient?
Answer: D
Explanation:
https://docs.aws.amazon.com/kms/latest/developerguide/rotate-keys.html
When you enable automatic key rotation for a customer managed key, AWS KMS generates new cryptographic material for the KMS key every year. AWS KMS
also saves the KMS key's older cryptographic material in perpetuity so it can be used to decrypt data that the KMS key encrypted.
Key rotation in AWS KMS is a cryptographic best practice that is designed to be transparent and easy to use.
AWS KMS supports optional automatic key rotation only for customer managed CMKs. Enable and disable key rotation. Automatic key rotation is disabled by
default on customer managed CMKs. When you enable (or re-enable) key rotation, AWS KMS automatically rotates the CMK 365 days after the enable date and
every 365 days thereafter.
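A minimal boto3 sketch of enabling annual rotation on a customer managed key (the key ID is hypothetical; the usage logging itself comes from AWS CloudTrail, which records AWS KMS API calls):

import boto3

kms = boto3.client("kms")

key_id = "1234abcd-12ab-34cd-56ef-1234567890ab"  # hypothetical key ID
kms.enable_key_rotation(KeyId=key_id)

# Confirm the setting; rotation then happens every 365 days.
status = kms.get_key_rotation_status(KeyId=key_id)
print(status["KeyRotationEnabled"])  # True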
NEW QUESTION 69
A company has more than 5 TB of file data on Windows file servers that run on premises. Users and applications interact with the data each day.
The company is moving its Windows workloads to AWS. As the company continues this process, the company requires access to AWS and on-premises file
storage with minimum latency. The company needs a solution that minimizes operational overhead and requires no significant changes to the existing file access
patterns. The company uses an AWS Site-to-Site VPN connection for connectivity to AWS.
What should a solutions architect do to meet these requirements?
Answer: D
NEW QUESTION 71
A solutions architect is designing the cloud architecture for a new application being deployed on AWS. The process should run in parallel while adding and
removing application nodes as needed based on the number of jobs to be processed. The processor application is stateless. The solutions architect must ensure
that the application is loosely coupled and the job items are durably stored.
Which design should the solutions architect use?
A. Create an Amazon SNS topic to send the jobs that need to be processed. Create an Amazon Machine Image (AMI) that consists of the processor application. Create a launch configuration that uses the AMI. Create an Auto Scaling group using the launch configuration. Set the scaling policy for the Auto Scaling group to add and remove nodes based on CPU usage.
B. Create an Amazon SQS queue to hold the jobs that need to be processed. Create an Amazon Machine Image (AMI) that consists of the processor application. Create a launch configuration that uses the AMI. Create an Auto Scaling group using the launch configuration. Set the scaling policy for the Auto Scaling group to add and remove nodes based on network usage.
C. Create an Amazon SQS queue to hold the jobs that need to be processed. Create an Amazon Machine Image (AMI) that consists of the processor application. Create a launch template that uses the AMI. Create an Auto Scaling group using the launch template. Set the scaling policy for the Auto Scaling group to add and remove nodes based on the number of items in the SQS queue.
D. Create an Amazon SNS topic to send the jobs that need to be processed. Create an Amazon Machine Image (AMI) that consists of the processor application. Create a launch template that uses the AMI. Create an Auto Scaling group using the launch template. Set the scaling policy for the Auto Scaling group to add and remove nodes based on the number of messages published to the SNS topic.
Answer: C
Explanation:
"Create an Amazon SQS queue to hold the jobs that need to be processed. Create an Amazon EC2 Auto Scaling group for the compute application. Set the
scaling policy for the Auto Scaling group to add and remove nodes based on the number of items in the SQS queue."
In this case we need to find a durable and loosely coupled solution for storing jobs. Amazon SQS is ideal for this use case and can be configured to use dynamic
scaling based on the number of jobs waiting in the queue. To configure this scaling, you can use the backlog-per-instance metric, with the target value being the
acceptable backlog per instance to maintain. To calculate your backlog per instance, start with the ApproximateNumberOfMessages queue attribute to determine
the length of the SQS queue, and divide that number by the fleet's running capacity.
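A minimal sketch of that calculation (the queue URL and Auto Scaling group name are hypothetical); the result would typically be published as a CloudWatch custom metric that a target tracking scaling policy keeps at the acceptable value:

import boto3

sqs = boto3.client("sqs")
autoscaling = boto3.client("autoscaling")

queue_length = int(sqs.get_queue_attributes(
    QueueUrl="https://sqs.us-east-1.amazonaws.com/111122223333/jobs",
    AttributeNames=["ApproximateNumberOfMessages"],
)["Attributes"]["ApproximateNumberOfMessages"])

group = autoscaling.describe_auto_scaling_groups(
    AutoScalingGroupNames=["job-processors"]
)["AutoScalingGroups"][0]
in_service = sum(1 for i in group["Instances"]
                 if i["LifecycleState"] == "InService")

backlog_per_instance = queue_length / max(in_service, 1)
print(backlog_per_instance)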
NEW QUESTION 73
A company is running a high performance computing (HPC) workload on AWS across many Linux-based Amazon EC2 instances. The company needs a shared
storage system that is capable of sub-millisecond latencies, hundreds of Gbps of throughput, and millions of IOPS. Users will store millions of small files.
Which solution meets these requirements?
A. Create an Amazon Elastic File System (Amazon EFS) file system. Mount the file system on each of the EC2 instances.
B. Create an Amazon S3 bucket. Mount the S3 bucket on each of the EC2 instances.
C. Ensure that the EC2 instances are Amazon Elastic Block Store (Amazon EBS) optimized. Mount Provisioned IOPS SSD (io2) EBS volumes with Multi-Attach on each instance.
D. Create an Amazon FSx for Lustre file system. Mount the file system on each of the EC2 instances.
Answer: D
NEW QUESTION 78
A rapidly growing ecommerce company is running its workloads in a single AWS Region. A solutions architect must create a disaster recovery (DR) strategy that
includes a different AWS Region. The company wants its database to be up to date in the DR Region with the least possible latency. The remaining infrastructure
in the DR Region needs to run at reduced capacity and must be able to scale up if necessary.
Which solution will meet these requirements with the LOWEST recovery time objective (RTO)?
Answer: B
NEW QUESTION 82
A company is expecting rapid growth in the near future. A solutions architect needs to configure existing users and grant permissions to new users on AWS. The
solutions architect has decided to create IAM groups. The solutions architect will add the new users to IAM groups based on department.
Which additional action is the MOST secure way to grant permissions to the new users?
Answer: C
NEW QUESTION 85
A company hosts its product information webpages on AWS. The existing solution uses multiple Amazon EC2 instances behind an Application Load Balancer in an
Auto Scaling group. The website also uses a custom DNS name and communicates with HTTPS only, using a dedicated SSL certificate. The company is planning a
new product launch and wants to be sure that users from around the world have the best possible experience on the new website.
What should a solutions architect do to meet these requirements?
Answer: A
Explanation:
CloudFront can help provide the best experience for global users. CloudFront integrates seamlessly with ALB and provides an option to use a custom DNS name
and a dedicated SSL certificate.
NEW QUESTION 90
A company is planning to build a high performance computing (HPC) workload as a service solution that is hosted on AWS. A group of 16 Amazon EC2 Linux
instances requires the lowest possible latency for node-to-node communication. The instances also need a shared block device volume for high-performing
storage.
Which solution will meet these requirements?
Answer: A
NEW QUESTION 94
A company wants to use the AWS Cloud to make an existing application highly available and resilient. The current version of the application resides in the
company's data center. The application recently experienced data loss after a database server crashed because of an unexpected power outage.
The company needs a solution that avoids any single points of failure. The solution must give the application the ability to scale to meet user demand.
Which solution will meet these requirements?
A. Deploy the application servers by using Amazon EC2 instances in an Auto Scaling group across multiple Availability Zones. Use an Amazon RDS DB instance in a Multi-AZ configuration.
B. Deploy the application servers by using Amazon EC2 instances in an Auto Scaling group in a single Availability Zone. Deploy the database on an EC2 instance. Enable EC2 Auto Recovery.
C. Deploy the application servers by using Amazon EC2 instances in an Auto Scaling group across multiple Availability Zones. Use an Amazon RDS DB instance with a read replica in a single Availability Zone. Promote the read replica to replace the primary DB instance if the primary DB instance fails.
D. Deploy the application servers by using Amazon EC2 instances in an Auto Scaling group across multiple Availability Zones. Deploy the primary and secondary database servers on EC2 instances across multiple Availability Zones. Use Amazon Elastic Block Store (Amazon EBS) Multi-Attach to create shared storage between the instances.
Answer: A
NEW QUESTION 96
A company is migrating its on-premises PostgreSQL database to Amazon Aurora PostgreSQL. The
on-premises database must remain online and accessible during the migration. The Aurora database must remain synchronized with the on-premises database.
Which combination of actions must a solutions architect take to meet these requirements? (Select TWO.)
Answer: CD
A. Use native MySQL tools to migrate the database to Amazon RDS for MySQL. Configure elastic storage scaling.
B. Migrate the database to Amazon Redshift by using the mysqldump utility. Turn on Auto Scaling for the Amazon Redshift cluster.
C. Use AWS Database Migration Service (AWS DMS) to migrate the database to Amazon Aurora. Turn on Aurora Auto Scaling.
D. Use AWS Database Migration Service (AWS DMS) to migrate the database to Amazon DynamoDB. Configure an Auto Scaling policy.
Answer: C
A. The EC2 instance has a resource-based policy with a Deny statement.
B. The principal has not been specified in the policy statement.
C. The "Action" field does not grant the actions that are required to terminate the EC2 instance.
D. The request to terminate the EC2 instance does not originate from the CIDR blocks 192.0.2.0/24 or 203.0.113.0/24.
Answer: B
A. Set up a DynamoDB Accelerator (DAX) cluster. Route all read requests through DAX.
B. Set up Amazon ElastiCache for Redis between the DynamoDB table and the web application. Route all read requests through Redis.
C. Set up Amazon ElastiCache for Memcached between the DynamoDB table and the web application. Route all read requests through Memcached.
D. Set up Amazon DynamoDB Streams on the table, and have AWS Lambda read from the table and populate Amazon ElastiCache. Route all read requests through ElastiCache.
Answer: A
A. Point the client driver at an RDS custom endpoint. Deploy the Lambda functions inside a VPC.
B. Point the client driver at an RDS proxy endpoint. Deploy the Lambda functions inside a VPC.
C. Point the client driver at an RDS custom endpoint. Deploy the Lambda functions outside a VPC.
D. Point the client driver at an RDS proxy endpoint. Deploy the Lambda functions outside a VPC.
Answer: B
A. Provision a subnet in each Availability Zone. Configure the Auto Scaling group to distribute the EC2 instances across both Availability Zones. Configure the DB instance with connections to each network.
B. Provision two subnets that extend across both Availability Zones. Configure the Auto Scaling group to distribute the EC2 instances across both Availability Zones. Configure the DB instance with connections to each network.
C. Provision a subnet in each Availability Zone. Configure the Auto Scaling group to distribute the EC2 instances across both Availability Zones. Configure the DB instance for Multi-AZ deployment.
D. Provision a subnet that extends across both Availability Zones. Configure the Auto Scaling group to distribute the EC2 instances across both Availability Zones. Configure the DB instance for Multi-AZ deployment.
Answer: C
A. Use the aws ec2 register-image command to create an AMI from a snapshot. Use AWS Step Functions to replace the AMI in the Auto Scaling group.
B. Enable Amazon Elastic Block Store (Amazon EBS) fast snapshot restore on a snapshot. Provision an AMI by using the snapshot. Replace the AMI in the Auto Scaling group with the new AMI.
C. Enable AMI creation and define lifecycle rules in Amazon Data Lifecycle Manager (Amazon DLM). Create an AWS Lambda function that modifies the AMI in the Auto Scaling group.
D. Use Amazon EventBridge (Amazon CloudWatch Events) to invoke AWS Backup lifecycle policies that provision AMIs. Configure Auto Scaling group capacity limits as an event source in EventBridge (CloudWatch Events).
Answer: B
A. Configure the "web-servers" security group to allow access to the DB instance's current IP addresses. Configure the "database" security group to allow access from the current set of IP addresses in use by the EC2 instances.
B. Configure the "web-servers" security group to allow access to the "database" security group. Configure the "database" security group to allow access from the "web-servers" security group.
C. Configure the "web-servers" security group to allow access to the DB instance's current IP addresses. Configure the "database" security group to allow access from the Auto Scaling group.
D. Configure the "web-servers" security group to allow access to the "database" security group. Configure the "database" security group to allow access from the Auto Scaling group.
Answer: C
Answer: BE
A. Create an Amazon EMR cluster with Apache Spark installed. Write a Spark application to transform the data. Use EMR File System (EMRFS) to write files to the transformed data bucket.
B. Create an AWS Glue crawler to discover the data. Create an AWS Glue extract, transform, and load (ETL) job to transform the data. Specify the transformed data bucket in the output step.
C. Use AWS Batch to create a job definition with Bash syntax to transform the data and output the data to the transformed data bucket. Use the job definition to submit a job. Specify an array job as the job type.
D. Create an AWS Lambda function to transform the data and output the data to the transformed data bucket. Configure an event notification for the S3 bucket. Specify the Lambda function as the destination for the event notification.
Answer: D
A. Deploy EC2 instances in an additional Region. Create a DB instance with the Multi-AZ option activated.
B. Deploy all EC2 instances in the same Region and the same Availability Zone. Create a DB instance with the Multi-AZ option activated.
C. Deploy the EC2 instances across at least two Availability Zones within the same Region. Create a DB instance in a single Availability Zone.
D. Deploy the EC2 instances across at least two Availability Zones within the same Region. Create a DB instance with the Multi-AZ option activated.
Answer: D
A. Create a usage plan with an API key that is shared with genuine users only.
B. Integrate logic within the Lambda function to ignore the requests from fraudulent IP addresses.
C. Implement an AWS WAF rule to target malicious requests and trigger actions to filter them out.
D. Convert the existing public API to a private API. Update the DNS records to redirect users to the new API endpoint.
E. Create an IAM role for each user attempting to access the API. A user will assume the role when making the API call.
Answer: CD
A. Configure an AWS Lambda function in each developer account to copy the log files to the central account. Create an IAM role in the central account for the auditor. Attach an IAM policy providing read-only permissions to the bucket.
B. Configure CloudTrail from each developer account to deliver the log files to an S3 bucket in the central account. Create an IAM user in the central account for the auditor. Attach an IAM policy providing full permissions to the bucket.
C. Configure CloudTrail from each developer account to deliver the log files to an S3 bucket in the central account. Create an IAM role in the central account for the auditor. Attach an IAM policy providing read-only permissions to the bucket.
D. Configure an AWS Lambda function in the central account to copy the log files from the S3 bucket in each developer account. Create an IAM user in the central account for the auditor. Attach an IAM policy providing full permissions to the bucket.
Answer: C
Explanation:
https://docs.aws.amazon.com/awscloudtrail/latest/userguide/cloudtrail-sharing-logs.html
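The linked guide centers on a bucket policy in the central account that lets CloudTrail write each developer account's log files under its own prefix. A hedged sketch of such a policy as a Python dict (the bucket name and account IDs are hypothetical):

central_bucket_policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "AWSCloudTrailAclCheck",
            "Effect": "Allow",
            "Principal": {"Service": "cloudtrail.amazonaws.com"},
            "Action": "s3:GetBucketAcl",
            "Resource": "arn:aws:s3:::central-audit-logs",
        },
        {
            "Sid": "AWSCloudTrailWrite",
            "Effect": "Allow",
            "Principal": {"Service": "cloudtrail.amazonaws.com"},
            "Action": "s3:PutObject",
            "Resource": [
                "arn:aws:s3:::central-audit-logs/AWSLogs/111111111111/*",
                "arn:aws:s3:::central-audit-logs/AWSLogs/222222222222/*",
            ],
            "Condition": {
                "StringEquals": {"s3:x-amz-acl": "bucket-owner-full-control"}
            },
        },
    ],
}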
A. Configure the Auto Scaling group in the private subnets and associate it with an Application Load Balancer. Configure a Network Load Balancer in the public subnet.
B. Configure the Auto Scaling group in the public subnets and associate it with an Application Load Balancer.
C. Configure an Application Load Balancer in the public subnet. Configure the Auto Scaling group in the private subnets and associate it with the Application Load Balancer.
D. Configure an Application Load Balancer in the private subnet. Configure the Auto Scaling group in the private subnets and associate it with the Application Load Balancer.
Answer: C
A. Attach a VPC endpoint policy for DynamoDB to allow write access to only the specific DynamoDB tables.
B. Attach a security group to the interface VPC endpoint to allow write access to only the specific DynamoDB tables.
C. Create a resource-based IAM policy to grant write access to only the specific DynamoDB tables. Attach the policy to the DynamoDB tables.
D. Create a gateway VPC endpoint for DynamoDB that is associated with the Lambda VPC. Ensure that the Lambda execution role can access the gateway VPC endpoint.
E. Create an interface VPC endpoint for DynamoDB that is associated with the Lambda VPC. Ensure that the Lambda execution role can access the interface VPC endpoint.
Answer: AD
immediately and sends a message back to the device if necessary. No data is stored.
The company needs a solution that minimizes latency for the data transmission from the devices. The solution also must provide rapid failover to another AWS
Region.
Which solution will meet these requirements?
A. Configure an Amazon Route 53 failover routing policy. Create a Network Load Balancer (NLB) in each of the two Regions. Configure the NLB to invoke an AWS Lambda function to process the data.
B. Use AWS Global Accelerator. Create a Network Load Balancer (NLB) in each of the two Regions as an endpoint. Create an Amazon Elastic Container Service (Amazon ECS) cluster with the Fargate launch type. Create an ECS service on the cluster. Set the ECS service as the target for the NLB. Process the data in Amazon ECS.
C. Use AWS Global Accelerator. Create an Application Load Balancer (ALB) in each of the two Regions as an endpoint. Create an Amazon Elastic Container Service (Amazon ECS) cluster with the Fargate launch type. Create an ECS service on the cluster. Set the ECS service as the target for the ALB. Process the data in Amazon ECS.
D. Configure an Amazon Route 53 failover routing policy. Create an Application Load Balancer (ALB) in each of the two Regions. Create an Amazon Elastic Container Service (Amazon ECS) cluster with the Fargate launch type. Create an ECS service on the cluster. Set the ECS service as the target for the ALB. Process the data in Amazon ECS.
Answer: C
Answer: A
A. Use AWS DataSync to move the data. Create a custom transformation job by using AWS Glue.
B. Order an AWS Snowcone device to move the data. Deploy the transformation application to the device.
C. Order an AWS Snowball Edge Storage Optimized device. Copy the data to the device. Create a custom transformation job by using AWS Glue.
D. Order an AWS Snowball Edge Storage Optimized device that includes Amazon EC2 compute. Copy the data to the device. Create a new EC2 instance on AWS to run the transformation application.
Answer: D
A. Have the deployment engineer use AWS account root user credentials for performing AWS CloudFormation stack operations.
B. Create a new IAM user for the deployment engineer and add the IAM user to a group that has the PowerUsers IAM policy attached.
C. Create a new IAM user for the deployment engineer and add the IAM user to a group that has the AdministratorAccess IAM policy attached.
D. Create a new IAM user for the deployment engineer and add the IAM user to a group that has an IAM policy that allows AWS CloudFormation actions only.
E. Create an IAM role for the deployment engineer to explicitly define the permissions specific to the AWS CloudFormation stack, and launch stacks using that IAM role.
Answer: DE
Explanation:
https://docs.aws.amazon.com/IAM/latest/UserGuide/id_roles.html https://docs.aws.amazon.com/IAM/latest/UserGuide/id_users.html
D. Use AWS DataSync to synchronize the database data across multiple EC2 instances.
E. Create an Application Load Balancer to distribute traffic to an Auto Scaling group of EC2 instances that are distributed across two Availability Zones.
Answer: BE
Answer: C
Explanation:
https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/Overview.Encryption.html
Answer: B
Answer: D
A. Create an Amazon CloudWatch alarm to scale up the EC2 instances when CPU utilization exceeds 90%.
B. Create a recurring scheduled action to scale up the Auto Scaling group before the expected period of peak demand.
C. Increase the minimum and maximum number of EC2 instances in the Auto Scaling group during the peak demand period.
D. Configure an Amazon Simple Notification Service (Amazon SNS) notification to send alerts when there are autoscaling EC2_INSTANCE_LAUNCH events
Answer: B
A. Use Amazon Kinesis Data Streams to ingest the data. Process the data using AWS Lambda functions.
B. Use Amazon API Gateway on top of the existing application. Create a usage plan with a quota limit for the third-party vendor.
C. Use Amazon Simple Notification Service (Amazon SNS) to ingest the data. Put the EC2 instances in an Auto Scaling group behind an Application Load Balancer.
D. Repackage the application as a container. Deploy the application using Amazon Elastic Container Service (Amazon ECS) using the EC2 launch type with an Auto Scaling group.
Answer: A
Answer: B
Answer: C
Explanation:
https://docs.aws.amazon.com/vpn/latest/s2svpn/images/Multiple_Gateways_diagram.png
"To protect against a loss of connectivity in case your customer gateway device becomes unavailable, you can set up a second Site-to-Site VPN connection to
your VPC and virtual private gateway by using a second customer gateway device."
https://docs.aws.amazon.com/vpn/latest/s2svpn/vpn-redundant-connection.html
Answer: B
A. Create an S3 bucket. Create an IAM role that has permissions to write to the S3 bucket. Use the AWS CLI to copy all files locally to the S3 bucket.
B. Create an AWS Snowball Edge job. Receive a Snowball Edge device on premises. Use the Snowball Edge client to transfer data to the device. Return the device so that AWS can import the data into Amazon S3.
C. Deploy an S3 File Gateway on premises. Create a public service endpoint to connect to the S3 File Gateway. Create an S3 bucket. Create a new NFS file share on the S3 File Gateway. Point the new file share to the S3 bucket. Transfer the data from the existing NFS file share to the S3 File Gateway.
D. Set up an AWS Direct Connect connection between the on-premises network and AWS. Deploy an S3 File Gateway on premises. Create a public virtual interface (VIF) to connect to the S3 File Gateway. Create an S3 bucket. Create a new NFS file share on the S3 File Gateway. Point the new file share to the S3 bucket. Transfer the data from the existing NFS file share to the S3 File Gateway.
Answer: C
A. Deploy Amazon API Gateway into a public subnet and adjust the route table to route S3 calls through it.
B. Deploy a NAT gateway into a public subnet and attach an endpoint policy that allows access to the S3 buckets.
C. Deploy the application into a public subnet and allow it to route through an internet gateway to access the S3 buckets.
D. Deploy an S3 VPC gateway endpoint into the VPC and attach an endpoint policy that allows access to the S3 buckets.
Answer: D
A. Deploy the application with the required infrastructure elements in place. Use Amazon Route 53 to configure active-passive failover. Create an Aurora Replica in a second AWS Region.
B. Host a scaled-down deployment of the application in a second AWS Region. Use Amazon Route 53 to configure active-active failover. Create an Aurora Replica in the second Region.
C. Replicate the primary infrastructure in a second AWS Region. Use Amazon Route 53 to configure active-active failover. Create an Aurora database that is restored from the latest snapshot.
D. Back up data with AWS Backup. Use the backup to create the required infrastructure in a second AWS Region. Use Amazon Route 53 to configure active-passive failover. Create an Aurora second primary instance in the second Region.
Answer: C
Answer: CD
A. Configure the security group for the web tier to allow inbound traffic on port 443 from 0.0.0.0/0.
B. Configure the security group for the web tier to allow outbound traffic on port 443 from 0.0.0.0/0.
C. Configure the security group for the database tier to allow inbound traffic on port 1433 from the security group for the web tier.
D. Configure the security group for the database tier to allow outbound traffic on ports 443 and 1433 to the security group for the web tier.
E. Configure the security group for the database tier to allow inbound traffic on ports 443 and 1433 from the security group for the web tier.
Answer: AC
Explanation:
"Security groups create an outbound rule for every inbound rule" is not completely correct. Stateful does NOT mean that creating an inbound (or outbound) rule
creates a matching outbound (or inbound) rule. It means: suppose you create an inbound rule on port 443 for IP X. When a request enters on port 443 from IP X,
the return traffic for that request is allowed out on port 443. However, if you look at the outbound rules, there will not be any outbound rule on port 443 unless you
explicitly create it. Network ACLs, by contrast, are stateless: you would have to create an inbound rule to allow incoming requests and an outbound rule to allow
your application to respond to those incoming requests.
https://docs.aws.amazon.com/vpc/latest/userguide/VPC_SecurityGroups.html#SecurityGroupRules
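To illustrate the rule-referencing pattern behind Answer AC, a hedged boto3 sketch follows (the VPC ID is hypothetical, and port 1433 is taken from the options above):

import boto3

ec2 = boto3.client("ec2")

web_sg = ec2.create_security_group(
    GroupName="web-tier", Description="Web tier", VpcId="vpc-0abc1234"
)["GroupId"]
db_sg = ec2.create_security_group(
    GroupName="db-tier", Description="Database tier", VpcId="vpc-0abc1234"
)["GroupId"]

# HTTPS in from anywhere to the web tier.
ec2.authorize_security_group_ingress(
    GroupId=web_sg,
    IpPermissions=[{
        "IpProtocol": "tcp", "FromPort": 443, "ToPort": 443,
        "IpRanges": [{"CidrIp": "0.0.0.0/0"}],
    }],
)

# Database port open only to members of the web tier's security group, so
# access tracks instance membership rather than fixed IP addresses.
ec2.authorize_security_group_ingress(
    GroupId=db_sg,
    IpPermissions=[{
        "IpProtocol": "tcp", "FromPort": 1433, "ToPort": 1433,
        "UserIdGroupPairs": [{"GroupId": web_sg}],
    }],
)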
A. Use Amazon S3 to host the full website in different S3 buckets. Add Amazon CloudFront distributions. Set the S3 buckets as origins for the distributions. Store the order data in Amazon S3.
B. Deploy the full website on Amazon EC2 instances that run in Auto Scaling groups across multiple Availability Zones. Add an Application Load Balancer (ALB) to distribute the website traffic. Add another ALB for the backend APIs. Store the data in Amazon RDS for MySQL.
C. Migrate the full application to run in containers. Host the containers on Amazon Elastic Kubernetes Service (Amazon EKS). Use the Kubernetes Cluster Autoscaler to increase and decrease the number of pods to process bursts in traffic. Store the data in Amazon RDS for MySQL.
D. Use an Amazon S3 bucket to host the website's static content. Deploy an Amazon CloudFront distribution. Set the S3 bucket as the origin. Use Amazon API Gateway and AWS Lambda functions for the backend APIs. Store the data in Amazon DynamoDB.
Answer: D
A. Provision an Amazon RDS Proxy to sit between the Lambda functions and the database. Configure the Lambda functions to connect to the RDS Proxy.
B. Increase the run time of the Lambda functions to the maximum. Create a retry mechanism in the code that stores the customer data in the database.
C. Persist the customer data to Lambda local storage. Configure new Lambda functions to scan the local storage to save the customer data to the database.
D. Store the customer data in an Amazon Simple Queue Service (Amazon SQS) FIFO queue. Create a new Lambda function that polls the queue and stores the customer data in the database.
Answer: C
Answer: C
A. S3 Standard
B. S3 Intelligent-Tiering
C. S3 Standard-Infrequent Access (S3 Standard-IA)
D. S3 One Zone-Infrequent Access (S3 One Zone-IA)
Answer: B
A. Create a site-to-site VPN tunnel to an Amazon S3 bucket and transfer the files directly. Create a bucket policy to enforce a VPC endpoint.
B. Order 10 AWS Snowball appliances and select an S3 Glacier vault as the destination. Create a bucket policy to enforce a VPC endpoint.
C. Mount the network-attached file system to Amazon S3 and copy the files directly. Create a lifecycle policy to transition the S3 objects to Amazon S3 Glacier.
D. Order 10 AWS Snowball appliances and select an Amazon S3 bucket as the destination. Create a lifecycle policy to transition the S3 objects to Amazon S3 Glacier.
Answer: D
Answer: AD
A. Create a security group for the web servers and allow port 443 from 0.0.0.0/0. Create a security group for the MySQL servers and allow port 3306 from the web servers security group.
B. Create a network ACL for the web servers and allow port 443 from 0.0.0.0/0. Create a network ACL for the MySQL servers and allow port 3306 from the web servers security group.
C. Create a security group for the web servers and allow port 443 from the load balancer. Create a security group for the MySQL servers and allow port 3306 from the web servers security group.
D. Create a network ACL for the web servers and allow port 443 from the load balancer. Create a network ACL for the MySQL servers and allow port 3306 from the web servers security group.
Answer: C
Answer: B
A. Create Amazon Elastic Block Store (Amazon EBS) volumes in the same Availability Zones where EKS worker nodes are placed. Register the volumes in a StorageClass object on an EKS cluster. Use EBS Multi-Attach to share the data between containers.
B. Create an Amazon Elastic File System (Amazon EFS) file system. Register the file system in a StorageClass object on an EKS cluster. Use the same file system for all containers.
C. Create an Amazon Elastic Block Store (Amazon EBS) volume. Register the volume in a StorageClass object on an EKS cluster. Use the same volume for all containers.
D. Create Amazon Elastic File System (Amazon EFS) file systems in the same Availability Zones where EKS worker nodes are placed. Register the file systems in a StorageClass object on an EKS cluster. Create an AWS Lambda function to synchronize the data between file systems.
Answer: B
A. Place the instances in a public subnet. Use Amazon S3 for storage. Access S3 objects by using URLs.
B. Place the instances in a private subnet. Use Amazon S3 for storage. Use a VPC endpoint to access S3 objects.
C. Use the instances with a Provisioned IOPS SSD (io2) Amazon Elastic Block Store (Amazon EBS) volume.
D. Use Amazon Elastic File System (Amazon EFS) Standard-Infrequent Access (Standard-IA) to store data and provide shared access to the instances.
Answer: B
A. Create a database snapshot. Copy the snapshot to a new unencrypted snapshot. Share the new snapshot with the acquiring company's AWS account.
B. Create a database snapshot. Add the acquiring company's AWS account to the KMS key policy. Share the snapshot with the acquiring company's AWS account.
C. Create a database snapshot that uses a different AWS managed KMS key. Add the acquiring company's AWS account to the KMS key alias. Share the snapshot with the acquiring company's AWS account.
D. Create a database snapshot. Download the database snapshot. Upload the database snapshot to an Amazon S3 bucket. Update the S3 bucket policy to allow access from the acquiring company's AWS account.
Answer: A
Related Links
https://www.exambible.com/SAA-C03-exam/
Contact us
We are proud of our high-quality customer service, which serves you around the clock 24/7.
Visit - https://www.exambible.com/