ExamTopics SAA-C03
A company is moving its data management application to AWS. The company wants to transition to an event-driven architecture. The architecture
needs to be more distributed and to use serverless concepts while performing the different aspects of the workflow. The company also wants to
minimize operational overhead.
A. Build out the workflow in AWS Glue. Use AWS Glue to invoke AWS Lambda functions to process the workflow steps.
B. Build out the workflow in AWS Step Functions. Deploy the application on Amazon EC2 instances. Use Step Functions to invoke the workflow
steps on the EC2 instances.
C. Build out the workflow in Amazon EventBridge. Use EventBridge to invoke AWS Lambda functions on a schedule to process the workflow
steps.
D. Build out the workflow in AWS Step Functions. Use Step Functions to create a state machine. Use the state machine to invoke AWS Lambda
functions to process the workflow steps.
Correct Answer: D
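A minimal sketch of option D, assuming placeholder function and role ARNs: a Step Functions state machine whose Task states invoke Lambda functions for each workflow step.

```python
import json
import boto3

sfn = boto3.client("stepfunctions")

# Minimal two-step workflow; each Task state invokes a Lambda function.
# All ARNs below are placeholders for illustration only.
definition = {
    "Comment": "Event-driven data management workflow",
    "StartAt": "ValidateRecord",
    "States": {
        "ValidateRecord": {
            "Type": "Task",
            "Resource": "arn:aws:lambda:us-east-1:111122223333:function:validate-record",
            "Next": "TransformRecord",
        },
        "TransformRecord": {
            "Type": "Task",
            "Resource": "arn:aws:lambda:us-east-1:111122223333:function:transform-record",
            "End": True,
        },
    },
}

response = sfn.create_state_machine(
    name="data-management-workflow",
    definition=json.dumps(definition),
    roleArn="arn:aws:iam::111122223333:role/StepFunctionsLambdaRole",  # placeholder
)
print(response["stateMachineArn"])
```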
A company is designing the network for an online multi-player game. The game uses the UDP networking protocol and will be deployed in eight
AWS Regions. The network architecture needs to minimize latency and packet loss to give end users a high-quality gaming experience.
A. Set up a transit gateway in each Region. Create inter-Region peering attachments between each transit gateway.
B. Set up AWS Global Accelerator with UDP listeners and endpoint groups in each Region.
C. Set up Amazon CloudFront with UDP turned on. Configure an origin in each Region.
D. Set up a VPC peering mesh between each Region. Turn on UDP for each VPC.
Correct Answer: B
A: AWS Global Accelerator is a networking service that helps you improve the availability and performance of the applications that you
offer to your global users. AWS Global Accelerator is easy to set up, configure, and manage. It provides static IP addresses that provide a
fixed entry point to your applications and eliminate the complexity of managing specific IP addresses for different AWS Regions and
Availability Zones. AWS Global Accelerator always routes user traffic to the optimal endpoint based on performance, reacting instantly to
changes in application health, your user’s location, and policies that you configure. You can test the performance benefits from your
location with a speed comparison tool. Like other AWS services, AWS Global Accelerator is a self-service, pay-per-use offering, requiring no
long term commitments or minimum fees.
https://aws.amazon.com/global-accelerator/faqs/
upvoted 4 times
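A rough boto3 sketch of option B: a Global Accelerator with a UDP listener and one endpoint group per Region. The game port, Region name, and NLB ARN are illustrative assumptions, not values from the question.

```python
import boto3

# The Global Accelerator API is served from us-west-2 regardless of where endpoints live.
ga = boto3.client("globalaccelerator", region_name="us-west-2")

accelerator = ga.create_accelerator(
    Name="game-accelerator", IpAddressType="IPV4", Enabled=True
)["Accelerator"]

listener = ga.create_listener(
    AcceleratorArn=accelerator["AcceleratorArn"],
    Protocol="UDP",
    PortRanges=[{"FromPort": 3000, "ToPort": 3000}],  # example game port
)["Listener"]

# Repeat for each of the eight Regions: one endpoint group per Region.
ga.create_endpoint_group(
    ListenerArn=listener["ListenerArn"],
    EndpointGroupRegion="eu-west-1",
    EndpointConfigurations=[
        {
            "EndpointId": "arn:aws:elasticloadbalancing:eu-west-1:111122223333:"
                          "loadbalancer/net/game-nlb/abc123",  # placeholder NLB ARN
            "Weight": 128,
        }
    ],
)
```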
A company hosts a three-tier web application on Amazon EC2 instances in a single Availability Zone. The web application uses a self-managed
MySQL database that is hosted on an EC2 instance to store data in an Amazon Elastic Block Store (Amazon EBS) volume. The MySQL database
currently uses a 1 TB Provisioned IOPS SSD (io2) EBS volume. The company expects traffic of 1,000 IOPS for both reads and writes at peak traffic.
The company wants to minimize any disruptions, stabilize performance, and reduce costs while retaining the capacity for double the IOPS. The
company wants to move the database tier to a fully managed solution that is highly available and fault tolerant.
A. Use a Multi-AZ deployment of an Amazon RDS for MySQL DB instance with an io2 Block Express EBS volume.
B. Use a Multi-AZ deployment of an Amazon RDS for MySQL DB instance with a General Purpose SSD (gp2) EBS volume.
D. Use two large EC2 instances to host the database in active-passive mode.
Correct Answer: B
RDS does not support io2 or io2 Block Express volumes. gp2 can deliver the required IOPS: a 1 TB gp2 volume has a baseline of about 3,000 IOPS (3 IOPS per GiB), which covers double the expected 1,000 IOPS.
upvoted 1 times
Amazon RDS for MySQL provides automated backups, software patching, and automatic host replacement. It also provides Multi-AZ
deployments that automatically replicate data to a standby instance in another Availability Zone. This ensures that data is always available
even in the event of a failure.
upvoted 1 times
Amazon RDS provides three storage types: General Purpose SSD (also known as gp2 and gp3), Provisioned IOPS SSD (also known as io1),
and magnetic (also known as standard). They differ in performance characteristics and price, which means that you can tailor your storage
performance and cost to the needs of your database workload. You can create MySQL, MariaDB, Oracle, and PostgreSQL RDS DB instances
with up to 64 tebibytes (TiB) of storage. You can create SQL Server RDS DB instances with up to 16 TiB of storage. For this amount of
storage, use the Provisioned IOPS SSD and General Purpose SSD storage types.
https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/CHAP_Storage.html
upvoted 1 times
from "https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/CHAP_Storage.html"
upvoted 1 times
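A minimal sketch of option B, with placeholder identifiers and credentials (in practice the password would come from Secrets Manager). A 1,000 GiB gp2 volume provides roughly a 3,000 IOPS baseline, above the doubled 1,000 IOPS requirement.

```python
import boto3

rds = boto3.client("rds")

rds.create_db_instance(
    DBInstanceIdentifier="app-mysql",        # placeholder
    Engine="mysql",
    DBInstanceClass="db.m6g.large",          # illustrative instance class
    MultiAZ=True,                            # synchronous standby in another AZ
    StorageType="gp2",                       # 3 IOPS per GiB baseline
    AllocatedStorage=1000,                   # 1,000 GiB -> ~3,000 IOPS baseline
    MasterUsername="admin",
    MasterUserPassword="example-password-123",  # placeholder only
)
```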
A company hosts a serverless application on AWS. The application uses Amazon API Gateway, AWS Lambda, and an Amazon RDS for PostgreSQL
database. The company notices an increase in application errors that result from database connection timeouts during times of peak traffic or
unpredictable traffic. The company needs a solution that reduces the application failures with the least amount of change to the code.
Correct Answer: B
https://aws.amazon.com/rds/proxy/
upvoted 3 times
A company is migrating an old application to AWS. The application runs a batch job every hour and is CPU intensive. The batch job takes 15
minutes on average with an on-premises server. The server has 64 virtual CPU (vCPU) and 512 GiB of memory.
Which solution will run the batch job within 15 minutes with the LEAST operational overhead?
B. Use Amazon Elastic Container Service (Amazon ECS) with AWS Fargate.
Correct Answer: A
Therefore, the solution that will run the batch job within 15 minutes with the LEAST operational overhead is D. Use AWS Batch on Amazon
EC2. AWS Batch can handle all the operational aspects of job scheduling, instance management, and scaling while using Amazon EC2
instances with the right amount of CPU and memory resources to meet the job's requirements.
upvoted 13 times
AWS Batch can easily schedule and run batch jobs on EC2 instances. It can scale up to the required vCPUs and memory to match the on-
premises server.
Using EC2 provides full control over the instance type to meet the resource needs.
No servers or clusters to manage like with ECS/Fargate or Lightsail. AWS Batch handles this automatically.
More cost effective and operationally simple compared to Lambda which is not ideal for long running batch jobs.
upvoted 2 times
A company stores its data objects in Amazon S3 Standard storage. A solutions architect has found that 75% of the data is rarely accessed after
30 days. The company needs all the data to remain immediately accessible with the same high availability and resiliency, but the company wants
to minimize storage costs.
B. Move the data objects to S3 Standard-Infrequent Access (S3 Standard-IA) after 30 days.
C. Move the data objects to S3 One Zone-Infrequent Access (S3 One Zone-IA) after 30 days.
D. Move the data objects to S3 One Zone-Infrequent Access (S3 One Zone-IA) immediately.
Correct Answer: B
S3 Standard-IA is a storage class that is designed for infrequently accessed data. It offers lower storage costs than S3 Standard and charges
a per-GB retrieval fee, while objects remain immediately accessible with the same millisecond latency, durability, and multi-AZ resiliency as S3 Standard.
upvoted 1 times
https://aws.amazon.com/s3/storage-classes/#:~:text=S3%20One%20Zone%2DIA%20is,less%20than%20S3%20Standard%2DIA.
upvoted 1 times
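A lifecycle rule for option B could look like the following boto3 sketch; the bucket name is a placeholder.

```python
import boto3

s3 = boto3.client("s3")

s3.put_bucket_lifecycle_configuration(
    Bucket="example-data-bucket",  # placeholder
    LifecycleConfiguration={
        "Rules": [
            {
                "ID": "standard-to-standard-ia",
                "Status": "Enabled",
                "Filter": {"Prefix": ""},  # apply to all objects
                "Transitions": [
                    {"Days": 30, "StorageClass": "STANDARD_IA"}
                ],
            }
        ]
    },
)
```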
A gaming company is moving its public scoreboard from a data center to the AWS Cloud. The company uses Amazon EC2 Windows Server
instances behind an Application Load Balancer to host its dynamic application. The company needs a highly available storage solution for the
application. The application consists of static files and dynamic server-side code.
Which combination of steps should a solutions architect take to meet these requirements? (Choose two.)
A. Store the static files on Amazon S3. Use Amazon CloudFront to cache objects at the edge.
B. Store the static files on Amazon S3. Use Amazon ElastiCache to cache objects at the edge.
C. Store the server-side code on Amazon Elastic File System (Amazon EFS). Mount the EFS volume on each EC2 instance to share the files.
D. Store the server-side code on Amazon FSx for Windows File Server. Mount the FSx for Windows File Server volume on each EC2 instance to
share the files.
E. Store the server-side code on a General Purpose SSD (gp2) Amazon Elastic Block Store (Amazon EBS) volume. Mount the EBS volume on
each EC2 instance to share the files.
Correct Answer: AD
Storing static files in S3 with CloudFront provides durability, high availability, and low latency by caching at edge locations.
FSx for Windows File Server provides a fully managed Windows native file system that can be accessed from the Windows EC2 instances to
share server-side code. It is designed for high availability and scales up to 10s of GBPS throughput.
EBS volumes are confined to a single AZ, and EFS exposes NFS to Linux clients rather than the SMB shares that these Windows instances need. FSx for Windows File Server and S3 are replicated across AZs for high availability.
upvoted 1 times
https://www.techtarget.com/searchaws/tip/Amazon-FSx-vs-EFS-Compare-the-AWS-file-services
"FSx is built for high performance and submillisecond latency using solid-state drive storage volumes. This design enables users to select
storage capacity and latency independently. Thus, even a subterabyte file system can have 256 Mbps or higher throughput and support
volumes up to 64 TB."
upvoted 3 times
A social media company runs its application on Amazon EC2 instances behind an Application Load Balancer (ALB). The ALB is the origin for an
Amazon CloudFront distribution. The application has more than a billion images stored in an Amazon S3 bucket and processes thousands of
images each second. The company wants to resize the images dynamically and serve appropriate formats to clients.
Which solution will meet these requirements with the LEAST operational overhead?
A. Install an external image management library on an EC2 instance. Use the image management library to process the images.
B. Create a CloudFront origin request policy. Use the policy to automatically resize images and to serve the appropriate format based on the
User-Agent HTTP header in the request.
C. Use a Lambda@Edge function with an external image management library. Associate the Lambda@Edge function with the CloudFront
behaviors that serve the images.
D. Create a CloudFront response headers policy. Use the policy to automatically resize images and to serve the appropriate format based on
the User-Agent HTTP header in the request.
Correct Answer: D
Using a Lambda@Edge function with an external image management library is the best solution to resize the images dynamically and
serve appropriate formats to clients. Lambda@Edge is a serverless computing service that allows running custom code in response to
CloudFront events, such as viewer requests and origin requests. By using a Lambda@Edge function, it's possible to process images on the
fly and modify the CloudFront response before it's sent back to the client. Additionally, Lambda@Edge has built-in support for external
libraries that can be used to process images. This approach will reduce operational overhead and scale automatically with traffic.
upvoted 10 times
A Lambda@Edge function is a serverless function that runs at the edge of the CloudFront network. This means that the function is
executed close to the user, which can improve performance.
An external image management library can be used to resize images and to serve the appropriate format.
Associating the Lambda@Edge function with the CloudFront behaviors that serve the images ensures that the function is executed for all
requests that are served by those behaviors.
upvoted 1 times
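As a rough illustration of the Lambda@Edge idea, the viewer-request handler below only negotiates the image format from the Accept header and rewrites the URI; actual resizing would call an image library (for example Pillow) inside the function and is omitted. The /images/ path convention is an assumption for this sketch, not something stated in the question.

```python
# Viewer-request Lambda@Edge handler (Python runtime). Sketch only.
def handler(event, context):
    request = event["Records"][0]["cf"]["request"]
    headers = request.get("headers", {})

    accept_values = headers.get("accept", [])
    accepts_webp = any("image/webp" in h.get("value", "") for h in accept_values)

    # Assumed convention: /images/foo.jpg -> /images/webp/foo.webp
    if accepts_webp and request["uri"].startswith("/images/"):
        base = request["uri"].rsplit(".", 1)[0]
        request["uri"] = base.replace("/images/", "/images/webp/", 1) + ".webp"

    # Returning the (possibly rewritten) request lets CloudFront continue processing.
    return request
```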
A hospital needs to store patient records in an Amazon S3 bucket. The hospital’s compliance team must ensure that all protected health
information (PHI) is encrypted in transit and at rest. The compliance team must administer the encryption key for data at rest.
A. Create a public SSL/TLS certificate in AWS Certificate Manager (ACM). Associate the certificate with Amazon S3. Configure default
encryption for each S3 bucket to use server-side encryption with AWS KMS keys (SSE-KMS). Assign the compliance team to manage the KMS
keys.
B. Use the aws:SecureTransport condition on S3 bucket policies to allow only encrypted connections over HTTPS (TLS). Configure default
encryption for each S3 bucket to use server-side encryption with S3 managed encryption keys (SSE-S3). Assign the compliance team to
manage the SSE-S3 keys.
C. Use the aws:SecureTransport condition on S3 bucket policies to allow only encrypted connections over HTTPS (TLS). Configure default
encryption for each S3 bucket to use server-side encryption with AWS KMS keys (SSE-KMS). Assign the compliance team to manage the KMS
keys.
D. Use the aws:SecureTransport condition on S3 bucket policies to allow only encrypted connections over HTTPS (TLS). Use Amazon Macie to
protect the sensitive data that is stored in Amazon S3. Assign the compliance team to manage Macie.
Correct Answer: C
Also, SSE-S3 encryption is fully managed by AWS so the Compliance Team can't administer this.
upvoted 1 times
Explanation:
The compliance team needs to administer the encryption key for data at rest in order to ensure that protected health information (PHI) is
encrypted in transit and at rest. Therefore, we need to use server-side encryption with AWS KMS keys (SSE-KMS). The default encryption
for each S3 bucket can be configured to use SSE-KMS to ensure that all new objects in the bucket are encrypted with KMS keys.
Additionally, we can configure the S3 bucket policies to allow only encrypted connections over HTTPS (TLS) using the aws:SecureTransport
condition. This ensures that the data is encrypted in transit.
upvoted 1 times
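A hedged sketch of option C's two pieces: default SSE-KMS bucket encryption with a customer managed key, plus a bucket policy that denies non-TLS requests. The bucket name and key ARN are placeholders.

```python
import json
import boto3

s3 = boto3.client("s3")
bucket = "phi-records-bucket"  # placeholder
kms_key_arn = "arn:aws:kms:us-east-1:111122223333:key/EXAMPLE-KEY-ID"  # placeholder

# Default encryption with a customer managed KMS key (SSE-KMS).
s3.put_bucket_encryption(
    Bucket=bucket,
    ServerSideEncryptionConfiguration={
        "Rules": [
            {
                "ApplyServerSideEncryptionByDefault": {
                    "SSEAlgorithm": "aws:kms",
                    "KMSMasterKeyID": kms_key_arn,
                },
                "BucketKeyEnabled": True,
            }
        ]
    },
)

# Deny any request that does not arrive over HTTPS (aws:SecureTransport).
policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "DenyInsecureTransport",
            "Effect": "Deny",
            "Principal": "*",
            "Action": "s3:*",
            "Resource": [f"arn:aws:s3:::{bucket}", f"arn:aws:s3:::{bucket}/*"],
            "Condition": {"Bool": {"aws:SecureTransport": "false"}},
        }
    ],
}
s3.put_bucket_policy(Bucket=bucket, Policy=json.dumps(policy))
```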
Macie helps protect personal records such as PHI. Macie provides you with an inventory of your S3 buckets, and automatically evaluates and
monitors the buckets for security and access control. If Macie detects a potential issue with the security or privacy of your data, such as a
bucket that becomes publicly accessible, Macie generates a finding for you to review and remediate as necessary.
upvoted 3 times
A company uses Amazon API Gateway to run a private gateway with two REST APIs in the same VPC. The BuyStock RESTful web service calls the
CheckFunds RESTful web service to ensure that enough funds are available before a stock can be purchased. The company has noticed in the VPC
flow logs that the BuyStock RESTful web service calls the CheckFunds RESTful web service over the internet instead of through the VPC. A
solutions architect must implement a solution so that the APIs communicate through the VPC.
Which solution will meet these requirements with the FEWEST changes to the code?
D. Add an Amazon Simple Queue Service (Amazon SQS) queue between the two REST APIs.
Correct Answer: A
A company hosts a multiplayer gaming application on AWS. The company wants the application to read data with sub-millisecond latency and run
one-time queries on historical data.
Which solution will meet these requirements with the LEAST operational overhead?
A. Use Amazon RDS for data that is frequently accessed. Run a periodic custom script to export the data to an Amazon S3 bucket.
B. Store the data directly in an Amazon S3 bucket. Implement an S3 Lifecycle policy to move older data to S3 Glacier Deep Archive for long-
term storage. Run one-time queries on the data in Amazon S3 by using Amazon Athena.
C. Use Amazon DynamoDB with DynamoDB Accelerator (DAX) for data that is frequently accessed. Export the data to an Amazon S3 bucket by
using DynamoDB table export. Run one-time queries on the data in Amazon S3 by using Amazon Athena.
D. Use Amazon DynamoDB for data that is frequently accessed. Turn on streaming to Amazon Kinesis Data Streams. Use Amazon Kinesis
Data Firehose to read the data from Kinesis Data Streams. Store the records in an Amazon S3 bucket.
Correct Answer: B
A company uses a payment processing system that requires messages for a particular payment ID to be received in the same order that they were
sent. Otherwise, the payments might be processed incorrectly.
Which actions should a solutions architect take to meet this requirement? (Choose two.)
A. Write the messages to an Amazon DynamoDB table with the payment ID as the partition key.
B. Write the messages to an Amazon Kinesis data stream with the payment ID as the partition key.
C. Write the messages to an Amazon ElastiCache for Memcached cluster with the payment ID as the key.
D. Write the messages to an Amazon Simple Queue Service (Amazon SQS) queue. Set the message attribute to use the payment ID.
E. Write the messages to an Amazon Simple Queue Service (Amazon SQS) FIFO queue. Set the message group to use the payment ID.
Correct Answer: BD
On the other hand, Amazon DynamoDB is a NoSQL database service that provides fast and predictable performance with seamless
scalability. While it can store data with partition keys, it does not guarantee the order of records within a partition, which is essential for
the given use case. Hence, using Kinesis Data Streams is more suitable for this requirement.
As DynamoDB does not keep the order, I think BE is the correct answer here.
upvoted 14 times
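Option E in rough code form: a FIFO queue preserves ordering within a message group, so using the payment ID as the MessageGroupId keeps each payment's messages in order. Queue name and payment IDs are placeholders.

```python
import boto3

sqs = boto3.client("sqs")

# FIFO queue names must end in ".fifo".
queue_url = sqs.create_queue(
    QueueName="payments.fifo",
    Attributes={"FifoQueue": "true", "ContentBasedDeduplication": "true"},
)["QueueUrl"]

# Messages sharing a MessageGroupId are delivered strictly in the order they were sent.
sqs.send_message(
    QueueUrl=queue_url,
    MessageBody='{"payment_id": "pay-123", "step": "authorize"}',
    MessageGroupId="pay-123",
)
sqs.send_message(
    QueueUrl=queue_url,
    MessageBody='{"payment_id": "pay-123", "step": "capture"}',
    MessageGroupId="pay-123",
)
```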
But it is:
"Which actions should a solutions architect take to meet this requirement? "
For this reason I chose AE, because we don't need both Kinesis AND SQS for this solution. The two choices complement each other for order
processing: the order is stored in the DB, and the work item goes to the queue.
upvoted 3 times
A company is building a game system that needs to send unique events to separate leaderboard, matchmaking, and authentication services
concurrently. The company needs an AWS event-driven system that guarantees the order of the events.
Correct Answer: B
Amazon SNS FIFO topics ensure that messages are processed in the order in which they are received. This makes them an ideal choice for
situations where the order of events is important. Additionally, Amazon SNS allows messages to be sent to multiple endpoints, which
meets the requirement of sending events to separate services concurrently.
Amazon EventBridge event bus can also be used for sending events, but it does not guarantee the order of events.
Amazon Simple Notification Service (Amazon SNS) standard topics do not guarantee the order of messages.
Amazon Simple Queue Service (Amazon SQS) FIFO queues ensure that messages are processed in the order in which they are received,
but they are designed for message queuing, not publishing.
upvoted 7 times
Amazon SNS FIFO topics offer message ordering but do not support concurrent delivery to multiple subscribers, so this option is also
not a suitable choice.
Amazon SQS FIFO queues provide both ordering guarantees and support concurrent delivery to multiple subscribers. However, the use
of a queue adds additional latency, and the ordering guarantee may not be required in this scenario.
The best option for this use case is Amazon EventBridge event bus. It allows multiple targets to subscribe to an event bus and receive
the same event simultaneously, meeting the requirement of concurrent delivery to multiple subscribers. Additionally, EventBridge
provides ordering guarantees within an event bus, ensuring that events are processed in the order they are received.
upvoted 1 times
Option A, Amazon EventBridge event bus, is a serverless event bus service that makes it easy to build event-driven applications. While it
supports ordering of events, it does not provide guarantees on the order of delivery.
upvoted 3 times
A hospital is designing a new application that gathers symptoms from patients. The hospital has decided to use Amazon Simple Queue Service
(Amazon SQS) and Amazon Simple Notification Service (Amazon SNS) in the architecture.
A solutions architect is reviewing the infrastructure design. Data must be encrypted at rest and in transit. Only authorized personnel of the
hospital should be able to access the data.
Which combination of steps should the solutions architect take to meet these requirements? (Choose two.)
A. Turn on server-side encryption on the SQS components. Update the default key policy to restrict key usage to a set of authorized principals.
B. Turn on server-side encryption on the SNS components by using an AWS Key Management Service (AWS KMS) customer managed key.
Apply a key policy to restrict key usage to a set of authorized principals.
C. Turn on encryption on the SNS components. Update the default key policy to restrict key usage to a set of authorized principals. Set a
condition in the topic policy to allow only encrypted connections over TLS.
D. Turn on server-side encryption on the SQS components by using an AWS Key Management Service (AWS KMS) customer managed key.
Apply a key policy to restrict key usage to a set of authorized principals. Set a condition in the queue policy to allow only encrypted
connections over TLS.
E. Turn on server-side encryption on the SQS components by using an AWS Key Management Service (AWS KMS) customer managed key.
Apply an IAM policy to restrict key usage to a set of authorized principals. Set a condition in the queue policy to allow only encrypted
connections over TLS.
Correct Answer: CD
Important
All requests to topics with SSE enabled must use HTTPS and Signature Version 4.
For information about compatibility of other services with encrypted topics, see your service documentation.
Amazon SNS only supports symmetric encryption KMS keys. You cannot use any other type of KMS key to encrypt your service
resources. For help determining whether a KMS key is a symmetric encryption key, see Identifying asymmetric KMS keys.
upvoted 1 times
reference: https://docs.aws.amazon.com/AWSSimpleQueueService/latest/SQSDeveloperGuide/security_iam_service-with-iam.html
that excludes E
upvoted 1 times
imvb88 5 months, 2 weeks ago
Selected Answer: CD
Encryption in transit = use SSL/TLS -> rule out A, B
Encryption at rest = encryption on components -> keep C, D, E
KMS always need a key policy, IAM is optional -> E out
-> C, D left, one for SNS, one for SQS. TLS: checked, encryption on components: checked
upvoted 3 times
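The SQS half of option D sketched with boto3: server-side encryption with a customer managed KMS key plus a queue policy that denies non-TLS connections. The key ARN, account number, and queue name are placeholders; key usage would additionally be restricted through the key policy itself.

```python
import json
import boto3

sqs = boto3.client("sqs")

queue_name = "patient-symptoms"  # placeholder
tls_only_policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "DenyNonTLS",
            "Effect": "Deny",
            "Principal": "*",
            "Action": "sqs:*",
            "Resource": f"arn:aws:sqs:us-east-1:111122223333:{queue_name}",
            "Condition": {"Bool": {"aws:SecureTransport": "false"}},
        }
    ],
}

sqs.create_queue(
    QueueName=queue_name,
    Attributes={
        # Customer managed key (placeholder ARN) for encryption at rest.
        "KmsMasterKeyId": "arn:aws:kms:us-east-1:111122223333:key/EXAMPLE-KEY-ID",
        "Policy": json.dumps(tls_only_policy),
    },
)
```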
You can protect data in transit using Secure Sockets Layer (SSL) or client-side encryption. You can protect data at rest by requesting
Amazon SQS to encrypt your messages before saving them to disk in its data centers and then decrypt them when the messages are
received.
https://docs.aws.amazon.com/kms/latest/developerguide/key-policies.html
A key policy is a resource policy for an AWS KMS key. Key policies are the primary way to control access to KMS keys. Every KMS key
must have exactly one key policy. The statements in the key policy determine who has permission to use the KMS key and how they can
use it. You can also use IAM policies and grants to control access to the KMS key, but every KMS key must have a key policy.
upvoted 1 times
E: To restrict access to the data and allow only authorized personnel to access the data, we can apply an IAM policy to restrict key usage to
a set of authorized principals. We can also set a condition in the queue policy to allow only encrypted connections over TLS to encrypt data
in transit.
upvoted 2 times
A company runs a web application that is backed by Amazon RDS. A new database administrator caused data loss by accidentally editing
information in a database table. To help recover from this type of incident, the company wants the ability to restore the database to its state from
5 minutes before any change within the last 30 days.
Which feature should the solutions architect include in the design to meet this requirement?
A. Read replicas
B. Manual snapshots
C. Automated backups
D. Multi-AZ deployments
Correct Answer: C
https://aws.amazon.com/rds/features/backup/
upvoted 2 times
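The two relevant knobs for option C, sketched with placeholder identifiers: a 30-day backup retention period enables point-in-time restore to any moment within the window (typically up to roughly the last 5 minutes), and the restore lands on a new DB instance.

```python
from datetime import datetime, timezone

import boto3

rds = boto3.client("rds")

# Keep automated backups for 30 days so point-in-time restore covers the window.
rds.modify_db_instance(
    DBInstanceIdentifier="prod-db",          # placeholder
    BackupRetentionPeriod=30,
    ApplyImmediately=True,
)

# Restore to a new instance at a chosen moment before the accidental edit.
rds.restore_db_instance_to_point_in_time(
    SourceDBInstanceIdentifier="prod-db",
    TargetDBInstanceIdentifier="prod-db-restored",
    RestoreTime=datetime(2024, 1, 15, 9, 55, tzinfo=timezone.utc),  # example timestamp
)
```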
A company’s web application consists of an Amazon API Gateway API in front of an AWS Lambda function and an Amazon DynamoDB database.
The Lambda function handles the business logic, and the DynamoDB table hosts the data. The application uses Amazon Cognito user pools to
identify the individual users of the application. A solutions architect needs to update the application so that only users who have a subscription
can access premium content.
Which solution will meet this requirement with the LEAST operational overhead?
B. Set up AWS WAF on the API Gateway API. Create a rule to filter users who have a subscription.
C. Apply fine-grained IAM permissions to the premium content in the DynamoDB table.
D. Implement API usage plans and API keys to limit the access of users who do not have a subscription.
Correct Answer: C
A company is using Amazon Route 53 latency-based routing to route requests to its UDP-based application for users around the world. The
application is hosted on redundant servers in the company's on-premises data centers in the United States, Asia, and Europe. The company’s
compliance requirements state that the application must be hosted on premises. The company wants to improve the performance and availability
of the application.
A. Configure three Network Load Balancers (NLBs) in the three AWS Regions to address the on-premises endpoints. Create an accelerator by
using AWS Global Accelerator, and register the NLBs as its endpoints. Provide access to the application by using a CNAME that points to the
accelerator DNS.
B. Configure three Application Load Balancers (ALBs) in the three AWS Regions to address the on-premises endpoints. Create an accelerator
by using AWS Global Accelerator, and register the ALBs as its endpoints. Provide access to the application by using a CNAME that points to
the accelerator DNS.
C. Configure three Network Load Balancers (NLBs) in the three AWS Regions to address the on-premises endpoints. In Route 53, create a
latency-based record that points to the three NLBs, and use it as an origin for an Amazon CloudFront distribution. Provide access to the
application by using a CNAME that points to the CloudFront DNS.
D. Configure three Application Load Balancers (ALBs) in the three AWS Regions to address the on-premises endpoints. In Route 53, create a
latency-based record that points to the three ALBs, and use it as an origin for an Amazon CloudFront distribution. Provide access to the
application by using a CNAME that points to the CloudFront DNS.
Correct Answer: A
"A custom origin is an HTTP server, for example, a web server. The HTTP server can be an Amazon EC2 instance or an HTTP server that
you host somewhere else. "
upvoted 1 times
A solutions architect wants all new users to have specific complexity requirements and mandatory rotation periods for IAM user passwords.
B. Set a password policy for each IAM user in the AWS account.
D. Attach an Amazon CloudWatch rule to the Create_newuser event to set the password with the appropriate requirements.
Correct Answer: A
A company has migrated an application to Amazon EC2 Linux instances. One of these EC2 instances runs several 1-hour tasks on a schedule.
These tasks were written by different teams and have no common programming language. The company is concerned about performance and
scalability while these tasks run on a single instance. A solutions architect needs to implement a solution to resolve these concerns.
Which solution will meet these requirements with the LEAST operational overhead?
A. Use AWS Batch to run the tasks as jobs. Schedule the jobs by using Amazon EventBridge (Amazon CloudWatch Events).
B. Convert the EC2 instance to a container. Use AWS App Runner to create the container on demand to run the tasks as jobs.
C. Copy the tasks into AWS Lambda functions. Schedule the Lambda functions by using Amazon EventBridge (Amazon CloudWatch Events).
D. Create an Amazon Machine Image (AMI) of the EC2 instance that runs the tasks. Create an Auto Scaling group with the AMI to run multiple
copies of the instance.
Correct Answer: A
A company runs a public three-tier web application in a VPC. The application runs on Amazon EC2 instances across multiple Availability Zones.
The EC2 instances that run in private subnets need to communicate with a license server over the internet. The company needs a managed
solution that minimizes operational maintenance.
A. Provision a NAT instance in a public subnet. Modify each private subnet's route table with a default route that points to the NAT instance.
B. Provision a NAT instance in a private subnet. Modify each private subnet's route table with a default route that points to the NAT instance.
C. Provision a NAT gateway in a public subnet. Modify each private subnet's route table with a default route that points to the NAT gateway.
D. Provision a NAT gateway in a private subnet. Modify each private subnet's route table with a default route that points to the NAT gateway.
Correct Answer: C
As the company needs a managed solution that minimizes operational maintenance, a NAT gateway in a public subnet is the answer.
upvoted 5 times
NAT gateway provides automatic scaling, high availability, and fully managed service without admin overhead.
Placing the NAT gateway in a public subnet with proper routes allows private instances to use it for internet access.
Minimal operational maintenance compared to NAT instances.
upvoted 1 times
Placing a NAT gateway in a private subnet (D) would not allow internet access.
upvoted 1 times
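Option C, sketched with placeholder subnet and route table IDs: the NAT gateway sits in a public subnet with an Elastic IP, and each private subnet's route table gets a default route to it.

```python
import boto3

ec2 = boto3.client("ec2")

# A NAT gateway in a public subnet needs an Elastic IP.
eip = ec2.allocate_address(Domain="vpc")
natgw = ec2.create_nat_gateway(
    SubnetId="subnet-0publicEXAMPLE",        # public subnet (placeholder)
    AllocationId=eip["AllocationId"],
)["NatGateway"]

ec2.get_waiter("nat_gateway_available").wait(NatGatewayIds=[natgw["NatGatewayId"]])

# Default route from a private subnet's route table to the NAT gateway;
# repeat for each private subnet's route table.
ec2.create_route(
    RouteTableId="rtb-0privateEXAMPLE",      # private route table (placeholder)
    DestinationCidrBlock="0.0.0.0/0",
    NatGatewayId=natgw["NatGatewayId"],
)
```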
A company needs to create an Amazon Elastic Kubernetes Service (Amazon EKS) cluster to host a digital media streaming application. The EKS
cluster will use a managed node group that is backed by Amazon Elastic Block Store (Amazon EBS) volumes for storage. The company must
encrypt all data at rest by using a customer managed key that is stored in AWS Key Management Service (AWS KMS).
Which combination of actions will meet this requirement with the LEAST operational overhead? (Choose two.)
A. Use a Kubernetes plugin that uses the customer managed key to perform data encryption.
B. After creation of the EKS cluster, locate the EBS volumes. Enable encryption by using the customer managed key.
C. Enable EBS encryption by default in the AWS Region where the EKS cluster will be created. Select the customer managed key as the default
key.
D. Create the EKS cluster. Create an IAM role that has a policy that grants permission to the customer managed key. Associate the role with
the EKS cluster.
E. Store the customer managed key as a Kubernetes secret in the EKS cluster. Use the customer managed key to encrypt the EBS volumes.
Correct Answer: AE
C) Setting the KMS key as the regional EBS encryption default automatically encrypts new EKS node EBS volumes.
D) The IAM role grants the EKS nodes access to use the key for encryption/decryption operations.
upvoted 1 times
D - Provides key access permission just to the EKS cluster without changing broader IAM permissions
upvoted 1 times
Among B, C, and D: B and C are functionally similar, so the choice must be between B and C, and D is fixed.
Between B and C: C is out because it sets the default for all EBS volumes in the Region, which is more than required and possibly wrong: what if
other applications' EBS volumes in the Region have different requirements?
upvoted 4 times
D. Create the EKS cluster. Create an IAM role that has a policy that grants permission to the customer managed key. Associate the role with
the EKS cluster.
Explanation:
Option B is the simplest and most direct way to enable encryption for the EBS volumes associated with the EKS cluster. After the EKS
cluster is created, you can manually locate the EBS volumes and enable encryption using the customer managed key through the AWS
Management Console, AWS CLI, or SDKs.
Option D involves creating an IAM role with a policy that grants permission to the customer managed key, and then associating that role
with the EKS cluster. This allows the EKS cluster to have the necessary permissions to access the customer managed key for encrypting
and decrypting data on the EBS volumes. This approach is more automated and can be easily managed through IAM, which provides
centralized control and reduces operational overhead.
upvoted 1 times
Option D is incorrect because it suggests creating an IAM role and associating it with the EKS cluster, which is not necessary for this
scenario.
Option E is incorrect because it suggests storing the customer managed key as a Kubernetes secret, which is not the best practice for
managing sensitive data such as encryption keys.
upvoted 1 times
Then your EKS cluster would not be able to access encrypted EBS volumes.
upvoted 1 times
Options C affects all EBS volumes in the region which is absolutely not necessary here.
upvoted 4 times
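For reference, turning on EBS encryption by default with a customer managed key (option C) is two Region-scoped API calls; as several commenters point out, this default then applies to every new EBS volume created in that Region, not just the EKS node volumes. The key ARN is a placeholder.

```python
import boto3

# These settings are per-Region; use the Region where the EKS managed node group runs.
ec2 = boto3.client("ec2", region_name="us-east-1")

ec2.enable_ebs_encryption_by_default()
ec2.modify_ebs_default_kms_key_id(
    KmsKeyId="arn:aws:kms:us-east-1:111122223333:key/EXAMPLE-KEY-ID"  # customer managed key
)
```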
A company wants to migrate an Oracle database to AWS. The database consists of a single table that contains millions of geographic information
systems (GIS) images that are high resolution and are identified by a geographic code.
When a natural disaster occurs, tens of thousands of images get updated every few minutes. Each geographic code has a single image or row that
is associated with it. The company wants a solution that is highly available and scalable during such events.
A. Store the images and geographic codes in a database table. Use Oracle running on an Amazon RDS Multi-AZ DB instance.
B. Store the images in Amazon S3 buckets. Use Amazon DynamoDB with the geographic code as the key and the image S3 URL as the value.
C. Store the images and geographic codes in an Amazon DynamoDB table. Configure DynamoDB Accelerator (DAX) during times of high load.
D. Store the images in Amazon S3 buckets. Store geographic codes and image S3 URLs in a database table. Use Oracle running on an Amazon
RDS Multi-AZ DB instance.
Correct Answer: B
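The pattern in option B, roughly: the geographic code is the DynamoDB partition key and the item stores only the S3 URL of the high-resolution image, so each update during a disaster overwrites a single small item. Table, bucket, and code values are placeholders.

```python
import boto3

dynamodb = boto3.resource("dynamodb")
table = dynamodb.Table("gis-images")  # assumed existing table with partition key "geo_code"

# One item per geographic code; updates simply overwrite it.
table.put_item(
    Item={
        "geo_code": "35TPF1234567890",  # example geographic code
        "image_url": "s3://example-gis-images/35TPF1234567890.tif",
        "updated_at": "2024-01-15T10:05:00Z",
    }
)

item = table.get_item(Key={"geo_code": "35TPF1234567890"})["Item"]
print(item["image_url"])
```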
A company has an application that collects data from IoT sensors on automobiles. The data is streamed and stored in Amazon S3 through
Amazon Kinesis Data Firehose. The data produces trillions of S3 objects each year. Each morning, the company uses the data from the previous
30 days to retrain a suite of machine learning (ML) models.
Four times each year, the company uses the data from the previous 12 months to perform analysis and train other ML models. The data must be
available with minimal delay for up to 1 year. After 1 year, the data must be retained for archival purposes.
A. Use the S3 Intelligent-Tiering storage class. Create an S3 Lifecycle policy to transition objects to S3 Glacier Deep Archive after 1 year.
B. Use the S3 Intelligent-Tiering storage class. Configure S3 Intelligent-Tiering to automatically move objects to S3 Glacier Deep Archive after
1 year.
C. Use the S3 Standard-Infrequent Access (S3 Standard-IA) storage class. Create an S3 Lifecycle policy to transition objects to S3 Glacier
Deep Archive after 1 year.
D. Use the S3 Standard storage class. Create an S3 Lifecycle policy to transition objects to S3 Standard-Infrequent Access (S3 Standard-IA)
after 30 days, and then to S3 Glacier Deep Archive after 1 year.
Correct Answer: D
A company is running several business applications in three separate VPCs within the us-east-1 Region. The applications must be able to
communicate between VPCs. The applications also must be able to consistently send hundreds of gigabytes of data each day to a latency-
sensitive application that runs in a single on-premises data center.
A solutions architect needs to design a network connectivity solution that maximizes cost-effectiveness.
A. Configure three AWS Site-to-Site VPN connections from the data center to AWS. Establish connectivity by configuring one VPN connection
for each VPC.
B. Launch a third-party virtual network appliance in each VPC. Establish an IPsec VPN tunnel between the data center and each virtual
appliance.
C. Set up three AWS Direct Connect connections from the data center to a Direct Connect gateway in us-east-1. Establish connectivity by
configuring each VPC to use one of the Direct Connect connections.
D. Set up one AWS Direct Connect connection from the data center to AWS. Create a transit gateway, and attach each VPC to the transit
gateway. Establish connectivity between the Direct Connect connection and the transit gateway.
Correct Answer: D
An ecommerce company is building a distributed application that involves several serverless functions and AWS services to complete order-
processing tasks. These tasks require manual approvals as part of the workflow. A solutions architect needs to design an architecture for the
order-processing application. The solution must be able to combine multiple AWS Lambda functions into responsive serverless applications. The
solution also must orchestrate data and services that run on Amazon EC2 instances, containers, or on-premises servers.
Which solution will meet these requirements with the LEAST operational overhead?
C. Use Amazon Simple Queue Service (Amazon SQS) to build the application.
D. Use AWS Lambda functions and Amazon EventBridge events to build the application.
Correct Answer: B
Using Step Functions provides a fully managed orchestration service with minimal operational overhead.
upvoted 3 times
Reference: https://aws.amazon.com/step-
functions/#:~:text=AWS%20Step%20Functions%20is%20a,machine%20learning%20(ML)%20pipelines.
upvoted 2 times
A company has launched an Amazon RDS for MySQL DB instance. Most of the connections to the database come from serverless applications.
Application traffic to the database changes significantly at random intervals. At times of high demand, users report that their applications
experience database connection rejection errors.
Which solution will resolve this issue with the LEAST operational overhead?
A. Create a proxy in RDS Proxy. Configure the users’ applications to use the DB instance through RDS Proxy.
B. Deploy Amazon ElastiCache for Memcached between the users’ applications and the DB instance.
C. Migrate the DB instance to a different instance class that has higher I/O capacity. Configure the users’ applications to use the new DB
instance.
D. Configure Multi-AZ for the DB instance. Configure the users’ applications to switch between the DB instances.
Correct Answer: A
Using RDS Proxy requires minimal operational overhead - just create the proxy and reconfigure applications to use it. No code changes
needed.
upvoted 2 times
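A hedged sketch of option A: the proxy pools and reuses connections so bursty serverless clients stop exhausting the MySQL connection limit. ARNs, subnet IDs, and names are placeholders.

```python
import boto3

rds = boto3.client("rds")

rds.create_db_proxy(
    DBProxyName="app-proxy",
    EngineFamily="MYSQL",
    Auth=[{
        "AuthScheme": "SECRETS",
        "SecretArn": "arn:aws:secretsmanager:us-east-1:111122223333:secret:db-creds-EXAMPLE",
        "IAMAuth": "DISABLED",
    }],
    RoleArn="arn:aws:iam::111122223333:role/rds-proxy-secrets-role",  # placeholder
    VpcSubnetIds=["subnet-0aEXAMPLE", "subnet-0bEXAMPLE"],            # placeholders
    RequireTLS=True,
)

# Point the proxy at the existing DB instance; applications then connect
# to the proxy endpoint instead of the instance endpoint.
rds.register_db_proxy_targets(
    DBProxyName="app-proxy",
    DBInstanceIdentifiers=["app-mysql"],
)
```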
A company recently deployed a new auditing system to centralize information about operating system versions, patching, and installed software
for Amazon EC2 instances. A solutions architect must ensure all instances provisioned through EC2 Auto Scaling groups successfully send
reports to the auditing system as soon as they are launched and terminated.
A. Use a scheduled AWS Lambda function and run a script remotely on all EC2 instances to send data to the audit system.
B. Use EC2 Auto Scaling lifecycle hooks to run a custom script to send data to the audit system when instances are launched and terminated.
C. Use an EC2 Auto Scaling launch configuration to run a custom script through user data to send data to the audit system when instances are
launched and terminated.
D. Run a custom script on the instance operating system to send data to the audit system. Configure the script to be invoked by the EC2 Auto
Scaling group when the instance starts and is terminated.
Correct Answer: B
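Option B in rough boto3 form: one hook for launch and one for termination. The notification target (for example an SQS queue or SNS topic that the reporting script consumes) and the role ARN are placeholders.

```python
import boto3

autoscaling = boto3.client("autoscaling")

for hook_name, transition in [
    ("report-on-launch", "autoscaling:EC2_INSTANCE_LAUNCHING"),
    ("report-on-terminate", "autoscaling:EC2_INSTANCE_TERMINATING"),
]:
    autoscaling.put_lifecycle_hook(
        AutoScalingGroupName="web-asg",                                            # placeholder
        LifecycleHookName=hook_name,
        LifecycleTransition=transition,
        NotificationTargetARN="arn:aws:sqs:us-east-1:111122223333:audit-events",   # placeholder
        RoleARN="arn:aws:iam::111122223333:role/asg-lifecycle-role",               # placeholder
        HeartbeatTimeout=300,
        DefaultResult="CONTINUE",  # proceed even if the audit script never responds
    )
```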
A company is developing a real-time multiplayer game that uses UDP for communications between the client and servers in an Auto Scaling group.
Spikes in demand are anticipated during the day, so the game server platform must adapt accordingly. Developers want to store gamer scores and
other non-relational data in a database solution that will scale without intervention.
A. Use Amazon Route 53 for traffic distribution and Amazon Aurora Serverless for data storage.
B. Use a Network Load Balancer for traffic distribution and Amazon DynamoDB on-demand for data storage.
C. Use a Network Load Balancer for traffic distribution and Amazon Aurora Global Database for data storage.
D. Use an Application Load Balancer for traffic distribution and Amazon DynamoDB global tables for data storage.
Correct Answer: B
Network Load Balancer efficiently distributes UDP gaming traffic to the Auto Scaling group of game servers.
DynamoDB On-Demand mode provides auto-scaling non-relational data storage for gamer scores and other game data. DynamoDB is
optimized for fast, high-scale access patterns seen in gaming.
Together, the Network Load Balancer and DynamoDB On-Demand provide an architecture that can smoothly scale up and down to match
spikes in gaming demand.
upvoted 2 times
https://www.examtopics.com/discussions/amazon/view/29756-exam-aws-certified-solutions-architect-associate-saa-c02/
upvoted 1 times
A company hosts a frontend application that uses an Amazon API Gateway API backend that is integrated with AWS Lambda. When the API
receives requests, the Lambda function loads many libraries. Then the Lambda function connects to an Amazon RDS database, processes the
data, and returns the data to the frontend application. The company wants to ensure that response latency is as low as possible for all its users
with the fewest number of changes to the company's operations.
A. Establish a connection between the frontend application and the database to make queries faster by bypassing the API.
B. Configure provisioned concurrency for the Lambda function that handles the requests.
C. Cache the results of the queries in Amazon S3 for faster retrieval of similar datasets.
D. Increase the size of the database to increase the number of connections Lambda can establish at one time.
Correct Answer: C
Configuring provisioned concurrency would get rid of the "cold start" of the function, therefore speeding up the process.
upvoted 10 times
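Option B roughly translates to a single API call against a published version or alias (the names below are placeholders); pre-initialized execution environments avoid the cold starts that reload the heavy libraries.

```python
import boto3

lambda_client = boto3.client("lambda")

# Provisioned concurrency must target a published version or an alias, not $LATEST.
lambda_client.put_provisioned_concurrency_config(
    FunctionName="order-api-handler",        # placeholder
    Qualifier="live",                        # placeholder alias
    ProvisionedConcurrentExecutions=50,      # sized from expected peak traffic
)
```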
A company is migrating its on-premises workload to the AWS Cloud. The company already uses several Amazon EC2 instances and Amazon RDS
DB instances. The company wants a solution that automatically starts and stops the EC2 instances and DB instances outside of business hours.
The solution must minimize cost and infrastructure maintenance.
A. Scale the EC2 instances by using elastic resize. Scale the DB instances to zero outside of business hours.
B. Explore AWS Marketplace for partner solutions that will automatically start and stop the EC2 instances and DB instances on a schedule.
C. Launch another EC2 instance. Configure a crontab schedule to run shell scripts that will start and stop the existing EC2 instances and DB
instances on a schedule.
D. Create an AWS Lambda function that will start and stop the EC2 instances and DB instances. Configure Amazon EventBridge to invoke the
Lambda function on a schedule.
Correct Answer: A
Option A, scaling EC2 instances by using elastic resize and scaling DB instances to zero outside of business hours, is not feasible as DB
instances cannot be scaled to zero.
Option B, exploring AWS Marketplace for partner solutions, may be an option, but it may not be the most efficient solution and could
potentially add additional costs.
Option C, launching another EC2 instance and configuring a crontab schedule to run shell scripts that will start and stop the existing EC2
instances and DB instances on a schedule, adds unnecessary infrastructure and maintenance.
upvoted 10 times
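A sketch of option D under assumed instance identifiers: an EventBridge schedule (for example cron(0 20 ? * MON-FRI *)) invokes a small Lambda function that stops, or with a mirror-image rule starts, the EC2 and RDS instances. Passing {"action": "stop"} as the rule input is an assumption of this sketch.

```python
import boto3

EC2_INSTANCE_IDS = ["i-0123456789abcdef0"]   # placeholders
RDS_INSTANCE_IDS = ["app-db"]

ec2 = boto3.client("ec2")
rds = boto3.client("rds")

def handler(event, context):
    # The EventBridge rule is assumed to pass {"action": "stop"} or {"action": "start"}.
    action = event.get("action", "stop")
    if action == "stop":
        ec2.stop_instances(InstanceIds=EC2_INSTANCE_IDS)
        for db in RDS_INSTANCE_IDS:
            rds.stop_db_instance(DBInstanceIdentifier=db)
    else:
        ec2.start_instances(InstanceIds=EC2_INSTANCE_IDS)
        for db in RDS_INSTANCE_IDS:
            rds.start_db_instance(DBInstanceIdentifier=db)
```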
A company hosts a three-tier web application that includes a PostgreSQL database. The database stores the metadata from documents. The
company searches the metadata for key terms to retrieve documents that the company reviews in a report each month. The documents are stored
in Amazon S3. The documents are usually written only once, but they are updated frequently.
The reporting process takes a few hours with the use of relational queries. The reporting process must not prevent any document modifications or
the addition of new documents. A solutions architect needs to implement a solution to speed up the reporting process.
Which solution will meet these requirements with the LEAST amount of change to the application code?
A. Set up a new Amazon DocumentDB (with MongoDB compatibility) cluster that includes a read replica. Scale the read replica to generate the
reports.
B. Set up a new Amazon Aurora PostgreSQL DB cluster that includes an Aurora Replica. Issue queries to the Aurora Replica to generate the
reports.
C. Set up a new Amazon RDS for PostgreSQL Multi-AZ DB instance. Configure the reporting module to query the secondary RDS node so that
the reporting module does not affect the primary node.
D. Set up a new Amazon DynamoDB table to store the documents. Use a fixed write capacity to support new document entries. Automatically
scale the read capacity to support the reports.
Correct Answer: D
Aurora PostgreSQL provides native PostgreSQL compatibility, so minimal code changes would be required.
Using an Aurora Replica separates the reporting workload from the main workload, preventing any slowdown of document
updates/inserts.
Aurora can auto-scale read replicas to handle the reporting load.
This allows leveraging the existing PostgreSQL database without major changes. DynamoDB would require more significant rewrite of
data access code.
RDS Multi-AZ alone would not fully separate the workloads, as the secondary is for HA/failover more than scaling read workloads.
upvoted 1 times
Aurora is a relational database; it supports PostgreSQL, and with the help of read replicas we can send the reporting process that takes
several hours to the replica, therefore not affecting the primary node, which can handle new writes or document modifications.
upvoted 1 times
A company has a three-tier application on AWS that ingests sensor data from its users’ devices. The traffic flows through a Network Load Balancer
(NLB), then to Amazon EC2 instances for the web tier, and finally to EC2 instances for the application tier. The application tier makes calls to a
database.
What should a solutions architect do to improve the security of the data in transit?
C. Change the load balancer to an Application Load Balancer (ALB). Enable AWS WAF on the ALB.
D. Encrypt the Amazon Elastic Block Store (Amazon EBS) volume on the EC2 instances by using AWS Key Management Service (AWS KMS).
Correct Answer: A
https://docs.aws.amazon.com/elasticloadbalancing/latest/network/create-tls-listener.html
https://exampleloadbalancer.com/nlbtls_demo.html
upvoted 12 times
You can also change the load balancer to an Application Load Balancer (ALB) and enable AWS WAF on it. AWS WAF is a web application
firewall that helps protect your web applications from common web exploits that could affect application availability, compromise security,
or consume excessive resources.
A and C would both help, but the requirement is to improve the security of the data in transit, so SSL/TLS certificates (a TLS listener) are needed.
upvoted 1 times
A company is planning to migrate a commercial off-the-shelf application from its on-premises data center to AWS. The software has a software
licensing model using sockets and cores with predictable capacity and uptime requirements. The company wants to use its existing licenses,
which were purchased earlier this year.
Correct Answer: A
Dedicated Reserved Instances (DRIs) are the most cost-effective option for workloads that have predictable capacity and uptime
requirements. DRIs offer a significant discount over On-Demand Instances, and they can be used to lock in a price for a period of time.
In this case, the company has predictable capacity and uptime requirements because the software has a software licensing model using
sockets and cores. The company also wants to use its existing licenses, which were purchased earlier this year. Therefore, DRIs are the
most cost-effective option.
upvoted 2 times
I would go with "A" only if the question clearly stated that the COTS application has some strong dependency on physical hardware.
upvoted 1 times
Amazon EC2 Dedicated Hosts allow you to use your eligible software licenses from vendors such as Microsoft and Oracle on Amazon
EC2, so that you get the flexibility and cost effectiveness of using your own licenses, but with the resiliency, simplicity and elasticity of
AWS.
upvoted 1 times
https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/dedicated-hosts-overview.html
upvoted 3 times
A company runs an application on Amazon EC2 Linux instances across multiple Availability Zones. The application needs a storage layer that is
highly available and Portable Operating System Interface (POSIX)-compliant. The storage layer must provide maximum data durability and must be
shareable across the EC2 instances. The data in the storage layer will be accessed frequently for the first 30 days and will be accessed
infrequently after that time.
A. Use the Amazon S3 Standard storage class. Create an S3 Lifecycle policy to move infrequently accessed data to S3 Glacier.
B. Use the Amazon S3 Standard storage class. Create an S3 Lifecycle policy to move infrequently accessed data to S3 Standard-Infrequent
Access (S3 Standard-IA).
C. Use the Amazon Elastic File System (Amazon EFS) Standard storage class. Create a lifecycle management policy to move infrequently
accessed data to EFS Standard-Infrequent Access (EFS Standard-IA).
D. Use the Amazon Elastic File System (Amazon EFS) One Zone storage class. Create a lifecycle management policy to move infrequently
accessed data to EFS One Zone-Infrequent Access (EFS One Zone-IA).
Correct Answer: B
https://aws.amazon.com/efs/features/infrequent-access/
upvoted 1 times
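Option C's lifecycle policy is a single call once the file system exists (the file system ID is a placeholder); EFS Standard stores data redundantly across multiple Availability Zones and presents a POSIX-compliant file system to the Linux instances.

```python
import boto3

efs = boto3.client("efs")

# Move files that have not been accessed for 30 days to EFS Standard-Infrequent Access.
efs.put_lifecycle_configuration(
    FileSystemId="fs-0123456789abcdef0",  # placeholder
    LifecyclePolicies=[{"TransitionToIA": "AFTER_30_DAYS"}],
)
```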
A solutions architect is creating a new VPC design. There are two public subnets for the load balancer, two private subnets for web servers, and
two private subnets for MySQL. The web servers use only HTTPS. The solutions architect has already created a security group for the load
balancer allowing port 443 from 0.0.0.0/0. Company policy requires that each resource has the least access required to still be able to perform its
tasks.
Which additional configuration strategy should the solutions architect use to meet these requirements?
A. Create a security group for the web servers and allow port 443 from 0.0.0.0/0. Create a security group for the MySQL servers and allow port
3306 from the web servers security group.
B. Create a network ACL for the web servers and allow port 443 from 0.0.0.0/0. Create a network ACL for the MySQL servers and allow port
3306 from the web servers security group.
C. Create a security group for the web servers and allow port 443 from the load balancer. Create a security group for the MySQL servers and
allow port 3306 from the web servers security group.
D. Create a network ACL for the web servers and allow port 443 from the load balancer. Create a network ACL for the MySQL servers and allow
port 3306 from the web servers security group.
Correct Answer: C
This option follows the principle of least privilege by only allowing necessary access:
Web server SG allows port 443 from load balancer SG (not open to world)
MySQL SG allows port 3306 only from web server SG
upvoted 2 times
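Option C expressed with boto3; the security group IDs are placeholders. The point is that each tier's rule references the upstream tier's security group instead of 0.0.0.0/0.

```python
import boto3

ec2 = boto3.client("ec2")

ALB_SG = "sg-0albEXAMPLE"      # already allows 443 from 0.0.0.0/0 (placeholder)
WEB_SG = "sg-0webEXAMPLE"      # placeholder
DB_SG = "sg-0mysqlEXAMPLE"     # placeholder

# Web servers: HTTPS only from the load balancer's security group.
ec2.authorize_security_group_ingress(
    GroupId=WEB_SG,
    IpPermissions=[{
        "IpProtocol": "tcp", "FromPort": 443, "ToPort": 443,
        "UserIdGroupPairs": [{"GroupId": ALB_SG}],
    }],
)

# MySQL: port 3306 only from the web servers' security group.
ec2.authorize_security_group_ingress(
    GroupId=DB_SG,
    IpPermissions=[{
        "IpProtocol": "tcp", "FromPort": 3306, "ToPort": 3306,
        "UserIdGroupPairs": [{"GroupId": WEB_SG}],
    }],
)
```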
An ecommerce company is running a multi-tier application on AWS. The front-end and backend tiers both run on Amazon EC2, and the database
runs on Amazon RDS for MySQL. The backend tier communicates with the RDS instance. There are frequent calls to return identical datasets from
the database that are causing performance slowdowns.
D. Implement Amazon Kinesis Data Firehose to stream the calls to the database.
Correct Answer: B
The key issue is repeated calls to return identical datasets from the RDS database causing performance slowdowns.
Implementing Amazon ElastiCache for Redis or Memcached would allow these repeated query results to be cached, improving backend
performance by reducing load on the database.
upvoted 2 times
A new employee has joined a company as a deployment engineer. The deployment engineer will be using AWS CloudFormation templates to create
multiple AWS resources. A solutions architect wants the deployment engineer to perform job activities while following the principle of least
privilege.
Which combination of actions should the solutions architect take to accomplish this goal? (Choose two.)
A. Have the deployment engineer use AWS account root user credentials for performing AWS CloudFormation stack operations.
B. Create a new IAM user for the deployment engineer and add the IAM user to a group that has the PowerUsers IAM policy attached.
C. Create a new IAM user for the deployment engineer and add the IAM user to a group that has the AdministratorAccess IAM policy attached.
D. Create a new IAM user for the deployment engineer and add the IAM user to a group that has an IAM policy that allows AWS
CloudFormation actions only.
E. Create an IAM role for the deployment engineer to explicitly define the permissions specific to the AWS CloudFormation stack and launch
stacks using that IAM role.
Correct Answer: DE
D) Create a new IAM user for the deployment engineer and add the IAM user to a group that has an IAM policy that allows AWS
CloudFormation actions only.
E) Create an IAM role for the deployment engineer to explicitly define the permissions specific to the AWS CloudFormation stack and
launch stacks using that IAM role.
The principle of least privilege states that users should only be given the minimal permissions necessary to perform their job function.
upvoted 1 times
Option E, creating an IAM role with specific permissions for AWS CloudFormation stack operations and allowing the deployment engineer
to assume that role, is another valid approach. By using an IAM role, the deployment engineer can assume the role when necessary,
granting them temporary permissions to perform CloudFormation actions. This provides a level of separation and limits the permissions
granted to the engineer to only the required CloudFormation operations.
upvoted 1 times
A company is deploying a two-tier web application in a VPC. The web tier is using an Amazon EC2 Auto Scaling group with public subnets that
span multiple Availability Zones. The database tier consists of an Amazon RDS for MySQL DB instance in separate private subnets. The web tier
requires access to the database to retrieve product information.
The web application is not working as intended. The web application reports that it cannot connect to the database. The database is confirmed to
be up and running. All configurations for the network ACLs, security groups, and route tables are still in their default states.
A. Add an explicit rule to the private subnet’s network ACL to allow traffic from the web tier’s EC2 instances.
B. Add a route in the VPC route table to allow traffic between the web tier’s EC2 instances and the database tier.
C. Deploy the web tier's EC2 instances and the database tier’s RDS instance into two separate VPCs, and configure VPC peering.
D. Add an inbound rule to the security group of the database tier’s RDS instance to allow traffic from the web tiers security group.
Correct Answer: D
A company has a large dataset for its online advertising business stored in an Amazon RDS for MySQL DB instance in a single Availability Zone.
The company wants business reporting queries to run without impacting the write operations to the production DB instance.
B. Scale out the DB instance horizontally by placing it behind an Elastic Load Balancer.
C. Scale up the DB instance to a larger instance type to handle write operations and queries.
D. Deploy the DB instance in multiple Availability Zones to process the business reporting queries.
Correct Answer: D
RDS read replicas allow read-only copies of the production DB instance to be created
Queries to the read replica don't affect the source DB instance performance
This isolates reporting queries from production traffic and write operations
So using RDS read replicas is the best way to meet the requirements of running reporting queries without impacting production write
operations.
upvoted 1 times
A company hosts a three-tier ecommerce application on a fleet of Amazon EC2 instances. The instances run in an Auto Scaling group behind an
Application Load Balancer (ALB). All ecommerce data is stored in an Amazon RDS for MariaDB Multi-AZ DB instance.
The company wants to optimize customer session management during transactions. The application must store session data durably.
D. Deploy an Amazon ElastiCache for Redis cluster to store customer session information.
E. Use AWS Systems Manager Application Manager in the application to manage user session information.
Correct Answer: BD
https://aws.amazon.com/caching/session-management/
upvoted 17 times
https://aws.amazon.com/caching/session-management/
upvoted 2 times
A company needs a backup strategy for its three-tier stateless web application. The web application runs on Amazon EC2 instances in an Auto
Scaling group with a dynamic scaling policy that is configured to respond to scaling events. The database tier runs on Amazon RDS for
PostgreSQL. The web application does not require temporary local storage on the EC2 instances. The company’s recovery point objective (RPO) is
2 hours.
The backup strategy must maximize scalability and optimize resource utilization for this environment.
A. Take snapshots of Amazon Elastic Block Store (Amazon EBS) volumes of the EC2 instances and database every 2 hours to meet the RPO.
B. Configure a snapshot lifecycle policy to take Amazon Elastic Block Store (Amazon EBS) snapshots. Enable automated backups in Amazon
RDS to meet the RPO.
C. Retain the latest Amazon Machine Images (AMIs) of the web and application tiers. Enable automated backups in Amazon RDS and use
point-in-time recovery to meet the RPO.
D. Take snapshots of Amazon Elastic Block Store (Amazon EBS) volumes of the EC2 instances every 2 hours. Enable automated backups in
Amazon RDS and use point-in-time recovery to meet the RPO.
Correct Answer: C
Snapshots of EBS volumes would be necessary if you want to back up the entire EC2 instance, including any applications and temporary
data stored on the EBS volumes attached to the instances. When you take a snapshot of an EBS volume, it backs up the entire contents of
that volume. This ensures that you can restore the entire EC2 instance to a specific point in time more quickly. However, if there is no
temporary data stored on the EBS volumes, then snapshots of EBS volumes are not necessary.
upvoted 19 times
This uses native, automated AWS backup features that require minimal ongoing management:
- AMI automated backups provide point-in-time recovery for the stateless app tier.
- RDS automated backups provide point-in-time recovery for the database.
upvoted 2 times
neosis91 5 months, 2 weeks ago
Selected Answer: B
BBBBBBBBBB
upvoted 1 times
With this solution, a snapshot lifecycle policy can be created to take Amazon Elastic Block Store (Amazon EBS) snapshots periodically,
which will ensure that EC2 instances can be restored in the event of an outage. Additionally, automated backups can be enabled in
Amazon RDS for PostgreSQL to take frequent backups of the database tier. This will help to minimize the RPO to 2 hours.
Taking snapshots of Amazon EBS volumes of the EC2 instances and database every 2 hours (Option A) may not be cost-effective and
efficient, as this approach would require taking regular backups of all the instances and volumes, regardless of whether any changes have
occurred or not. Retaining the latest Amazon Machine Images (AMIs) of the web and application tiers (Option C) would provide only an
image backup and not a data backup, which is required for the database tier. Taking snapshots of Amazon EBS volumes of the EC2
instances every 2 hours and enabling automated backups in Amazon RDS and using point-in-time recovery (Option D) would result in
higher costs and may not be necessary to meet the RPO requirement of 2 hours.
upvoted 4 times
The best solution is to configure a snapshot lifecycle policy to take Amazon Elastic Block Store (Amazon EBS) snapshots, and enable
automated backups in Amazon RDS to meet the RPO. An RPO of 2 hours means that the company needs to ensure that the backup is
taken every 2 hours to minimize data loss in case of a disaster. Using a snapshot lifecycle policy to take Amazon EBS snapshots will ensure
that the web and application tier can be restored quickly and efficiently in case of a disaster. Additionally, enabling automated backups in
Amazon RDS will ensure that the database tier can be restored quickly and efficiently in case of a disaster. This solution maximizes
scalability and optimizes resource utilization because it uses automated backup solutions built into AWS.
upvoted 3 times
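Whichever option is preferred for the stateless tiers, the database half of the 2-hour RPO comes down to RDS automated backups plus point-in-time recovery. A minimal boto3 sketch with placeholder identifiers:

import boto3

rds = boto3.client("rds")

# A non-zero backup retention period turns on automated backups and point-in-time recovery.
rds.modify_db_instance(
    DBInstanceIdentifier="app-postgres",  # placeholder
    BackupRetentionPeriod=7,
    ApplyImmediately=True,
)

# Recover to the latest restorable time (well within a 2-hour RPO) as a new instance.
rds.restore_db_instance_to_point_in_time(
    SourceDBInstanceIdentifier="app-postgres",
    TargetDBInstanceIdentifier="app-postgres-restored",
    UseLatestRestorableTime=True,
)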
Question #392 Topic 1
A company wants to deploy a new public web application on AWS. The application includes a web server tier that uses Amazon EC2 instances.
The application also includes a database tier that uses an Amazon RDS for MySQL DB instance.
The application must be secure and accessible for global customers that have dynamic IP addresses.
How should a solutions architect configure the security groups to meet these requirements?
A. Configure the security group for the web servers to allow inbound traffic on port 443 from 0.0.0.0/0. Configure the security group for the DB
instance to allow inbound traffic on port 3306 from the security group of the web servers.
B. Configure the security group for the web servers to allow inbound traffic on port 443 from the IP addresses of the customers. Configure the
security group for the DB instance to allow inbound traffic on port 3306 from the security group of the web servers.
C. Configure the security group for the web servers to allow inbound traffic on port 443 from the IP addresses of the customers. Configure the
security group for the DB instance to allow inbound traffic on port 3306 from the IP addresses of the customers.
D. Configure the security group for the web servers to allow inbound traffic on port 443 from 0.0.0.0/0. Configure the security group for the DB
instance to allow inbound traffic on port 3306 from 0.0.0.0/0.
Correct Answer: A
A payment processing company records all voice communication with its customers and stores the audio files in an Amazon S3 bucket. The
company needs to capture the text from the audio files. The company must remove from the text any personally identifiable information (PII) that
belongs to customers.
A. Process the audio files by using Amazon Kinesis Video Streams. Use an AWS Lambda function to scan for known PII patterns.
B. When an audio file is uploaded to the S3 bucket, invoke an AWS Lambda function to start an Amazon Textract task to analyze the call
recordings.
C. Configure an Amazon Transcribe transcription job with PII redaction turned on. When an audio file is uploaded to the S3 bucket, invoke an
AWS Lambda function to start the transcription job. Store the output in a separate S3 bucket.
D. Create an Amazon Connect contact flow that ingests the audio files with transcription turned on. Embed an AWS Lambda function to scan
for known PII patterns. Use Amazon EventBridge to start the contact flow when an audio file is uploaded to the S3 bucket.
Correct Answer: C
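A hedged sketch of the transcription call that the Lambda function in option C could make; the bucket names and job name are placeholders, not values from the question:

import boto3

transcribe = boto3.client("transcribe")

# Transcribe the uploaded call recording and redact PII in the output transcript.
transcribe.start_transcription_job(
    TranscriptionJobName="call-recording-12345",
    Media={"MediaFileUri": "s3://call-recordings-bucket/audio/recording-12345.wav"},
    MediaFormat="wav",
    LanguageCode="en-US",
    OutputBucketName="redacted-transcripts-bucket",  # separate output bucket
    ContentRedaction={
        "RedactionType": "PII",
        "RedactionOutput": "redacted",  # keep only the redacted transcript
    },
)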
A company is running a multi-tier ecommerce web application in the AWS Cloud. The application runs on Amazon EC2 instances with an Amazon
RDS for MySQL Multi-AZ DB instance. Amazon RDS is configured with the latest generation DB instance with 2,000 GB of storage in a General
Purpose SSD (gp3) Amazon Elastic Block Store (Amazon EBS) volume. The database performance affects the application during periods of high
demand.
A database administrator analyzes the logs in Amazon CloudWatch Logs and discovers that the application performance always degrades when
the number of read and write IOPS is higher than 20,000.
D. Replace the 2,000 GB gp3 volume with two 1,000 GB gp3 volumes.
Correct Answer: C
‘Amazon RDS provides three storage types: General Purpose SSD (also known as gp2 and gp3), Provisioned IOPS SSD (also known as
io1), and magnetic (also known as standard). They differ in performance characteristics and price, which means that you can tailor your
storage performance and cost to the needs of your database workload.’
https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/CHAP_Storage.html
upvoted 1 times
gp3 volume maximums:
16,000 IOPS
1,000 MiB/s of throughput
16-TiB volume size
upvoted 1 times
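The text of option C is not reproduced above, but the quoted documentation and the discussion point at moving the volume to Provisioned IOPS storage. A sketch of what that change might look like in boto3, with a placeholder instance name and an illustrative IOPS value above the observed 20,000 ceiling:

import boto3

rds = boto3.client("rds")

# Switch the DB instance to Provisioned IOPS storage sized above the 20,000 IOPS bottleneck.
rds.modify_db_instance(
    DBInstanceIdentifier="ecommerce-mysql",  # placeholder
    StorageType="io1",
    Iops=24000,                              # illustrative value
    AllocatedStorage=2000,
    ApplyImmediately=True,
)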
GalileoEC2 6 months ago
is this true? Amazon RDS (Relational Database Service) supports the Provisioned IOPS SSD (io2) storage type for its database instances.
The io2 storage type is designed to deliver predictable performance for critical and highly demanding database workloads. It provides
higher durability, higher IOPS, and lower latency compared to other Amazon EBS (Elastic Block Store) storage types. RDS offers the
option to choose between the General Purpose SSD (gp3) and Provisioned IOPS SSD (io2) storage types for database instances.
upvoted 1 times
Replacing the gp3 volume with two 1,000 GB gp3 volumes would let the application reach the required IOPS and improve its
performance, because two 1,000 GB gp3 volumes can be provisioned for up to 32,000 IOPS in total (16,000 each), which is more than
the 20,000 IOPS that the application is demanding.
upvoted 1 times
Your analysis effectively rules out the other options (A, B, and D) and provides a clear justification for selecting option C. Well done!
upvoted 1 times
An IAM user made several configuration changes to AWS resources in their company's account during a production deployment last week. A
solutions architect learned that a couple of security group rules are not configured as desired. The solutions architect wants to confirm which IAM
user was responsible for making changes.
Which service should the solutions architect use to find the desired information?
A. Amazon GuardDuty
B. Amazon Inspector
C. AWS CloudTrail
D. AWS Config
Correct Answer: C
The best option is to use AWS CloudTrail to find the desired information. AWS CloudTrail is a service that enables governance, compliance,
operational auditing, and risk auditing of AWS account activities. CloudTrail can be used to log all changes made to resources in an AWS
account, including changes made by IAM users, EC2 instances, AWS management console, and other AWS services. By using CloudTrail,
the solutions architect can identify the IAM user who made the configuration changes to the security group rules.
upvoted 8 times
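A small boto3 sketch of how the CloudTrail event history could be queried for the change; the event name and time window are illustrative:

import boto3
from datetime import datetime, timedelta, timezone

cloudtrail = boto3.client("cloudtrail")

# Find recent security group rule changes and show which IAM identity made them.
end = datetime.now(timezone.utc)
start = end - timedelta(days=7)
events = cloudtrail.lookup_events(
    LookupAttributes=[
        {"AttributeKey": "EventName", "AttributeValue": "AuthorizeSecurityGroupIngress"}
    ],
    StartTime=start,
    EndTime=end,
)
for event in events["Events"]:
    print(event["EventTime"], event["EventName"], event.get("Username"))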
A company has implemented a self-managed DNS service on AWS. The solution consists of the following:
B. Subscribe to AWS Shield Advanced. Add the EC2 instances as resources to protect.
C. Create an AWS WAF web ACL that includes a rate-based rule. Associate the web ACL with the accelerator.
D. Create an AWS WAF web ACL that includes a rate-based rule. Associate the web ACL with the EC2 instances.
Correct Answer: A
B. Subscribe to AWS Shield Advanced. Add the EC2 instances as resources to protect.
A. While you can add the accelerator as a resource to protect with AWS Shield Advanced, it's generally more effective to protect the
individual resources (in this case, the EC2 instances) because AWS Shield Advanced will automatically protect resources associated with
Global Accelerator
upvoted 1 times
Sorry I meant A
upvoted 1 times
Question #397 Topic 1
An ecommerce company needs to run a scheduled daily job to aggregate and filter sales records for analytics. The company stores the sales
records in an Amazon S3 bucket. Each object can be up to 10 GB in size. Based on the number of sales events, the job can take up to an hour to
complete. The CPU and memory usage of the job are constant and are known in advance.
A solutions architect needs to minimize the amount of operational effort that is needed for the job to run.
A. Create an AWS Lambda function that has an Amazon EventBridge notification. Schedule the EventBridge event to run once a day.
B. Create an AWS Lambda function. Create an Amazon API Gateway HTTP API, and integrate the API with the function. Create an Amazon
EventBridge scheduled event that calls the API and invokes the function.
C. Create an Amazon Elastic Container Service (Amazon ECS) cluster with an AWS Fargate launch type. Create an Amazon EventBridge
scheduled event that launches an ECS task on the cluster to run the job.
D. Create an Amazon Elastic Container Service (Amazon ECS) cluster with an Amazon EC2 launch type and an Auto Scaling group with at least
one EC2 instance. Create an Amazon EventBridge scheduled event that launches an ECS task on the cluster to run the job.
Correct Answer: C
Between options C and D, option C is the better choice since it uses AWS Fargate which is a serverless compute engine for containers that
eliminates the need to manage the underlying EC2 instances, making it a low operational effort solution. Additionally, Fargate also
provides instant scale-up and scale-down capabilities to run the scheduled job as per the requirement.
C. Create an Amazon Elastic Container Service (Amazon ECS) cluster with an AWS Fargate launch type. Create an Amazon EventBridge
scheduled event that launches an ECS task on the cluster to run the job.
upvoted 15 times
Using Amazon CloudWatch metrics to monitor the Count metric and alerting the security team when the predefined rate is reached is not
a solution that can protect against HTTP flood attacks.
Creating an Amazon CloudFront distribution in front of the API Gateway Regional API endpoint with a maximum TTL of 24 hours is not a
solution that can protect against HTTP flood attacks.
upvoted 1 times
klayytech 6 months, 1 week ago
Selected Answer: C
The solution that meets these requirements is C. Create an Amazon Elastic Container Service (Amazon ECS) cluster with an AWS Fargate
launch type. Create an Amazon EventBridge scheduled event that launches an ECS task on the cluster to run the job. This solution will
minimize the amount of operational effort that is needed for the job to run.
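A minimal sketch of option C in boto3; every ARN, subnet ID, and name below is a placeholder:

import boto3

events = boto3.client("events")

# Run the aggregation job once a day at 02:00 UTC as a Fargate task.
events.put_rule(Name="daily-sales-aggregation", ScheduleExpression="cron(0 2 * * ? *)")
events.put_targets(
    Rule="daily-sales-aggregation",
    Targets=[
        {
            "Id": "sales-aggregation-task",
            "Arn": "arn:aws:ecs:us-east-1:111122223333:cluster/analytics-cluster",
            "RoleArn": "arn:aws:iam::111122223333:role/ecsEventsRole",
            "EcsParameters": {
                "TaskDefinitionArn": "arn:aws:ecs:us-east-1:111122223333:task-definition/sales-job:1",
                "LaunchType": "FARGATE",
                "NetworkConfiguration": {
                    "awsvpcConfiguration": {
                        "Subnets": ["subnet-0123456789abcdef0"],
                        "AssignPublicIp": "DISABLED",
                    }
                },
            },
        }
    ],
)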
A company needs to transfer 600 TB of data from its on-premises network-attached storage (NAS) system to the AWS Cloud. The data transfer
must be complete within 2 weeks. The data is sensitive and must be encrypted in transit. The company’s internet connection can support an
upload speed of 100 Mbps.
A. Use Amazon S3 multi-part upload functionality to transfer the files over HTTPS.
B. Create a VPN connection between the on-premises NAS system and the nearest AWS Region. Transfer the data over the VPN connection.
C. Use the AWS Snow Family console to order several AWS Snowball Edge Storage Optimized devices. Use the devices to transfer the data to
Amazon S3.
D. Set up a 10 Gbps AWS Direct Connect connection between the company location and the nearest AWS Region. Transfer the data over a VPN
connection into the Region to store the data in Amazon S3.
Correct Answer: C
The best option is to use the AWS Snow Family console to order several AWS Snowball Edge Storage Optimized devices and use the devices
to transfer the data to Amazon S3. Snowball Edge is a petabyte-scale data transfer device that can help transfer large amounts of data
securely and quickly. Using Snowball Edge can be the most cost-effective solution for transferring large amounts of data over long
distances and can help meet the requirement of transferring 600 TB of data within two weeks.
upvoted 3 times
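The arithmetic behind ruling out the 100 Mbps link is worth spelling out; a quick back-of-the-envelope check:

# 600 TB over a 100 Mbps uplink, ignoring protocol overhead.
data_bits = 600 * 10**12 * 8      # 600 TB in bits
link_bps = 100 * 10**6            # 100 Mbps
days = data_bits / link_bps / 86400
print(days)                       # roughly 555 days, far beyond the 2-week deadline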
Question #399 Topic 1
A financial company hosts a web application on AWS. The application uses an Amazon API Gateway Regional API endpoint to give users the
ability to retrieve current stock prices. The company’s security team has noticed an increase in the number of API requests. The security team is
concerned that HTTP flood attacks might take the application offline.
A solutions architect must design a solution to protect the application from this type of attack.
Which solution meets these requirements with the LEAST operational overhead?
A. Create an Amazon CloudFront distribution in front of the API Gateway Regional API endpoint with a maximum TTL of 24 hours.
B. Create a Regional AWS WAF web ACL with a rate-based rule. Associate the web ACL with the API Gateway stage.
C. Use Amazon CloudWatch metrics to monitor the Count metric and alert the security team when the predefined rate is reached.
D. Create an Amazon CloudFront distribution with Lambda@Edge in front of the API Gateway Regional API endpoint. Create an AWS Lambda
function to block requests from IP addresses that exceed the predefined rate.
Correct Answer: B
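A hedged boto3 sketch of option B; the names, rate limit, and API Gateway stage ARN are placeholders:

import boto3

wafv2 = boto3.client("wafv2", region_name="us-east-1")

# Regional web ACL with a rate-based rule that blocks IPs exceeding 2,000 requests per 5 minutes.
acl = wafv2.create_web_acl(
    Name="api-flood-protection",
    Scope="REGIONAL",
    DefaultAction={"Allow": {}},
    VisibilityConfig={
        "SampledRequestsEnabled": True,
        "CloudWatchMetricsEnabled": True,
        "MetricName": "api-flood-protection",
    },
    Rules=[
        {
            "Name": "rate-limit",
            "Priority": 0,
            "Statement": {"RateBasedStatement": {"Limit": 2000, "AggregateKeyType": "IP"}},
            "Action": {"Block": {}},
            "VisibilityConfig": {
                "SampledRequestsEnabled": True,
                "CloudWatchMetricsEnabled": True,
                "MetricName": "rate-limit",
            },
        }
    ],
)

# Associate the web ACL with the API Gateway stage.
wafv2.associate_web_acl(
    WebACLArn=acl["Summary"]["ARN"],
    ResourceArn="arn:aws:apigateway:us-east-1::/restapis/abc123/stages/prod",
)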
A meteorological startup company has a custom web application to sell weather data to its users online. The company uses Amazon DynamoDB
to store its data and wants to build a new service that sends an alert to the managers of four internal teams every time a new weather event is
recorded. The company does not want this new service to affect the performance of the current application.
What should a solutions architect do to meet these requirements with the LEAST amount of operational overhead?
A. Use DynamoDB transactions to write new event data to the table. Configure the transactions to notify internal teams.
B. Have the current application publish a message to four Amazon Simple Notification Service (Amazon SNS) topics. Have each team
subscribe to one topic.
C. Enable Amazon DynamoDB Streams on the table. Use triggers to write to a single Amazon Simple Notification Service (Amazon SNS) topic
to which the teams can subscribe.
D. Add a custom attribute to each record to flag new items. Write a cron job that scans the table every minute for items that are new and
notifies an Amazon Simple Queue Service (Amazon SQS) queue to which the teams can subscribe.
Correct Answer: C
https://docs.aws.amazon.com/amazondynamodb/latest/developerguide/Streams.html
https://aws.amazon.com/blogs/database/dynamodb-streams-use-cases-and-design-patterns/
upvoted 2 times
Answer B is not the best solution because it requires changes to the current application, which may affect its performance, and it
creates additional work for the teams to subscribe to multiple topics.
Answer D is not a good solution because it requires a cron job to scan the table every minute, which adds additional operational
overhead to the system.
Therefore, the correct answer is C. Enable Amazon DynamoDB Streams on the table. Use triggers to write to a single Amazon SNS topic
to which the teams can subscribe.
upvoted 2 times
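A minimal sketch of the Lambda handler that a DynamoDB Streams trigger could invoke; the SNS topic ARN is a placeholder, and all four teams subscribe to that single topic:

import json
import boto3

sns = boto3.client("sns")
TOPIC_ARN = "arn:aws:sns:us-east-1:111122223333:weather-events"  # placeholder

def handler(event, context):
    # Each record comes from the DynamoDB stream; notify the teams about new items only.
    for record in event["Records"]:
        if record["eventName"] == "INSERT":
            new_item = record["dynamodb"]["NewImage"]
            sns.publish(
                TopicArn=TOPIC_ARN,
                Subject="New weather event recorded",
                Message=json.dumps(new_item),
            )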
A company wants to use the AWS Cloud to make an existing application highly available and resilient. The current version of the application
resides in the company's data center. The application recently experienced data loss after a database server crashed because of an unexpected
power outage.
The company needs a solution that avoids any single points of failure. The solution must give the application the ability to scale to meet user
demand.
A. Deploy the application servers by using Amazon EC2 instances in an Auto Scaling group across multiple Availability Zones. Use an Amazon
RDS DB instance in a Multi-AZ configuration.
B. Deploy the application servers by using Amazon EC2 instances in an Auto Scaling group in a single Availability Zone. Deploy the database
on an EC2 instance. Enable EC2 Auto Recovery.
C. Deploy the application servers by using Amazon EC2 instances in an Auto Scaling group across multiple Availability Zones. Use an Amazon
RDS DB instance with a read replica in a single Availability Zone. Promote the read replica to replace the primary DB instance if the primary DB
instance fails.
D. Deploy the application servers by using Amazon EC2 instances in an Auto Scaling group across multiple Availability Zones. Deploy the
primary and secondary database servers on EC2 instances across multiple Availability Zones. Use Amazon Elastic Block Store (Amazon EBS)
Multi-Attach to create shared storage between the instances.
Correct Answer: A
To make an existing application highly available and resilient while avoiding any single points of failure and giving the application the
ability to scale to meet user demand, the best solution would be to deploy the application servers using Amazon EC2 instances in an Auto
Scaling group across multiple Availability Zones and use an Amazon RDS DB instance in a Multi-AZ configuration.
By using an Amazon RDS DB instance in a Multi-AZ configuration, the database is automatically replicated across multiple Availability
Zones, ensuring that the database is highly available and can withstand the failure of a single Availability Zone. This provides fault
tolerance and avoids any single points of failure.
upvoted 2 times
Thief 6 months, 1 week ago
Selected Answer: D
Why not D?
upvoted 1 times
A company needs to ingest and handle large amounts of streaming data that its application generates. The application runs on Amazon EC2
instances and sends data to Amazon Kinesis Data Streams, which is configured with default settings. Every other day, the application consumes
the data and writes the data to an Amazon S3 bucket for business intelligence (BI) processing. The company observes that Amazon S3 is not
receiving all the data that the application sends to Kinesis Data Streams.
A. Update the Kinesis Data Streams default settings by modifying the data retention period.
B. Update the application to use the Kinesis Producer Library (KPL) to send the data to Kinesis Data Streams.
C. Update the number of Kinesis shards to handle the throughput of the data that is sent to Kinesis Data Streams.
D. Turn on S3 Versioning within the S3 bucket to preserve every version of every object that is ingested in the S3 bucket.
Correct Answer: A
The best option is to update the number of Kinesis shards to handle the throughput of the data that is sent to Kinesis Data Streams.
Kinesis Data Streams scales horizontally by increasing or decreasing the number of shards, which controls the throughput capacity of the
stream. By increasing the number of shards, the application will be able to send more data to Kinesis Data Streams, which can help ensure
that S3 receives all the data.
upvoted 14 times
- Answer C updates the number of Kinesis shards to handle the throughput of the data that is sent to Kinesis Data Streams. By
increasing the number of shards, the data is distributed across multiple shards, which allows for increased throughput and ensures
that all data is ingested and processed by Kinesis Data Streams.
- Monitoring the Kinesis Data Streams and adjusting the number of shards as needed to handle changes in data throughput can ensure
that the application can handle large amounts of streaming data.
upvoted 2 times
Thanks.
upvoted 1 times
The question mentioned Kinesis data stream default settings and "every other day". After 24hrs, the data isn't in the Data stream if the
default settings is not modified to store data more than 24hrs.
upvoted 13 times
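Both readings map to a single API call; a sketch with a placeholder stream name (retention for answer A, resharding for answer C):

import boto3

kinesis = boto3.client("kinesis")

# Answer A: extend retention beyond the 24-hour default so records survive until the
# every-other-day consumer runs.
kinesis.increase_stream_retention_period(StreamName="app-stream", RetentionPeriodHours=72)

# Answer C: add shards if the stream is being throttled on writes.
kinesis.update_shard_count(
    StreamName="app-stream", TargetShardCount=4, ScalingType="UNIFORM_SCALING"
)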
Ramdi1 Most Recent 6 days, 18 hours ago
Selected Answer: A
I have only voted A because the question mentions the default settings in Kinesis; if it did not mention that, then I would look to increase the shards.
By default the retention period is 24 hours and can go up to 365 days. I think the question should be rephrased slightly; I had trouble deciding between A and C.
Also, apparently the most-voted answer is the correct answer, per some advice I was given.
upvoted 1 times
Will go with A
upvoted 1 times
Therefore, to handle the high volume of data that the application sends to Kinesis Data Streams, the number of Kinesis shards should be
increased to handle the required throughput
upvoted 2 times
A developer has an application that uses an AWS Lambda function to upload files to Amazon S3 and needs the required permissions to perform
the task. The developer already has an IAM user with valid IAM credentials required for Amazon S3.
A. Add required IAM permissions in the resource policy of the Lambda function.
B. Create a signed request using the existing IAM credentials in the Lambda function.
C. Create a new IAM user and use the existing IAM credentials in the Lambda function.
D. Create an IAM execution role with the required permissions and attach the IAM role to the Lambda function.
Correct Answer: D
Therefore, the correct answer is D. Create an IAM execution role with the required permissions and attach the IAM role to the Lambda
function.
upvoted 1 times
The solutions architect must create an IAM execution role that has the permissions required to access Amazon S3 and perform the
necessary operations (for example, uploading files). The role is then attached to the Lambda function, so that the function can assume
the role and obtain the permissions it needs to interact with Amazon S3.
upvoted 2 times
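A rough sketch of option D with placeholder names; the inline policy grants only the narrow S3 permission the function needs:

import json
import boto3

iam = boto3.client("iam")

# Trust policy that lets the Lambda service assume the role.
trust_policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Principal": {"Service": "lambda.amazonaws.com"},
            "Action": "sts:AssumeRole",
        }
    ],
}
role = iam.create_role(
    RoleName="upload-function-role",
    AssumeRolePolicyDocument=json.dumps(trust_policy),
)

# Grant only the S3 permission the function needs.
iam.put_role_policy(
    RoleName="upload-function-role",
    PolicyName="s3-upload",
    PolicyDocument=json.dumps(
        {
            "Version": "2012-10-17",
            "Statement": [
                {
                    "Effect": "Allow",
                    "Action": "s3:PutObject",
                    "Resource": "arn:aws:s3:::upload-bucket/*",
                }
            ],
        }
    ),
)

# The role ARN is then set as the Lambda function's execution role.
print(role["Role"]["Arn"])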
A company has deployed a serverless application that invokes an AWS Lambda function when new documents are uploaded to an Amazon S3
bucket. The application uses the Lambda function to process the documents. After a recent marketing campaign, the company noticed that the
application did not process many of the documents.
B. Configure an S3 bucket replication policy. Stage the documents in the S3 bucket for later processing.
C. Deploy an additional Lambda function. Load balance the processing of the documents across the two Lambda functions.
D. Create an Amazon Simple Queue Service (Amazon SQS) queue. Send the requests to the queue. Configure the queue as an event source for
Lambda.
Correct Answer: D
This will ensure that the documents are not lost and can be processed at a later time if the Lambda function is not available. By using
Amazon SQS, the architecture is decoupled and the Lambda function can process the documents in a scalable and fault-tolerant manner.
upvoted 1 times
This solution handles load spikes efficiently and avoids losing documents when traffic increases suddenly. When new documents are
uploaded to the Amazon S3 bucket, the requests are sent to the Amazon SQS queue, which acts as a buffer. The Lambda function is
triggered by events in the queue, which smooths out processing and prevents the application from being overwhelmed by a large number
of documents arriving at the same time.
upvoted 1 times
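A minimal sketch of wiring the queue to the existing document-processing function (the queue ARN and function name are placeholders); the S3 bucket notification would then target the queue instead of the function:

import boto3

lambda_client = boto3.client("lambda")

# Make the SQS queue an event source for the Lambda function.
lambda_client.create_event_source_mapping(
    EventSourceArn="arn:aws:sqs:us-east-1:111122223333:document-queue",
    FunctionName="process-documents",
    BatchSize=10,
)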
A solutions architect is designing the architecture for a software demonstration environment. The environment will run on Amazon EC2 instances
in an Auto Scaling group behind an Application Load Balancer (ALB). The system will experience significant increases in traffic during working
hours but is not required to operate on weekends.
Which combination of actions should the solutions architect take to ensure that the system can scale to meet demand? (Choose two.)
A. Use AWS Auto Scaling to adjust the ALB capacity based on request rate.
B. Use AWS Auto Scaling to scale the capacity of the VPC internet gateway.
C. Launch the EC2 instances in multiple AWS Regions to distribute the load across Regions.
D. Use a target tracking scaling policy to scale the Auto Scaling group based on instance CPU utilization.
E. Use scheduled scaling to change the Auto Scaling group minimum, maximum, and desired capacity to zero for weekends. Revert to the
default values at the start of the week.
Correct Answer: DE
D. Use a target tracking scaling policy to scale the Auto Scaling group based on instance CPU utilization: This will allow the system to scale
up or down based on the CPU utilization of the EC2 instances in the Auto Scaling group. The solutions architect should use a target
tracking scaling policy to maintain a specific CPU utilization target and adjust the number of EC2 instances in the Auto Scaling group
accordingly.
upvoted 7 times
D) Use a target tracking scaling policy to scale the Auto Scaling group based on instance CPU utilization. This will allow the Auto Scaling
group to dynamically scale in and out based on demand.
E) Use scheduled scaling to change the Auto Scaling group capacity to zero on weekends when traffic is expected to be low. This will
minimize costs by terminating unused instances.
upvoted 3 times
A & D is not possible: how can you scale on ALB request rate and on CPU utilization at the same time? You need to select one option or
the other (and this goes for all the questions here, guys!)
upvoted 2 times
It is possible to set to zero. "is not required to operate on weekends" means the instances are not required during the weekends.
https://docs.aws.amazon.com/autoscaling/ec2/userguide/asg-capacity-limits.html
upvoted 2 times
Based on the docs, AWS Auto Scaling does not adjust the ALB's capacity (the load balancer scales itself), so the answer is D & E;
the Auto Scaling group can track CPU utilization instead.
upvoted 4 times
E. Use scheduled scaling to change the Auto Scaling group minimum, maximum, and desired capacity to zero for weekends. Revert to the
default values at the start of the week. This approach allows the Auto Scaling group to reduce the number of instances to zero during
weekends when traffic is expected to be low. It helps the organization save costs by not paying for instances that are not needed
during weekends.
Therefore, options D and E are the correct answers. Options A, B, and C do not fit the scenario.
upvoted 1 times
Answers D and E:
D. Use a target tracking scaling policy to scale the Auto Scaling group based on instance CPU utilization.
E. Use scheduled scaling to change the Auto Scaling group minimum, maximum, and desired capacity to zero for weekends. Revert to the
default values at the start of the week.
- Answer D scales the Auto Scaling group based on instance CPU utilization, which ensures that the number of instances in the group can
be adjusted to handle the increase in traffic during working hours and reduce capacity during periods of low traffic.
- Answer E uses scheduled scaling to change the Auto Scaling group minimum, maximum, and desired capacity to zero for weekends,
which ensures that the Auto Scaling group scales down to zero during weekends to save costs.
upvoted 1 times
- Answer A adjusts the capacity of the ALB based on request rate, which ensures that the ALB can handle the increase in traffic during
working hours and reduce capacity during periods of low traffic.
- Answer E uses scheduled scaling to change the Auto Scaling group minimum, maximum, and desired capacity to zero for weekends,
which ensures that the Auto Scaling group scales down to zero during weekends to save costs.
upvoted 1 times
Answer E is a common choice for scaling down to zero during weekends to save costs. Both Answers D and A can be used in
conjunction with Answer E to ensure that the Auto Scaling group scales down to zero during weekends. However, Answer D provides
more granular control over the scaling of the Auto Scaling group based on instance CPU utilization, which can result in better
performance and cost optimization.
upvoted 2 times
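A compact boto3 sketch of D plus E, using a placeholder Auto Scaling group name and illustrative capacities:

import boto3

autoscaling = boto3.client("autoscaling")

# D: target tracking keeps average CPU around 60% during working hours.
autoscaling.put_scaling_policy(
    AutoScalingGroupName="demo-asg",
    PolicyName="cpu-target-tracking",
    PolicyType="TargetTrackingScaling",
    TargetTrackingConfiguration={
        "PredefinedMetricSpecification": {"PredefinedMetricType": "ASGAverageCPUUtilization"},
        "TargetValue": 60.0,
    },
)

# E: drop to zero capacity on Saturday morning, restore it on Monday morning (UTC cron).
autoscaling.put_scheduled_update_group_action(
    AutoScalingGroupName="demo-asg",
    ScheduledActionName="weekend-shutdown",
    Recurrence="0 0 * * 6",
    MinSize=0, MaxSize=0, DesiredCapacity=0,
)
autoscaling.put_scheduled_update_group_action(
    AutoScalingGroupName="demo-asg",
    ScheduledActionName="weekday-startup",
    Recurrence="0 6 * * 1",
    MinSize=2, MaxSize=10, DesiredCapacity=2,
)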
A solutions architect is designing a two-tiered architecture that includes a public subnet and a database subnet. The web servers in the public
subnet must be open to the internet on port 443. The Amazon RDS for MySQL DB instance in the database subnet must be accessible only to the
web servers on port 3306.
Which combination of steps should the solutions architect take to meet these requirements? (Choose two.)
A. Create a network ACL for the public subnet. Add a rule to deny outbound traffic to 0.0.0.0/0 on port 3306.
B. Create a security group for the DB instance. Add a rule to allow traffic from the public subnet CIDR block on port 3306.
C. Create a security group for the web servers in the public subnet. Add a rule to allow traffic from 0.0.0.0/0 on port 443.
D. Create a security group for the DB instance. Add a rule to allow traffic from the web servers’ security group on port 3306.
E. Create a security group for the DB instance. Add a rule to deny all traffic except traffic from the web servers’ security group on port 3306.
Correct Answer: CD
1. Create a security group for the web servers in the public subnet. Add a rule to allow traffic from 0.0.0.0/0 on port 443.
2. Create a security group for the DB instance. Add a rule to allow traffic from the web servers' security group on port 3306.
This will allow the web servers in the public subnet to receive traffic from the internet on port 443, and the Amazon RDS for MySQL DB
instance in the database subnet to receive traffic only from the web servers on port 3306.
upvoted 1 times
A company is implementing a shared storage solution for a gaming application that is hosted in the AWS Cloud. The company needs the ability to
use Lustre clients to access data. The solution must be fully managed.
A. Create an AWS DataSync task that shares the data as a mountable file system. Mount the file system to the application server.
B. Create an AWS Storage Gateway file gateway. Create a file share that uses the required client protocol. Connect the application server to the
file share.
C. Create an Amazon Elastic File System (Amazon EFS) file system, and configure it to support Lustre. Attach the file system to the origin
server. Connect the application server to the file system.
D. Create an Amazon FSx for Lustre file system. Attach the file system to the origin server. Connect the application server to the file system.
Correct Answer: D
Amazon FSx for Lustre is a fully managed file system that is optimized for compute-intensive workloads, such as high-performance
computing, machine learning, and gaming. It provides a POSIX-compliant file system that can be accessed using Lustre clients and offers
high performance, scalability, and data durability.
This solution provides a highly available, scalable, and fully managed shared storage solution that can be accessed using Lustre clients.
Amazon FSx for Lustre is optimized for compute-intensive workloads and provides high performance and durability.
upvoted 2 times
Answer B, creating an AWS Storage Gateway file gateway and connecting the application server to the file share, may not provide the
required performance and scalability for a gaming application.
Answer C, creating an Amazon Elastic File System (Amazon EFS) file system and configuring it to support Lustre, is not possible:
Amazon EFS does not support the Lustre protocol, so it cannot satisfy the requirement to use Lustre clients.
upvoted 1 times
Additionally, FSx for Lustre is a fully managed service, meaning that AWS takes care of all maintenance, updates, and patches for the file
system, which reduces the operational overhead required by the company.
upvoted 1 times
A company runs an application that receives data from thousands of geographically dispersed remote devices that use UDP. The application
processes the data immediately and sends a message back to the device if necessary. No data is stored.
The company needs a solution that minimizes latency for the data transmission from the devices. The solution also must provide rapid failover to
another AWS Region.
A. Configure an Amazon Route 53 failover routing policy. Create a Network Load Balancer (NLB) in each of the two Regions. Configure the NLB
to invoke an AWS Lambda function to process the data.
B. Use AWS Global Accelerator. Create a Network Load Balancer (NLB) in each of the two Regions as an endpoint. Create an Amazon Elastic
Container Service (Amazon ECS) cluster with the Fargate launch type. Create an ECS service on the cluster. Set the ECS service as the target
for the NLB. Process the data in Amazon ECS.
C. Use AWS Global Accelerator. Create an Application Load Balancer (ALB) in each of the two Regions as an endpoint. Create an Amazon
Elastic Container Service (Amazon ECS) cluster with the Fargate launch type. Create an ECS service on the cluster. Set the ECS service as the
target for the ALB. Process the data in Amazon ECS.
D. Configure an Amazon Route 53 failover routing policy. Create an Application Load Balancer (ALB) in each of the two Regions. Create an
Amazon Elastic Container Service (Amazon ECS) cluster with the Fargate launch type. Create an ECS service on the cluster. Set the ECS
service as the target for the ALB. Process the data in Amazon ECS.
Correct Answer: B
Geographically dispersed (related to UDP) - Global Accelerator - multiple entrances worldwide to the AWS network to provide better
transfer rates.
UDP - NLB (Network Load Balancer).
upvoted 7 times
Global Accelerator provides UDP support and minimizes latency using the AWS global network.
Using NLBs allows the UDP traffic to be load balanced across Availability Zones.
ECS Fargate provides rapid scaling and failover across Regions.
NLB endpoints allow rapid failover if one Region goes down.
upvoted 1 times
AWS Global Accelerator is a service that improves the availability and performance of applications by using static IP addresses (Anycast) to
route traffic to optimal AWS endpoints. With Global Accelerator, you can direct traffic to multiple Regions and endpoints, and provide
automatic failover to another AWS Region.
upvoted 2 times
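A sketch of option B's Global Accelerator setup in boto3; the port range and NLB ARN are placeholders, and one endpoint group would be created per Region:

import boto3

# Global Accelerator API calls are served from the us-west-2 endpoint.
ga = boto3.client("globalaccelerator", region_name="us-west-2")

accelerator = ga.create_accelerator(Name="game-accelerator", IpAddressType="IPV4", Enabled=True)
listener = ga.create_listener(
    AcceleratorArn=accelerator["Accelerator"]["AcceleratorArn"],
    Protocol="UDP",
    PortRanges=[{"FromPort": 4000, "ToPort": 4100}],
)

# One endpoint group per Region, each pointing at that Region's NLB.
ga.create_endpoint_group(
    ListenerArn=listener["Listener"]["ListenerArn"],
    EndpointGroupRegion="us-east-1",
    EndpointConfigurations=[
        {
            "EndpointId": "arn:aws:elasticloadbalancing:us-east-1:111122223333:loadbalancer/net/game-nlb/abc123",
            "Weight": 128,
        }
    ],
)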
A solutions architect must migrate a Windows Internet Information Services (IIS) web application to AWS. The application currently relies on a file
share hosted in the user's on-premises network-attached storage (NAS). The solutions architect has proposed migrating the IIS web servers to
Amazon EC2 instances in multiple Availability Zones that are connected to the storage solution, and configuring an Elastic Load Balancer attached
to the instances.
Which replacement to the on-premises file share is MOST resilient and durable?
C. Migrate the file share to Amazon FSx for Windows File Server.
D. Migrate the file share to Amazon Elastic File System (Amazon EFS).
Correct Answer: C
Amazon FSx is a fully managed Windows file system service that is built on Windows Server and provides native support for the SMB
protocol. It is designed to be highly available and durable, with built-in backup and restore capabilities. It is also fully integrated with AWS
security services, providing encryption at rest and in transit, and it can be configured to meet compliance standards.
upvoted 3 times
Migrating the file share to Amazon EFS (Linux ONLY) could be an option, but Amazon FSx for Windows File Server would be more
appropriate in this case because it is specifically designed for Windows file shares and provides better performance for Windows
applications.
upvoted 3 times
A company is deploying a new application on Amazon EC2 instances. The application writes data to Amazon Elastic Block Store (Amazon EBS)
volumes. The company needs to ensure that all data that is written to the EBS volumes is encrypted at rest.
A. Create an IAM role that specifies EBS encryption. Attach the role to the EC2 instances.
B. Create the EBS volumes as encrypted volumes. Attach the EBS volumes to the EC2 instances.
C. Create an EC2 instance tag that has a key of Encrypt and a value of True. Tag all instances that require encryption at the EBS level.
D. Create an AWS Key Management Service (AWS KMS) key policy that enforces EBS encryption in the account. Ensure that the key policy is
active.
Correct Answer: B
When you create an EBS volume, you can specify whether to encrypt the volume. If you choose to encrypt the volume, all data written to
the volume is automatically encrypted at rest using AWS-managed keys. You can also use customer-managed keys (CMKs) stored in AWS
KMS to encrypt and protect your EBS volumes. You can create encrypted EBS volumes and attach them to EC2 instances to ensure that all
data written to the volumes is encrypted at rest.
Answer A is incorrect because attaching an IAM role to the EC2 instances does not automatically encrypt the EBS volumes.
Answer C is incorrect because adding an EC2 instance tag does not ensure that the EBS volumes are encrypted.
upvoted 5 times
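A short boto3 sketch of option B, plus the optional Region-wide default that prevents the problem from recurring; the Availability Zone and size are placeholders:

import boto3

ec2 = boto3.client("ec2")

# Create the data volume as an encrypted volume (uses the account's default KMS key
# unless KmsKeyId is supplied).
ec2.create_volume(
    AvailabilityZone="us-east-1a",
    Size=100,
    VolumeType="gp3",
    Encrypted=True,
)

# Optionally make encryption the default for every new EBS volume in this Region.
ec2.enable_ebs_encryption_by_default()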
A company has a web application with sporadic usage patterns. There is heavy usage at the beginning of each month, moderate usage at the start
of each week, and unpredictable usage during the week. The application consists of a web server and a MySQL database server running inside the
data center. The company would like to move the application to the AWS Cloud, and needs to select a cost-effective database platform that will
not require database modifications.
A. Amazon DynamoDB
Correct Answer: C
Aurora Serverless can be a cost-effective option for databases with sporadic or unpredictable usage patterns since it automatically scales
up or down based on the current workload. Additionally, Aurora Serverless is compatible with MySQL, so it does not require any
modifications to the application's database code.
upvoted 3 times
Amazon DynamoDB is a fully managed NoSQL database service that provides fast and predictable performance with seamless scalability.
DynamoDB is a good choice for applications that require low-latency data access, but it is not MySQL compatible, so it would require
database modifications.
MySQL-compatible Amazon Aurora Serverless is an on-demand, auto-scaling configuration for Amazon Aurora (MySQL-compatible
edition), where the database will automatically start up, shut down, and scale capacity up or down based on your application's needs.
So, Amazon Aurora Serverless is the best option for these requirements.
upvoted 2 times
Amazon RDS for MySQL is a fully-managed relational database service that makes it easy to set up, operate, and scale MySQL
deployments in the cloud. Amazon Aurora Serverless is an on-demand, auto-scaling configuration for Amazon Aurora (MySQL-
compatible edition), where the database will automatically start up, shut down, and scale capacity up or down based on your
application’s needs. It is a simple, cost-effective option for infrequent, intermittent, or unpredictable workloads.
upvoted 2 times
An image-hosting company stores its objects in Amazon S3 buckets. The company wants to avoid accidental exposure of the objects in the S3
buckets to the public. All S3 objects in the entire AWS account need to remain private.
A. Use Amazon GuardDuty to monitor S3 bucket policies. Create an automatic remediation action rule that uses an AWS Lambda function to
remediate any change that makes the objects public.
B. Use AWS Trusted Advisor to find publicly accessible S3 buckets. Configure email notifications in Trusted Advisor when a change is
detected. Manually change the S3 bucket policy if it allows public access.
C. Use AWS Resource Access Manager to find publicly accessible S3 buckets. Use Amazon Simple Notification Service (Amazon SNS) to
invoke an AWS Lambda function when a change is detected. Deploy a Lambda function that programmatically remediates the change.
D. Use the S3 Block Public Access feature on the account level. Use AWS Organizations to create a service control policy (SCP) that prevents
IAM users from changing the setting. Apply the SCP to the account.
Correct Answer: D
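A minimal sketch of the account-level half of option D (the account ID is a placeholder); the SCP that prevents IAM users from changing the setting would be attached separately through AWS Organizations:

import boto3

s3control = boto3.client("s3control")

# Turn on all four Block Public Access settings for the entire account.
s3control.put_public_access_block(
    AccountId="111122223333",
    PublicAccessBlockConfiguration={
        "BlockPublicAcls": True,
        "IgnorePublicAcls": True,
        "BlockPublicPolicy": True,
        "RestrictPublicBuckets": True,
    },
)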
An ecommerce company is experiencing an increase in user traffic. The company’s store is deployed on Amazon EC2 instances as a two-tier web
application consisting of a web tier and a separate database tier. As traffic increases, the company notices that the architecture is causing
significant delays in sending timely marketing and order confirmation email to users. The company wants to reduce the time it spends resolving
complex email delivery issues and minimize operational overhead.
A. Create a separate application tier using EC2 instances dedicated to email processing.
B. Configure the web instance to send email through Amazon Simple Email Service (Amazon SES).
C. Configure the web instance to send email through Amazon Simple Notification Service (Amazon SNS).
D. Create a separate application tier using EC2 instances dedicated to email processing. Place the instances in an Auto Scaling group.
Correct Answer: B
Answer A of creating a separate application tier for email processing may add additional complexity to the architecture and require more
operational overhead.
Answer C of using Amazon Simple Notification Service (Amazon SNS) is not an appropriate solution for sending marketing and order
confirmation emails since Amazon SNS is a messaging service that is designed to send messages to subscribed endpoints or clients.
Answer D of creating a separate application tier using EC2 instances dedicated to email processing placed in an Auto Scaling group is a
more complex solution than necessary and may result in additional operational overhead.
upvoted 2 times
A company has a business system that generates hundreds of reports each day. The business system saves the reports to a network share in CSV
format. The company needs to store this data in the AWS Cloud in near-real time for analysis.
Which solution will meet these requirements with the LEAST administrative overhead?
A. Use AWS DataSync to transfer the files to Amazon S3. Create a scheduled task that runs at the end of each day.
B. Create an Amazon S3 File Gateway. Update the business system to use a new network share from the S3 File Gateway.
C. Use AWS DataSync to transfer the files to Amazon S3. Create an application that uses the DataSync API in the automation workflow.
D. Deploy an AWS Transfer for SFTP endpoint. Create a script that checks for new files on the network share and uploads the new files by
using SFTP.
Correct Answer: B
Using DataSync avoids having to rewrite the business system to use a new file gateway or SFTP endpoint.
Calling the DataSync API from an application allows automating the data transfer instead of running scheduled tasks or scripts.
DataSync directly transfers files from the network share to S3 without needing an intermediate server
upvoted 1 times
B. Create an Amazon S3 File Gateway. Update the business system to use a new network share from the S3 File Gateway.
- It presents a simple network file share interface that the business system can write to, just like a standard network share. This requires
minimal changes to the business system.
- The S3 File Gateway automatically uploads all files written to the share to an S3 bucket in the background. This handles the transfer and
upload to S3 without requiring any scheduled tasks, scripts or automation.
- All ongoing management like monitoring, scaling, patching etc. is handled by AWS for the S3 File Gateway.
upvoted 2 times
A) AWS DataSync would require creating and managing scheduled tasks and monitoring them.
C) Using the DataSync API would require developing an application and then managing and monitoring it.
D) The SFTP option would require creating scripts, managing SFTP access and keys, and monitoring the file transfer process.
So overall, the S3 File Gateway requires the least amount of ongoing management and administration as it presents a simple file share
interface but handles the upload to S3 in a fully managed fashion. The business system can continue writing to a network share as is,
while the files are transparently uploaded to S3.
The S3 File Gateway is the most hands-off, low-maintenance solution in this scenario.
upvoted 2 times
To store the CSV reports generated by the business system in the AWS Cloud in near-real time for analysis, the best solution with the least
administrative overhead would be to use AWS DataSync to transfer the files to Amazon S3 and create an application that uses the
DataSync API in the automation workflow.
AWS DataSync is a fully managed service that makes it easy to automate and accelerate data transfer between on-premises storage
systems and AWS Cloud storage, such as Amazon S3. With DataSync, you can quickly and securely transfer large amounts of data to the
AWS Cloud, and you can automate the transfer process using the DataSync API.
upvoted 3 times
Answer B, creating an Amazon S3 File Gateway and updating the business system to use a new network share from the S3 File Gateway,
is not the best solution because it requires additional configuration and management overhead.
Answer D, deploying an AWS Transfer for the SFTP endpoint and creating a script to check for new files on the network share and
upload the new files using SFTP, is not the best solution because it requires additional scripting and management overhead
upvoted 1 times
A company is storing petabytes of data in Amazon S3 Standard. The data is stored in multiple S3 buckets and is accessed with varying frequency.
The company does not know access patterns for all the data. The company needs to implement a solution for each S3 bucket to optimize the cost
of S3 usage.
Which solution will meet these requirements with the MOST operational efficiency?
A. Create an S3 Lifecycle configuration with a rule to transition the objects in the S3 bucket to S3 Intelligent-Tiering.
B. Use the S3 storage class analysis tool to determine the correct tier for each object in the S3 bucket. Move each object to the identified
storage tier.
C. Create an S3 Lifecycle configuration with a rule to transition the objects in the S3 bucket to S3 Glacier Instant Retrieval.
D. Create an S3 Lifecycle configuration with a rule to transition the objects in the S3 bucket to S3 One Zone-Infrequent Access (S3 One Zone-
IA).
Correct Answer: A
Creating an S3 Lifecycle configuration with a rule to transition the objects in the S3 bucket to S3 Intelligent-Tiering would be the most
efficient solution to optimize the cost of S3 usage. S3 Intelligent-Tiering is a storage class that automatically moves objects between two
access tiers (frequent and infrequent) based on changing access patterns. It is a cost-effective solution that does not require any manual
intervention to move data to different storage classes, unlike the other options.
upvoted 2 times
Answer C, Transitioning objects to S3 Glacier Instant Retrieval would be appropriate for data that is accessed less frequently and does
not require immediate access.
Answer D, S3 One Zone-IA would be appropriate for data that can be recreated if lost and does not require the durability of S3 Standard
or S3 Standard-IA.
upvoted 1 times
Why?
"S3 Intelligent-Tiering is the ideal storage class for data with unknown, changing, or unpredictable access patterns"
https://aws.amazon.com/s3/storage-classes/intelligent-tiering/
upvoted 2 times
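A hedged sketch of the lifecycle rule in option A, with a placeholder bucket name; the rule moves every object into Intelligent-Tiering, which then handles tiering automatically:

import boto3

s3 = boto3.client("s3")

# Transition all objects in the bucket to S3 Intelligent-Tiering.
s3.put_bucket_lifecycle_configuration(
    Bucket="petabyte-data-bucket",
    LifecycleConfiguration={
        "Rules": [
            {
                "ID": "to-intelligent-tiering",
                "Status": "Enabled",
                "Filter": {"Prefix": ""},  # apply to every object
                "Transitions": [{"Days": 0, "StorageClass": "INTELLIGENT_TIERING"}],
            }
        ]
    },
)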
A rapidly growing global ecommerce company is hosting its web application on AWS. The web application includes static content and dynamic
content. The website stores online transaction processing (OLTP) data in an Amazon RDS database The website’s users are experiencing slow
page loads.
Which combination of actions should a solutions architect take to resolve this issue? (Choose two.)
Correct Answer: BD
Configuring an Amazon Redshift cluster is not relevant to this issue since Redshift is a data warehousing service and is typically used for
the analytical processing of large amounts of data.
Hosting the dynamic web content in Amazon S3 may not necessarily improve performance since S3 is an object storage service, not a web
application server. While S3 can be used to host static web content, it may not be suitable for hosting dynamic web content since S3
doesn't support server-side scripting or processing.
Configuring a Multi-AZ deployment for the RDS DB instance will improve high availability but may not necessarily improve performance.
upvoted 7 times
Explanation:
CloudFront can cache static content globally and improve latency for static content delivery.
Multi-AZ RDS improves performance and availability of the database driving dynamic content.
upvoted 1 times
To resolve this issue, a solutions architect should take the following two actions:
Create a read replica for the RDS DB instance. This will help to offload read traffic from the primary database instance and improve
performance.
upvoted 2 times
A company uses Amazon EC2 instances and AWS Lambda functions to run its application. The company has VPCs with public subnets and private
subnets in its AWS account. The EC2 instances run in a private subnet in one of the VPCs. The Lambda functions need direct network access to
the EC2 instances for the application to work.
The application will run for at least 1 year. The company expects the number of Lambda functions that the application uses to increase during that
time. The company wants to maximize its savings on all application resources and to keep network latency between the services low.
A. Purchase an EC2 Instance Savings Plan. Optimize the Lambda functions' duration and memory usage and the number of invocations.
Connect the Lambda functions to the private subnet that contains the EC2 instances.
B. Purchase an EC2 Instance Savings Plan. Optimize the Lambda functions' duration and memory usage, the number of invocations, and the
amount of data that is transferred. Connect the Lambda functions to a public subnet in the same VPC where the EC2 instances run.
C. Purchase a Compute Savings Plan. Optimize the Lambda functions’ duration and memory usage, the number of invocations, and the
amount of data that is transferred. Connect the Lambda functions to the private subnet that contains the EC2 instances.
D. Purchase a Compute Savings Plan. Optimize the Lambda functions’ duration and memory usage, the number of invocations, and the
amount of data that is transferred. Keep the Lambda functions in the Lambda service VPC.
Correct Answer: C
By purchasing a Compute Savings Plan, the company can save on the costs of running both EC2 instances and Lambda functions. The
Lambda functions can be connected to the private subnet that contains the EC2 instances through a VPC endpoint for AWS services or a
VPC peering connection. This provides direct network access to the EC2 instances while keeping the traffic within the private network,
which helps to minimize network latency.
Optimizing the Lambda functions’ duration, memory usage, number of invocations, and amount of data transferred can help to further
minimize costs and improve performance. Additionally, using a private subnet helps to ensure that the EC2 instances are not directly
accessible from the public internet, which is a security best practice.
upvoted 7 times
Answer B is not the best solution because connecting the Lambda functions to a public subnet may not be as secure as connecting
them to a private subnet. Also, keeping the EC2 instances in a private subnet helps to ensure that they are not directly accessible from
the public internet.
Answer D is not the best solution because keeping the Lambda functions in the Lambda service VPC may not provide direct network
access to the EC2 instances, which may impact the performance of the application.
upvoted 2 times
Lambda functions need direct network access to the EC2 instances for the application to work and these EC2 instances are in the
private subnet. So the correct answer is C.
upvoted 1 times
Question #418 Topic 1
A solutions architect needs to allow team members to access Amazon S3 buckets in two different AWS accounts: a development account and a
production account. The team currently has access to S3 buckets in the development account by using unique IAM users that are assigned to an
IAM group that has appropriate permissions in the account.
The solutions architect has created an IAM role in the production account. The role has a policy that grants access to an S3 bucket in the
production account.
Which solution will meet these requirements while complying with the principle of least privilege?
B. Add the development account as a principal in the trust policy of the role in the production account.
C. Turn off the S3 Block Public Access feature on the S3 bucket in the production account.
D. Create a user in the production account with unique credentials for each team member.
Correct Answer: B
This allows cross-account access to the S3 bucket in the production account by assuming the IAM role. The development account users
can assume the role to gain temporary access to the production bucket.
upvoted 1 times
An AWS account accesses another AWS account – This use case is commonly referred to as a cross-account role pattern. It allows human
or machine IAM principals from one AWS account to assume this role and act on resources within a second AWS account. A role is
assumed to enable this behavior when the resource in the target account doesn’t have a resource-based policy that could be used to grant
cross-account access.
upvoted 1 times
Answer C, turning off the S3 Block Public Access feature, is not a recommended solution as it is a security best practice to enable S3 Block
Public Access to prevent accidental public access to S3 buckets.
Answer D, creating a user in the production account with unique credentials for each team member, is also not a recommended solution
as it can be difficult to manage and scale for large teams. It is also less secure, as individual user credentials can be more easily
compromised.
upvoted 2 times
Option A is not recommended because it grants too much access to development account users. Option C is not relevant to this scenario.
Option D is not recommended because it does not comply with the principle of least privilege.
upvoted 1 times
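A minimal sketch of option B and of how a team member then uses the role; the account IDs and role name are placeholders:

import json
import boto3

# In the production account: trust policy on the role that allows the development account to assume it.
trust_policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Principal": {"AWS": "arn:aws:iam::111111111111:root"},  # development account
            "Action": "sts:AssumeRole",
        }
    ],
}
iam = boto3.client("iam")
iam.update_assume_role_policy(RoleName="prod-s3-access", PolicyDocument=json.dumps(trust_policy))

# From the development account: assume the role for temporary access to the production bucket.
sts = boto3.client("sts")
creds = sts.assume_role(
    RoleArn="arn:aws:iam::222222222222:role/prod-s3-access",  # production account role
    RoleSessionName="dev-team-member",
)["Credentials"]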
A company uses AWS Organizations with all features enabled and runs multiple Amazon EC2 workloads in the ap-southeast-2 Region. The
company has a service control policy (SCP) that prevents any resources from being created in any other Region. A security policy requires the
company to encrypt all data at rest.
An audit discovers that employees have created Amazon Elastic Block Store (Amazon EBS) volumes for EC2 instances without encrypting the
volumes. The company wants any new EC2 instances that any IAM user or root user launches in ap-southeast-2 to use encrypted EBS volumes.
The company wants a solution that will have minimal effect on employees who create EBS volumes.
A. In the Amazon EC2 console, select the EBS encryption account attribute and define a default encryption key.
B. Create an IAM permission boundary. Attach the permission boundary to the root organizational unit (OU). Define the boundary to deny the
ec2:CreateVolume action when the ec2:Encrypted condition equals false.
C. Create an SCP. Attach the SCP to the root organizational unit (OU). Define the SCP to deny the ec2:CreateVolume action when the
ec2:Encrypted condition equals false.
D. Update the IAM policies for each account to deny the ec2:CreateVolume action when the ec2:Encrypted condition equals false.
E. In the Organizations management account, specify the Default EBS volume encryption setting.
Correct Answer: AD
Option (C): Creating an SCP and attaching it to the root organizational unit (OU) will deny the ec2:CreateVolume action when the
ec2:Encrypted condition equals false. This means that any IAM user or root user in any account in the organization will not be able to
create an EBS volume without encrypting it.
Option (E): Specifying the Default EBS volume encryption setting in the Organizations management account will ensure that all new EBS
volumes created in any account in the organization are encrypted by default.
upvoted 1 times
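A minimal sketch of creating and attaching such an SCP with boto3, assuming hypothetical policy and root OU identifiers; the condition uses the ec2:Encrypted key described above.

```python
import json
import boto3

# Hypothetical SCP that denies creating unencrypted EBS volumes.
scp = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Deny",
        "Action": "ec2:CreateVolume",
        "Resource": "*",
        "Condition": {"Bool": {"ec2:Encrypted": "false"}},
    }],
}

org = boto3.client("organizations")
policy = org.create_policy(
    Name="deny-unencrypted-ebs",
    Description="Deny ec2:CreateVolume when ec2:Encrypted is false",
    Type="SERVICE_CONTROL_POLICY",
    Content=json.dumps(scp),
)

# Attach the SCP to the root OU (the TargetId is a placeholder root ID).
org.attach_policy(
    PolicyId=policy["Policy"]["PolicySummary"]["Id"],
    TargetId="r-examp",
)
```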
A company wants to use an Amazon RDS for PostgreSQL DB cluster to simplify time-consuming database administrative tasks for production
database workloads. The company wants to ensure that its database is highly available and will provide automatic failover support in most
scenarios in less than 40 seconds. The company wants to offload reads off of the primary instance and keep costs as low as possible.
A. Use an Amazon RDS Multi-AZ DB instance deployment. Create one read replica and point the read workload to the read replica.
B. Use an Amazon RDS Multi-AZ DB cluster deployment. Create two read replicas and point the read workload to the read replicas.
C. Use an Amazon RDS Multi-AZ DB instance deployment. Point the read workload to the secondary instances in the Multi-AZ pair.
D. Use an Amazon RDS Multi-AZ DB cluster deployment. Point the read workload to the reader endpoint.
Correct Answer: A
Explanation:
The company wants high availability, automatic failover support in less than 40 seconds, read offloading from the primary instance, and
cost-effectiveness.
1. Amazon RDS Multi-AZ deployments provide high availability and automatic failover support.
2. In a Multi-AZ DB cluster, Amazon RDS automatically provisions and maintains a standby in a different Availability Zone. If a failure
occurs, Amazon RDS performs an automatic failover to the standby, minimizing downtime.
3. The "Reader endpoint" for an Amazon RDS DB cluster provides load-balancing support for read-only connections to the DB cluster.
Directing read traffic to the reader endpoint helps in offloading read operations from the primary instance.
upvoted 6 times
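A small sketch of how an application would discover the cluster's writer and reader endpoints with boto3; the cluster identifier is a hypothetical placeholder.

```python
import boto3

rds = boto3.client("rds")

# "orders-cluster" is a hypothetical Multi-AZ DB cluster identifier.
cluster = rds.describe_db_clusters(DBClusterIdentifier="orders-cluster")["DBClusters"][0]

# Send writes to the cluster endpoint and reads to the reader endpoint,
# which load-balances connections across the readable standbys.
print("writer:", cluster["Endpoint"])
print("reader:", cluster["ReaderEndpoint"])
```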
Multi-AZ DB clusters typically have lower write latency when compared to Multi-AZ DB instance deployments. They also allow read-only
workloads to run on reader DB instances.
upvoted 1 times
Amazon RDS Multi-AZ with two readable standbys fails over automatically, typically in under 35 seconds.
https://aws.amazon.com/rds/features/multi-az/
upvoted 2 times
A company runs a highly available SFTP service. The SFTP service uses two Amazon EC2 Linux instances that run with elastic IP addresses to
accept traffic from trusted IP sources on the internet. The SFTP service is backed by shared storage that is attached to the instances. User
accounts are created and managed as Linux users in the SFTP servers.
The company wants a serverless option that provides high IOPS performance and highly configurable security. The company also wants to
maintain control over user permissions.
A. Create an encrypted Amazon Elastic Block Store (Amazon EBS) volume. Create an AWS Transfer Family SFTP service with a public endpoint
that allows only trusted IP addresses. Attach the EBS volume to the SFTP service endpoint. Grant users access to the SFTP service.
B. Create an encrypted Amazon Elastic File System (Amazon EFS) volume. Create an AWS Transfer Family SFTP service with elastic IP
addresses and a VPC endpoint that has internet-facing access. Attach a security group to the endpoint that allows only trusted IP addresses.
Attach the EFS volume to the SFTP service endpoint. Grant users access to the SFTP service.
C. Create an Amazon S3 bucket with default encryption enabled. Create an AWS Transfer Family SFTP service with a public endpoint that
allows only trusted IP addresses. Attach the S3 bucket to the SFTP service endpoint. Grant users access to the SFTP service.
D. Create an Amazon S3 bucket with default encryption enabled. Create an AWS Transfer Family SFTP service with a VPC endpoint that has
internal access in a private subnet. Attach a security group that allows only trusted IP addresses. Attach the S3 bucket to the SFTP service
endpoint. Grant users access to the SFTP service.
Correct Answer: C
A) Transfer Family SFTP does not support EBS, EBS is not shared storage, and it is not serverless: infeasible.
B) EFS is mounted through ENIs, not attached to an SFTP endpoint as described: infeasible.
D) A public endpoint for internet access is missing: infeasible.
upvoted 3 times
omoakin 4 months ago
Selected Answer: B
upvoted 1 times
A company is developing a new machine learning (ML) model solution on AWS. The models are developed as independent microservices that
fetch approximately 1 GB of model data from Amazon S3 at startup and load the data into memory. Users access the models through an
asynchronous API. Users can send a request or a batch of requests and specify where the results should be sent.
The company provides models to hundreds of users. The usage patterns for the models are irregular. Some models could be unused for days or
weeks. Other models could receive batches of thousands of requests at a time.
A. Direct the requests from the API to a Network Load Balancer (NLB). Deploy the models as AWS Lambda functions that are invoked by the
NLB.
B. Direct the requests from the API to an Application Load Balancer (ALB). Deploy the models as Amazon Elastic Container Service (Amazon
ECS) services that read from an Amazon Simple Queue Service (Amazon SQS) queue. Use AWS App Mesh to scale the instances of the ECS
cluster based on the SQS queue size.
C. Direct the requests from the API into an Amazon Simple Queue Service (Amazon SQS) queue. Deploy the models as AWS Lambda functions
that are invoked by SQS events. Use AWS Auto Scaling to increase the number of vCPUs for the Lambda functions based on the SQS queue
size.
D. Direct the requests from the API into an Amazon Simple Queue Service (Amazon SQS) queue. Deploy the models as Amazon Elastic
Container Service (Amazon ECS) services that read from the queue. Enable AWS Auto Scaling on Amazon ECS for both the cluster and copies
of the service based on the queue size.
Correct Answer: D
A solutions architect wants to use the following JSON text as an identity-based policy to grant specific permissions:
Which IAM principals can the solutions architect attach this policy to? (Choose two.)
A. Role
B. Group
C. Organization
Correct Answer: AB
A company is running a custom application on Amazon EC2 On-Demand Instances. The application has frontend nodes that need to run 24 hours
a day, 7 days a week and backend nodes that need to run only for a short time based on workload. The number of backend nodes varies during the
day.
The company needs to scale out and scale in more instances based on workload.
A. Use Reserved Instances for the frontend nodes. Use AWS Fargate for the backend nodes.
B. Use Reserved Instances for the frontend nodes. Use Spot Instances for the backend nodes.
C. Use Spot Instances for the frontend nodes. Use Reserved Instances for the backend nodes.
D. Use Spot Instances for the frontend nodes. Use AWS Fargate for the backend nodes.
Correct Answer: B
Frontend nodes that need to run 24 hours a day, 7 days a week = Reserved Instances
Backend nodes run only for a short time = Spot Instances
upvoted 1 times
A company uses high-capacity block storage to run its workloads on premises. The company's daily peak input and output transactions per
second are not more than 15,000 IOPS. The company wants to migrate the workloads to Amazon EC2 and to provision disk performance
independent of storage capacity.
Which Amazon Elastic Block Store (Amazon EBS) volume type will meet these requirements MOST cost-effectively?
Correct Answer: C
GP3 volumes allow you to provision performance independently from storage capacity, which means you can adjust the baseline
performance (measured in IOPS) and throughput (measured in MiB/s) separately from the volume size. This flexibility allows you to
optimize your costs while meeting the workload requirements.
In this case, since the company's daily peak input and output transactions per second are not more than 15,000 IOPS, GP3 volumes
provide a suitable and cost-effective option for their workloads.
upvoted 1 times
You can only choose IOPS independently with the io family, and io2 is in general better than io1.
upvoted 1 times
A company needs to store data from its healthcare application. The application’s data frequently changes. A new regulation requires audit access
at all levels of the stored data.
The company hosts the application on an on-premises infrastructure that is running out of storage capacity. A solutions architect must securely
migrate the existing data to AWS while satisfying the new regulation.
A. Use AWS DataSync to move the existing data to Amazon S3. Use AWS CloudTrail to log data events.
B. Use AWS Snowcone to move the existing data to Amazon S3. Use AWS CloudTrail to log management events.
C. Use Amazon S3 Transfer Acceleration to move the existing data to Amazon S3. Use AWS CloudTrail to log data events.
D. Use AWS Storage Gateway to move the existing data to Amazon S3. Use AWS CloudTrail to log management events.
Correct Answer: B
Enabling AWS CloudTrail logging of management events will capture the required audit data for all API actions taken on the S3 bucket and
objects.
upvoted 1 times
In this scenario, the company needs to securely migrate its healthcare application data to AWS while satisfying the new regulation for
audit access. By using AWS DataSync, the existing data can be securely transferred to Amazon S3, ensuring the data is stored in a
scalable and durable storage service.
Additionally, using AWS CloudTrail to log data events ensures that all access and activity related to the data stored in Amazon S3 is
audited. This helps meet the regulatory requirement for audit access at all levels of the stored data.
upvoted 1 times
AWS DataSync is a service designed specifically for securely and efficiently transferring large amounts of data between on-premises
storage systems and AWS services like Amazon S3. It provides a reliable and optimized way to migrate data while maintaining data
integrity.
AWS CloudTrail, on the other hand, is a service that logs and monitors management events in your AWS account. While it can capture data
events for certain services, its primary focus is on tracking management actions like API calls and configuration changes.
Therefore, using AWS DataSync to transfer the existing data to Amazon S3 and leveraging AWS CloudTrail to log data events aligns with
the requirement of securely migrating the data and ensuring audit access at all levels, as specified by the new regulation.
upvoted 1 times
By using AWS DataSync, you can securely transfer the data from the on-premises infrastructure to Amazon S3, meeting the requirement
for securely migrating the data. Additionally, AWS CloudTrail can be used to log data events, allowing audit access at all levels of the stored
data.
upvoted 1 times
A solutions architect is implementing a complex Java application with a MySQL database. The Java application must be deployed on Apache
Tomcat and must be highly available.
A. Deploy the application in AWS Lambda. Configure an Amazon API Gateway API to connect with the Lambda functions.
B. Deploy the application by using AWS Elastic Beanstalk. Configure a load-balanced environment and a rolling deployment policy.
C. Migrate the database to Amazon ElastiCache. Configure the ElastiCache security group to allow access from the application.
D. Launch an Amazon EC2 instance. Install a MySQL server on the EC2 instance. Configure the application on the server. Create an AMI. Use
the AMI to create a launch template with an Auto Scaling group.
Correct Answer: B
A serverless application uses Amazon API Gateway, AWS Lambda, and Amazon DynamoDB. The Lambda function needs permissions to read and
write to the DynamoDB table.
Which solution will give the Lambda function access to the DynamoDB table MOST securely?
A. Create an IAM user with programmatic access to the Lambda function. Attach a policy to the user that allows read and write access to the
DynamoDB table. Store the access_key_id and secret_access_key parameters as part of the Lambda environment variables. Ensure that other
AWS users do not have read and write access to the Lambda function configuration.
B. Create an IAM role that includes Lambda as a trusted service. Attach a policy to the role that allows read and write access to the
DynamoDB table. Update the configuration of the Lambda function to use the new role as the execution role.
C. Create an IAM user with programmatic access to the Lambda function. Attach a policy to the user that allows read and write access to the
DynamoDB table. Store the access_key_id and secret_access_key parameters in AWS Systems Manager Parameter Store as secure string
parameters. Update the Lambda function code to retrieve the secure string parameters before connecting to the DynamoDB table.
D. Create an IAM role that includes DynamoDB as a trusted service. Attach a policy to the role that allows read and write access from the
Lambda function. Update the code of the Lambda function to attach to the new role as an execution role.
Correct Answer: B
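A minimal sketch of option B with boto3, assuming hypothetical role, table, and account identifiers: create a role that trusts the Lambda service, then scope an inline policy to the one DynamoDB table.

```python
import json
import boto3

iam = boto3.client("iam")

# Trust policy that lets the Lambda service assume the role.
trust = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Principal": {"Service": "lambda.amazonaws.com"},
        "Action": "sts:AssumeRole",
    }],
}
iam.create_role(
    RoleName="orders-fn-role",  # hypothetical role name
    AssumeRolePolicyDocument=json.dumps(trust),
)

# Inline policy scoped to read/write actions on one table (the ARN is a placeholder).
policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Action": ["dynamodb:GetItem", "dynamodb:PutItem", "dynamodb:UpdateItem",
                   "dynamodb:DeleteItem", "dynamodb:Query"],
        "Resource": "arn:aws:dynamodb:us-east-1:111122223333:table/Orders",
    }],
}
iam.put_role_policy(
    RoleName="orders-fn-role",
    PolicyName="orders-table-access",
    PolicyDocument=json.dumps(policy),
)
```

The Lambda function is then pointed at this role through its execution role setting, so no access keys ever appear in environment variables or code.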
The following IAM policy is attached to an IAM group. This is the only policy applied to the group.
What are the effective IAM permissions of this policy for group members?
A. Group members are permitted any Amazon EC2 action within the us-east-1 Region. Statements after the Allow permission are not applied.
B. Group members are denied any Amazon EC2 permissions in the us-east-1 Region unless they are logged in with multi-factor authentication
(MFA).
C. Group members are allowed the ec2:StopInstances and ec2:TerminateInstances permissions for all Regions when logged in with multi-
factor authentication (MFA). Group members are permitted any other Amazon EC2 action.
D. Group members are allowed the ec2:StopInstances and ec2:TerminateInstances permissions for the us-east-1 Region only when logged in
with multi-factor authentication (MFA). Group members are permitted any other Amazon EC2 action within the us-east-1 Region.
Correct Answer: D
B. "denied any Amazon EC2 permissions in the us-east-1 Region" --> Wrong. Just deny 2 items.
C. "allowed the ec2:StopInstances and ec2:TerminateInstances permissions for all Regions" --> Wrong. Just region us-east-1.
D. ok.
upvoted 1 times
A manufacturing company has machine sensors that upload .csv files to an Amazon S3 bucket. These .csv files must be converted into images
and must be made available as soon as possible for the automatic generation of graphical reports.
The images become irrelevant after 1 month, but the .csv files must be kept to train machine learning (ML) models twice a year. The ML trainings
and audits are planned weeks in advance.
Which combination of steps will meet these requirements MOST cost-effectively? (Choose two.)
A. Launch an Amazon EC2 Spot Instance that downloads the .csv files every hour, generates the image files, and uploads the images to the S3
bucket.
B. Design an AWS Lambda function that converts the .csv files into images and stores the images in the S3 bucket. Invoke the Lambda
function when a .csv file is uploaded.
C. Create S3 Lifecycle rules for .csv files and image files in the S3 bucket. Transition the .csv files from S3 Standard to S3 Glacier 1 day after
they are uploaded. Expire the image files after 30 days.
D. Create S3 Lifecycle rules for .csv files and image files in the S3 bucket. Transition the .csv files from S3 Standard to S3 One Zone-Infrequent
Access (S3 One Zone-IA) 1 day after they are uploaded. Expire the image files after 30 days.
E. Create S3 Lifecycle rules for .csv files and image files in the S3 bucket. Transition the .csv files from S3 Standard to S3 Standard-Infrequent
Access (S3 Standard-IA) 1 day after they are uploaded. Keep the image files in Reduced Redundancy Storage (RRS).
Correct Answer: BC
https://docs.aws.amazon.com/AmazonS3/latest/userguide/lifecycle-transition-general-
considerations.html#:~:text=Before%20you%20transition%20objects%20to%20S3%20Standard%2DIA%20or%20S3%20One%20Zone%2DI
A%2C%20you%20must%20store%20them%20for%20at%20least%2030%20days%20in%20Amazon%20S3
upvoted 1 times
B. CORRECT
C. CORRECT
D. Why store the files in S3 One Zone-Infrequent Access (S3 One Zone-IA) when they are going to be irrelevant after 1 month? (Availability
99.99% - consider cost)
E. Again, why use Reduced Redundancy Storage (RRS) when the files are irrelevant after 1 month? (Availability 99.99% - consider cost)
upvoted 2 times
vesen22 4 months ago
Selected Answer: BC
https://docs.aws.amazon.com/amazonglacier/latest/dev/introduction.html
upvoted 3 times
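A sketch of the two lifecycle rules behind B and C, assuming the .csv files and the generated images are written under separate prefixes (S3 lifecycle filters match prefixes or tags, not file extensions); the bucket and prefix names are placeholders.

```python
import boto3

s3 = boto3.client("s3")

# "sensor-data" and the csv/ and images/ prefixes are hypothetical.
s3.put_bucket_lifecycle_configuration(
    Bucket="sensor-data",
    LifecycleConfiguration={
        "Rules": [
            {   # Archive the .csv files one day after upload; they are only read
                # for ML training that is planned weeks in advance.
                "ID": "archive-csv",
                "Filter": {"Prefix": "csv/"},
                "Status": "Enabled",
                "Transitions": [{"Days": 1, "StorageClass": "GLACIER"}],
            },
            {   # Expire the generated images after 30 days.
                "ID": "expire-images",
                "Filter": {"Prefix": "images/"},
                "Status": "Enabled",
                "Expiration": {"Days": 30},
            },
        ]
    },
)
```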
A company has developed a new video game as a web application. The application is in a three-tier architecture in a VPC with Amazon RDS for
MySQL in the database layer. Several players will compete concurrently online. The game’s developers want to display a top-10 scoreboard in near-
real time and offer the ability to stop and restore the game while preserving the current scores.
A. Set up an Amazon ElastiCache for Memcached cluster to cache the scores for the web application to display.
B. Set up an Amazon ElastiCache for Redis cluster to compute and cache the scores for the web application to display.
C. Place an Amazon CloudFront distribution in front of the web application to cache the scoreboard in a section of the application.
D. Create a read replica on Amazon RDS for MySQL to run queries to compute the scoreboard and serve the read traffic to the web application.
Correct Answer: B
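The reason Redis fits here is its sorted set type, which keeps scores ordered as they are written, plus its persistence options for the stop-and-restore requirement (Memcached offers neither). A minimal redis-py sketch, with a hypothetical ElastiCache endpoint and key names:

```python
import redis

# Hypothetical ElastiCache for Redis endpoint.
r = redis.Redis(host="scores.example.cache.amazonaws.com", port=6379)

# Record or update each player's score in a sorted set.
r.zadd("leaderboard", {"player:42": 1800, "player:7": 2500})

# Top-10 scoreboard in near-real time: highest scores first.
top10 = r.zrevrange("leaderboard", 0, 9, withscores=True)
print(top10)
```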
An ecommerce company wants to use machine learning (ML) algorithms to build and train models. The company will use the models to visualize
complex scenarios and to detect trends in customer data. The architecture team wants to integrate its ML models with a reporting platform to
analyze the augmented data and use the data directly in its business intelligence dashboards.
Which solution will meet these requirements with the LEAST operational overhead?
A. Use AWS Glue to create an ML transform to build and train models. Use Amazon OpenSearch Service to visualize the data.
B. Use Amazon SageMaker to build and train models. Use Amazon QuickSight to visualize the data.
C. Use a pre-built ML Amazon Machine Image (AMI) from the AWS Marketplace to build and train models. Use Amazon OpenSearch Service to
visualize the data.
D. Use Amazon QuickSight to build and train models by using calculated fields. Use Amazon QuickSight to visualize the data.
Correct Answer: B
A company is running its production and nonproduction environment workloads in multiple AWS accounts. The accounts are in an organization in
AWS Organizations. The company needs to design a solution that will prevent the modification of cost usage tags.
A. Create a custom AWS Config rule to prevent tag modification except by authorized principals.
C. Create a service control policy (SCP) to prevent tag modification except by authorized principals.
Correct Answer: C
AWS Organizations supports service control policies (SCPs) and tag policies:
https://docs.aws.amazon.com/organizations/latest/userguide/orgs_manage_policies_tag-policies.html . Choose C.
AWS Config is for resource configuration compliance, not for preventing tag modification, so not A.
upvoted 1 times
A company hosts its application in the AWS Cloud. The application runs on Amazon EC2 instances behind an Elastic Load Balancer in an Auto
Scaling group, with an Amazon DynamoDB table. The company wants to ensure that the application can be made available in another AWS Region
with minimal downtime.
What should a solutions architect do to meet these requirements with the LEAST amount of downtime?
A. Create an Auto Scaling group and a load balancer in the disaster recovery Region. Configure the DynamoDB table as a global table.
Configure DNS failover to point to the new disaster recovery Region's load balancer.
B. Create an AWS CloudFormation template to create EC2 instances, load balancers, and DynamoDB tables to be launched when needed
Configure DNS failover to point to the new disaster recovery Region's load balancer.
C. Create an AWS CloudFormation template to create EC2 instances and a load balancer to be launched when needed. Configure the
DynamoDB table as a global table. Configure DNS failover to point to the new disaster recovery Region's load balancer.
D. Create an Auto Scaling group and load balancer in the disaster recovery Region. Configure the DynamoDB table as a global table. Create an
Amazon CloudWatch alarm to trigger an AWS Lambda function that updates Amazon Route 53 pointing to the disaster recovery load balancer.
Correct Answer: A
By leveraging an Amazon CloudWatch alarm, Option D allows for an automated failover mechanism. When triggered, the CloudWatch
alarm can execute an AWS Lambda function, which in turn can update the DNS records in Amazon Route 53 to redirect traffic to the
disaster recovery load balancer in the new Region. This automation helps reduce the potential for human error and further minimizes
downtime.
Answer is D
upvoted 2 times
D
upvoted 1 times
A company needs to migrate a MySQL database from its on-premises data center to AWS within 2 weeks. The database is 20 TB in size. The
company wants to complete the migration with minimal downtime.
A. Order an AWS Snowball Edge Storage Optimized device. Use AWS Database Migration Service (AWS DMS) with AWS Schema Conversion
Tool (AWS SCT) to migrate the database with replication of ongoing changes. Send the Snowball Edge device to AWS to finish the migration
and continue the ongoing replication.
B. Order an AWS Snowmobile vehicle. Use AWS Database Migration Service (AWS DMS) with AWS Schema Conversion Tool (AWS SCT) to
migrate the database with ongoing changes. Send the Snowmobile vehicle back to AWS to finish the migration and continue the ongoing
replication.
C. Order an AWS Snowball Edge Compute Optimized with GPU device. Use AWS Database Migration Service (AWS DMS) with AWS Schema
Conversion Tool (AWS SCT) to migrate the database with ongoing changes. Send the Snowball device to AWS to finish the migration and
continue the ongoing replication
D. Order a 1 Gbps dedicated AWS Direct Connect connection to establish a connection with the data center. Use AWS Database Migration
Service (AWS DMS) with AWS Schema Conversion Tool (AWS SCT) to migrate the database with replication of ongoing changes.
Correct Answer: D
A company moved its on-premises PostgreSQL database to an Amazon RDS for PostgreSQL DB instance. The company successfully launched a
new product. The workload on the database has increased. The company wants to accommodate the larger workload without adding
infrastructure.
A. Buy reserved DB instances for the total workload. Make the Amazon RDS for PostgreSQL DB instance larger.
C. Buy reserved DB instances for the total workload. Add another Amazon RDS for PostgreSQL DB instance.
Correct Answer: A
Making the RDS PostgreSQL instance Multi-AZ adds a standby replica to handle larger workloads and provides high availability.
Even though it adds infrastructure, the cost is less than doubling the infrastructure with a separate DB instance.
It provides better performance, availability, and disaster recovery than a single larger instance.
upvoted 2 times
Therefore, the recommended solution is Option C: Buy reserved DB instances for the workload and add another Amazon RDS for
PostgreSQL DB instance to accommodate the increased workload in a cost-effective manner.
upvoted 1 times
A company operates an ecommerce website on Amazon EC2 instances behind an Application Load Balancer (ALB) in an Auto Scaling group. The
site is experiencing performance issues related to a high request rate from illegitimate external systems with changing IP addresses. The security
team is worried about potential DDoS attacks against the website. The company must block the illegitimate incoming requests in a way that has a
minimal impact on legitimate users.
B. Deploy AWS WAF, associate it with the ALB, and configure a rate-limiting rule.
C. Deploy rules to the network ACLs associated with the ALB to block the incoming traffic.
D. Deploy Amazon GuardDuty and enable rate-limiting protection when configuring GuardDuty.
Correct Answer: B
The question's keyword is "high request rate"; the answer's keyword is "rate-limiting rule". See
https://docs.aws.amazon.com/waf/latest/developerguide/waf-rate-based-example-limit-login-page-keys.html
In this scenario, the company is facing performance issues due to a high request rate from illegitimate external systems with changing IP
addresses. By configuring a rate-limiting rule in AWS WAF, the company can restrict the number of requests coming from each IP address,
preventing excessive traffic from overwhelming the website. This will help mitigate the impact of potential DDoS attacks and ensure that
legitimate users can access the site without interruption.
upvoted 3 times
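A hedged sketch of such a rate-based rule with boto3 (wafv2); the ACL name, the 2,000-requests-per-5-minutes limit, and the ALB ARN are assumptions.

```python
import boto3

wafv2 = boto3.client("wafv2")

# Web ACL with a single rate-based rule; names and ARNs below are hypothetical.
acl = wafv2.create_web_acl(
    Name="ecommerce-waf",
    Scope="REGIONAL",  # REGIONAL scope is required to protect an ALB
    DefaultAction={"Allow": {}},
    Rules=[{
        "Name": "rate-limit-per-ip",
        "Priority": 0,
        "Statement": {"RateBasedStatement": {"Limit": 2000, "AggregateKeyType": "IP"}},
        "Action": {"Block": {}},
        "VisibilityConfig": {"SampledRequestsEnabled": True,
                             "CloudWatchMetricsEnabled": True,
                             "MetricName": "rate-limit-per-ip"},
    }],
    VisibilityConfig={"SampledRequestsEnabled": True,
                      "CloudWatchMetricsEnabled": True,
                      "MetricName": "ecommerce-waf"},
)

# Associate the web ACL with the ALB (placeholder ARN).
wafv2.associate_web_acl(
    WebACLArn=acl["Summary"]["ARN"],
    ResourceArn="arn:aws:elasticloadbalancing:us-east-1:111122223333:loadbalancer/app/site/abc123",
)
```

Legitimate users stay well under the per-IP limit, so only the high-rate sources are blocked.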
https://aws.amazon.com/blogs/security/how-to-use-amazon-guardduty-and-aws-web-application-firewall-to-automatically-block-
suspicious-hosts/
upvoted 2 times
A company wants to share accounting data with an external auditor. The data is stored in an Amazon RDS DB instance that resides in a private
subnet. The auditor has its own AWS account and requires its own copy of the database.
What is the MOST secure way for the company to share the database with the auditor?
A. Create a read replica of the database. Configure IAM standard database authentication to grant the auditor access.
B. Export the database contents to text files. Store the files in an Amazon S3 bucket. Create a new IAM user for the auditor. Grant the user
access to the S3 bucket.
C. Copy a snapshot of the database to an Amazon S3 bucket. Create an IAM user. Share the user's keys with the auditor to grant access to the
object in the S3 bucket.
D. Create an encrypted snapshot of the database. Share the snapshot with the auditor. Allow access to the AWS Key Management Service
(AWS KMS) encryption key.
Correct Answer: D
By creating an encrypted snapshot, the company ensures that the database data is protected at rest. Sharing the encrypted snapshot with
the auditor allows them to have their own copy of the database securely.
In addition, granting access to the AWS KMS encryption key ensures that the auditor has the necessary permissions to decrypt and access
the encrypted snapshot. This allows the auditor to restore the snapshot and access the data securely.
This approach provides both data protection and access control, ensuring that the database is securely shared with the auditor while
maintaining the confidentiality and integrity of the data.
upvoted 5 times
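A minimal boto3 sketch of option D, with hypothetical snapshot, key, and account identifiers: share the snapshot with the auditor's account, then grant that account use of the customer managed KMS key so it can copy and restore the snapshot.

```python
import boto3

rds = boto3.client("rds")
kms = boto3.client("kms")

AUDITOR_ACCOUNT = "444455556666"  # hypothetical auditor account ID

# Share the KMS-encrypted snapshot with the auditor's account.
rds.modify_db_snapshot_attribute(
    DBSnapshotIdentifier="accounting-db-snap",
    AttributeName="restore",
    ValuesToAdd=[AUDITOR_ACCOUNT],
)

# Allow the auditor's account to use the customer managed key for the restore.
kms.create_grant(
    KeyId="arn:aws:kms:us-east-1:111122223333:key/1234abcd-12ab-34cd-56ef-1234567890ab",
    GranteePrincipal=f"arn:aws:iam::{AUDITOR_ACCOUNT}:root",
    Operations=["Decrypt", "DescribeKey", "CreateGrant"],
)
```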
A solutions architect configured a VPC that has a small range of IP addresses. The number of Amazon EC2 instances that are in the VPC is
increasing, and there is an insufficient number of IP addresses for future workloads.
Which solution resolves this issue with the LEAST operational overhead?
A. Add an additional IPv4 CIDR block to increase the number of IP addresses and create additional subnets in the VPC. Create new resources
in the new subnets by using the new CIDR.
B. Create a second VPC with additional subnets. Use a peering connection to connect the second VPC with the first VPC Update the routes
and create new resources in the subnets of the second VPC.
C. Use AWS Transit Gateway to add a transit gateway and connect a second VPC with the first VPC. Update the routes of the transit gateway and
VPCs. Create new resources in the subnets of the second VPC.
D. Create a second VPC. Create a Site-to-Site VPN connection between the first VPC and the second VPC by using a VPN-hosted solution on
Amazon EC2 and a virtual private gateway. Update the routes between the VPCs to send traffic through the VPN. Create new resources in the subnets
of the second VPC.
Correct Answer: A
A company used an Amazon RDS for MySQL DB instance during application testing. Before terminating the DB instance at the end of the test
cycle, a solutions architect created two backups. The solutions architect created the first backup by using the mysqldump utility to create a
database dump. The solutions architect created the second backup by enabling the final DB snapshot option on RDS termination.
The company is now planning for a new test cycle and wants to create a new DB instance from the most recent backup. The company has chosen
a MySQL-compatible edition of Amazon Aurora to host the DB instance.
B. Upload the RDS snapshot to Amazon S3. Then import the RDS snapshot into Aurora.
C. Upload the database dump to Amazon S3. Then import the database dump into Aurora.
D. Use AWS Database Migration Service (AWS DMS) to import the RDS snapshot into Aurora.
E. Upload the database dump to Amazon S3. Then use AWS Database Migration Service (AWS DMS) to import the database dump into Aurora.
Correct Answer: AD
The RDS DB snapshot contains backup data in a proprietary format that cannot be directly imported into Aurora.
The mysqldump database dump contains SQL statements that can be imported into Aurora after uploading to S3.
AWS DMS can migrate the dump file from S3 into Aurora.
upvoted 1 times
Exclude B because there is no need to upload the DB snapshot to Amazon S3. Exclude D because there is no need for a migration service.
Exclude E for the same reason. The exclusion method makes this question easier.
Related links:
- Amazon RDS create database snapshot https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/USER_CreateSnapshot.html
- https://aws.amazon.com/rds/aurora/
upvoted 1 times
You can copy the full and incremental backup files from your source MySQL version 5.7 database to an Amazon S3 bucket, and then
restore an Amazon Aurora MySQL DB cluster from those files.
This option can be considerably faster than migrating data using mysqldump, because using mysqldump replays all of the commands to
recreate the schema and data from your source database in your new Aurora MySQL DB cluster.
By copying your source MySQL data files, Aurora MySQL can immediately use those files as the data for an Aurora MySQL DB cluster.
https://docs.aws.amazon.com/AmazonRDS/latest/AuroraUserGuide/AuroraMySQL.Migrating.ExtMySQL.html
upvoted 2 times
C: Because Amazon Aurora MySQL is a MySQL-compatible database, you can use the mysqldump utility to copy data from your MySQL or
MariaDB database to an existing Amazon Aurora MySQL DB cluster.
B: You can copy the source files from your source MySQL version 5.5, 5.6, or 5.7 database to an Amazon S3 bucket, and then restore an
Amazon Aurora MySQL DB cluster from those files.
A company hosts a multi-tier web application on Amazon Linux Amazon EC2 instances behind an Application Load Balancer. The instances run in
an Auto Scaling group across multiple Availability Zones. The company observes that the Auto Scaling group launches more On-Demand
Instances when the application's end users access high volumes of static web content. The company wants to optimize cost.
A. Update the Auto Scaling group to use Reserved Instances instead of On-Demand Instances.
B. Update the Auto Scaling group to scale by launching Spot Instances instead of On-Demand Instances.
C. Create an Amazon CloudFront distribution to host the static web contents from an Amazon S3 bucket.
D. Create an AWS Lambda function behind an Amazon API Gateway API to host the static website contents.
Correct Answer: C
A company stores several petabytes of data across multiple AWS accounts. The company uses AWS Lake Formation to manage its data lake. The
company's data science team wants to securely share selective data from its accounts with the company's engineering team for analytical
purposes.
Which solution will meet these requirements with the LEAST operational overhead?
A. Copy the required data to a common account. Create an IAM access role in that account. Grant access by specifying a permission policy
that includes users from the engineering team accounts as trusted entities.
B. Use the Lake Formation permissions Grant command in each account where the data is stored to allow the required engineering team users
to access the data.
C. Use AWS Data Exchange to privately publish the required data to the required engineering team accounts.
D. Use Lake Formation tag-based access control to authorize and grant cross-account permissions for the required data to the engineering
team accounts.
Correct Answer: D
Using Lake Formation tag-based access control allows granting cross-account permissions to access data in other accounts based on tags,
without having to copy data or configure individual permissions in each account.
This provides a centralized, tag-based way to share selective data across accounts to authorized users with least operational overhead.
upvoted 1 times
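A rough sketch of such a tag-based cross-account grant with boto3; the LF-tag key and values, the permissions, and the engineering account ID are assumptions.

```python
import boto3

lf = boto3.client("lakeformation")

# Grant the engineering account SELECT/DESCRIBE on every table carrying the
# hypothetical LF-tag team=analytics, instead of granting table by table in each account.
lf.grant_permissions(
    Principal={"DataLakePrincipalIdentifier": "444455556666"},  # engineering account
    Resource={
        "LFTagPolicy": {
            "ResourceType": "TABLE",
            "Expression": [{"TagKey": "team", "TagValues": ["analytics"]}],
        }
    },
    Permissions=["SELECT", "DESCRIBE"],
)
```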
A company wants to host a scalable web application on AWS. The application will be accessed by users from different geographic regions of the
world. Application users will be able to download and upload unique data up to gigabytes in size. The development team wants a cost-effective
solution to minimize upload and download latency and maximize performance.
C. Use Amazon EC2 with Auto Scaling and Amazon CloudFront to host the application.
D. Use Amazon EC2 with Auto Scaling and Amazon ElastiCache to host the application.
Correct Answer: A
This means that the content will be served from servers that are closer to the user, which reduces the time it takes for the content to be
delivered. Distributing content across multiple servers also helps to handle spikes in traffic.
upvoted 2 times
Q: How should I choose between S3 Transfer Acceleration and Amazon CloudFront’s PUT/POST?
S3 Transfer Acceleration optimizes the TCP protocol and adds additional intelligence between the client and the S3 bucket, making S3
Transfer Acceleration a better choice if a higher throughput is desired. If you have objects that are smaller than 1 GB or if the data set is
less than 1 GB in size, you should consider using Amazon CloudFront's PUT/POST commands for optimal performance.
https://aws.amazon.com/s3/faqs/?nc1=h_ls
upvoted 1 times
Transfer Acceleration is a feature of Amazon S3 that utilizes the AWS global infrastructure to accelerate file transfers to and from Amazon
S3. It uses optimized network paths and parallelization techniques to speed up data transfer, especially for large files and over long
distances.
By using Amazon S3 with Transfer Acceleration, the web application can benefit from faster upload and download speeds, reducing
latency and improving overall performance for users in different geographic regions. This solution is cost-effective as it leverages the
existing Amazon S3 infrastructure and eliminates the need for additional compute resources.
upvoted 1 times
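A small sketch of enabling Transfer Acceleration and uploading through the accelerate endpoint with boto3; the bucket and object names are placeholders.

```python
import boto3
from botocore.config import Config

s3 = boto3.client("s3")

# Enable Transfer Acceleration on a hypothetical bucket.
s3.put_bucket_accelerate_configuration(
    Bucket="global-uploads",
    AccelerateConfiguration={"Status": "Enabled"},
)

# Clients then upload through the accelerate endpoint by opting in via botocore config.
accel = boto3.client("s3", config=Config(s3={"use_accelerate_endpoint": True}))
accel.upload_file("large-dataset.bin", "global-uploads", "uploads/large-dataset.bin")
```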
Question #444 Topic 1
A company has hired a solutions architect to design a reliable architecture for its application. The application consists of one Amazon RDS DB
instance and two manually provisioned Amazon EC2 instances that run web servers. The EC2 instances are located in a single Availability Zone.
An employee recently deleted the DB instance, and the application was unavailable for 24 hours as a result. The company is concerned with the
overall reliability of its environment.
What should the solutions architect do to maximize reliability of the application's infrastructure?
A. Delete one EC2 instance and enable termination protection on the other EC2 instance. Update the DB instance to be Multi-AZ, and enable
deletion protection.
B. Update the DB instance to be Multi-AZ, and enable deletion protection. Place the EC2 instances behind an Application Load Balancer, and
run them in an EC2 Auto Scaling group across multiple Availability Zones.
C. Create an additional DB instance along with an Amazon API Gateway and an AWS Lambda function. Configure the application to invoke the
Lambda function through API Gateway. Have the Lambda function write the data to the two DB instances.
D. Place the EC2 instances in an EC2 Auto Scaling group that has multiple subnets located in multiple Availability Zones. Use Spot Instances
instead of On-Demand Instances. Set up Amazon CloudWatch alarms to monitor the health of the instances Update the DB instance to be
Multi-AZ, and enable deletion protection.
Correct Answer: B
A company is storing 700 terabytes of data on a large network-attached storage (NAS) system in its corporate data center. The company has a
hybrid environment with a 10 Gbps AWS Direct Connect connection.
After an audit from a regulator, the company has 90 days to move the data to the cloud. The company needs to move the data efficiently and
without disruption. The company still needs to be able to access and update the data during the transfer window.
A. Create an AWS DataSync agent in the corporate data center. Create a data transfer task Start the transfer to an Amazon S3 bucket.
B. Back up the data to AWS Snowball Edge Storage Optimized devices. Ship the devices to an AWS data center. Mount a target Amazon S3
bucket on the on-premises file system.
C. Use rsync to copy the data directly from local storage to a designated Amazon S3 bucket over the Direct Connect connection.
D. Back up the data on tapes. Ship the tapes to an AWS data center. Mount a target Amazon S3 bucket on the on-premises file system.
Correct Answer: A
DataSync allows accessing and updating the data continuously during the transfer process.
upvoted 1 times
A company stores data in PDF format in an Amazon S3 bucket. The company must follow a legal requirement to retain all new and existing data in
Amazon S3 for 7 years.
Which solution will meet these requirements with the LEAST operational overhead?
A. Turn on the S3 Versioning feature for the S3 bucket. Configure S3 Lifecycle to delete the data after 7 years. Configure multi-factor
authentication (MFA) delete for all S3 objects.
B. Turn on S3 Object Lock with governance retention mode for the S3 bucket. Set the retention period to expire after 7 years. Recopy all
existing objects to bring the existing data into compliance.
C. Turn on S3 Object Lock with compliance retention mode for the S3 bucket. Set the retention period to expire after 7 years. Recopy all
existing objects to bring the existing data into compliance.
D. Turn on S3 Object Lock with compliance retention mode for the S3 bucket. Set the retention period to expire after 7 years. Use S3 Batch
Operations to bring the existing data into compliance.
Correct Answer: C
Operational complexity: Option C has a straightforward process of recopying existing objects. It is a well-known operation in S3 and
doesn't require additional setup or management. Option D introduces the need to set up and configure S3 Batch Operations, which can
involve creating job definitions, specifying job parameters, and monitoring the progress of batch operations. This additional complexity
may increase the operational overhead.
upvoted 1 times
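A hedged boto3 sketch of the compliance-mode setup: a 7-year default retention for new objects, plus an explicit retention applied to an existing object. Whether existing objects are handled one by one as below, by recopying them, or through S3 Batch Operations is the operational-overhead trade-off discussed above; the bucket, key, and date are placeholders, and the bucket is assumed to have been created with Object Lock enabled.

```python
import boto3
from datetime import datetime, timezone

s3 = boto3.client("s3")
BUCKET = "legal-pdf-archive"  # hypothetical bucket created with Object Lock enabled

# Default retention for all new objects: compliance mode, 7 years.
s3.put_object_lock_configuration(
    Bucket=BUCKET,
    ObjectLockConfiguration={
        "ObjectLockEnabled": "Enabled",
        "Rule": {"DefaultRetention": {"Mode": "COMPLIANCE", "Years": 7}},
    },
)

# Existing objects do not pick up the default retroactively; applying an explicit
# retention brings them into compliance (key name and date are placeholders).
s3.put_object_retention(
    Bucket=BUCKET,
    Key="contracts/2023-invoice.pdf",
    Retention={"Mode": "COMPLIANCE",
               "RetainUntilDate": datetime(2031, 1, 1, tzinfo=timezone.utc)},
)
```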
A company has a stateless web application that runs on AWS Lambda functions that are invoked by Amazon API Gateway. The company wants to
deploy the application across multiple AWS Regions to provide Regional failover capabilities.
A. Create Amazon Route 53 health checks for each Region. Use an active-active failover configuration.
B. Create an Amazon CloudFront distribution with an origin for each Region. Use CloudFront health checks to route traffic.
C. Create a transit gateway. Attach the transit gateway to the API Gateway endpoint in each Region. Configure the transit gateway to route
requests.
D. Create an Application Load Balancer in the primary Region. Set the target group to point to the API Gateway endpoint hostnames in each
Region.
Correct Answer: A
Option A (creating Amazon Route 53 health checks with an active-active failover configuration) is not suitable for this scenario as it is
primarily used for failover between different endpoints within the same Region, rather than routing traffic to different Regions.
upvoted 1 times
By creating Amazon Route 53 health checks for each Region and configuring an active-active failover configuration, Route 53 can monitor
the health of the endpoints in each Region and route traffic to healthy endpoints. In the event of a failure in one Region, Route 53
automatically routes traffic to the healthy endpoints in other Regions.
This setup ensures high availability and failover capabilities for your web application across multiple AWS Regions.
upvoted 1 times
A company has two VPCs named Management and Production. The Management VPC uses VPNs through a customer gateway to connect to a
single device in the data center. The Production VPC uses a virtual private gateway with two attached AWS Direct Connect connections. The
Management and Production VPCs both use a single VPC peering connection to allow communication between the applications.
What should a solutions architect do to mitigate any single point of failure in this architecture?
B. Add a second virtual private gateway and attach it to the Management VPC.
C. Add a second set of VPNs to the Management VPC from a second customer gateway device.
D. Add a second VPC peering connection between the Management VPC and the Production VPC.
Correct Answer: C
The Management VPC currently has a single VPN connection through one customer gateway device. This is a single point of failure.
Adding a second set of VPN connections from the Management VPC to a second customer gateway device provides redundancy and
eliminates this single point of failure.
upvoted 1 times
To mitigate single points of failure in the architecture, you can consider implementing option C: adding a second set of VPNs to the
Management VPC from a second customer gateway device. This will introduce redundancy at the VPN connection level for the
Management VPC, ensuring that if one customer gateway or VPN connection fails, the other connection can still provide connectivity to
the data center.
upvoted 2 times
A company runs its application on an Oracle database. The company plans to quickly migrate to AWS because of limited resources for the
database, backup administration, and data center maintenance. The application uses third-party database features that require privileged access.
Which solution will help the company migrate the database to AWS MOST cost-effectively?
A. Migrate the database to Amazon RDS for Oracle. Replace third-party features with cloud services.
B. Migrate the database to Amazon RDS Custom for Oracle. Customize the database settings to support third-party features.
C. Migrate the database to an Amazon EC2 Amazon Machine Image (AMI) for Oracle. Customize the database settings to support third-party
features.
D. Migrate the database to Amazon RDS for PostgreSQL by rewriting the application code to remove dependency on Oracle APEX.
Correct Answer: C
A company has a three-tier web application that is in a single server. The company wants to migrate the application to the AWS Cloud. The
company also wants the application to align with the AWS Well-Architected Framework and to be consistent with AWS recommended best
practices for security, scalability, and resiliency.
A. Create a VPC across two Availability Zones with the application's existing architecture. Host the application with existing architecture on an
Amazon EC2 instance in a private subnet in each Availability Zone with EC2 Auto Scaling groups. Secure the EC2 instance with security groups
and network access control lists (network ACLs).
B. Set up security groups and network access control lists (network ACLs) to control access to the database layer. Set up a single Amazon
RDS database in a private subnet.
C. Create a VPC across two Availability Zones. Refactor the application to host the web tier, application tier, and database tier. Host each tier
on its own private subnet with Auto Scaling groups for the web tier and application tier.
D. Use a single Amazon RDS database. Allow database access only from the application tier security group.
E. Use Elastic Load Balancers in front of the web tier. Control access by using security groups containing references to each layer's security
groups.
F. Use an Amazon RDS database Multi-AZ cluster deployment in private subnets. Allow database access only from application tier security
groups.
A company is migrating its applications and databases to the AWS Cloud. The company will use Amazon Elastic Container Service (Amazon ECS),
AWS Direct Connect, and Amazon RDS.
Which activities will be managed by the company's operational team? (Choose three.)
A. Management of the Amazon RDS infrastructure layer, operating system, and platforms
B. Creation of an Amazon RDS DB instance and configuring the scheduled maintenance window
C. Configuration of additional software components on Amazon ECS for monitoring, patch management, log management, and host intrusion
detection
D. Installation of patches for all minor and major database versions for Amazon RDS
E. Ensure the physical security of the Amazon RDS infrastructure in the data center
RDS --> Exclude A (keyword "infrastructure layer"); choose B. Exclude D (keyword "patches for all minor and major database versions for
Amazon RDS"). Exclude E (keyword "Ensure the physical security of the Amazon RDS infrastructure"). Easy question.
upvoted 1 times
A company runs a Java-based job on an Amazon EC2 instance. The job runs every hour and takes 10 seconds to run. The job runs on a scheduled
interval and consumes 1 GB of memory. The CPU utilization of the instance is low except for short surges during which the job uses the maximum
CPU available. The company wants to optimize the costs to run the job.
A. Use AWS App2Container (A2C) to containerize the job. Run the job as an Amazon Elastic Container Service (Amazon ECS) task on AWS
Fargate with 0.5 virtual CPU (vCPU) and 1 GB of memory.
B. Copy the code into an AWS Lambda function that has 1 GB of memory. Create an Amazon EventBridge scheduled rule to run the code each
hour.
C. Use AWS App2Container (A2C) to containerize the job. Install the container in the existing Amazon Machine Image (AMI). Ensure that the
schedule stops the container when the task finishes.
D. Configure the existing schedule to stop the EC2 instance at the completion of the job and restart the EC2 instance when the next job starts.
Correct Answer: B
By using Amazon EventBridge, you can create a scheduled rule to trigger the Lambda function every hour, ensuring that the job runs on
the desired interval.
upvoted 1 times
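A minimal sketch of the EventBridge-to-Lambda wiring with boto3; the rule name, function name, and ARNs are hypothetical.

```python
import boto3

events = boto3.client("events")
lam = boto3.client("lambda")

FUNCTION_ARN = "arn:aws:lambda:us-east-1:111122223333:function:hourly-job"  # placeholder

# Rule that fires once an hour.
rule = events.put_rule(Name="hourly-job-schedule", ScheduleExpression="rate(1 hour)")

# Point the rule at the Lambda function and allow EventBridge to invoke it.
events.put_targets(Rule="hourly-job-schedule",
                   Targets=[{"Id": "hourly-job", "Arn": FUNCTION_ARN}])
lam.add_permission(
    FunctionName="hourly-job",
    StatementId="allow-eventbridge-hourly",
    Action="lambda:InvokeFunction",
    Principal="events.amazonaws.com",
    SourceArn=rule["RuleArn"],
)
```

With 1 GB of memory and a 10-second run every hour, the job costs only what it uses, instead of an always-on instance.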
A company wants to implement a backup strategy for Amazon EC2 data and multiple Amazon S3 buckets. Because of regulatory requirements, the
company must retain backup files for a specific time period. The company must not alter the files for the duration of the retention period.
A. Use AWS Backup to create a backup vault that has a vault lock in governance mode. Create the required backup plan.
B. Use Amazon Data Lifecycle Manager to create the required automated snapshot policy.
C. Use Amazon S3 File Gateway to create the backup. Configure the appropriate S3 Lifecycle management.
D. Use AWS Backup to create a backup vault that has a vault lock in compliance mode. Create the required backup plan.
Correct Answer: A
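A rough boto3 sketch of a locked backup vault; the vault name and retention values are assumptions. Per the AWS Backup documentation, omitting ChangeableForDays leaves the vault lock in governance mode, while setting it starts a cool-off after which the lock becomes immutable (compliance mode), which matches the must-not-alter requirement.

```python
import boto3

backup = boto3.client("backup")

# Hypothetical vault name and retention bounds.
backup.create_backup_vault(BackupVaultName="regulated-backups")

# MinRetentionDays prevents early deletion of recovery points. Setting
# ChangeableForDays makes the lock immutable once the cool-off period ends;
# omitting it keeps the lock changeable (governance mode).
backup.put_backup_vault_lock_configuration(
    BackupVaultName="regulated-backups",
    MinRetentionDays=365,
    ChangeableForDays=3,
)
```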
A company has resources across multiple AWS Regions and accounts. A newly hired solutions architect discovers a previous employee did not
provide details about the resources inventory. The solutions architect needs to build and map the relationship details of the various workloads
across all accounts.
Which solution will meet these requirements in the MOST operationally efficient way?
A. Use AWS Systems Manager Inventory to generate a map view from the detailed view report.
B. Use AWS Step Functions to collect workload details. Build architecture diagrams of the workloads manually.
D. Use AWS X-Ray to view the workload details. Build architecture diagrams with relationships.
Correct Answer: A
To efficiently build and map the relationship details of the various workloads across multiple AWS Regions and accounts, you can use the
AWS Systems Manager Inventory feature in combination with AWS Resource Groups.
upvoted 1 times
nosense 4 months, 2 weeks ago
Selected Answer: C
Only C maps the relationships between workloads.
upvoted 1 times
Question #455 Topic 1
A company uses AWS Organizations. The company wants to operate some of its AWS accounts with different budgets. The company wants to
receive alerts and automatically prevent provisioning of additional resources on AWS accounts when the allocated budget threshold is met during
a specific period.
A. Use AWS Budgets to create a budget. Set the budget amount under the Cost and Usage Reports section of the required AWS accounts.
B. Use AWS Budgets to create a budget. Set the budget amount under the Billing dashboards of the required AWS accounts.
C. Create an IAM user for AWS Budgets to run budget actions with the required permissions.
D. Create an IAM role for AWS Budgets to run budget actions with the required permissions.
E. Add an alert to notify the company when each account meets its budget threshold. Add a budget action that selects the IAM identity
created with the appropriate config rule to prevent provisioning of additional resources.
F. Add an alert to notify the company when each account meets its budget threshold. Add a budget action that selects the IAM identity created
with the appropriate service control policy (SCP) to prevent provisioning of additional resources.
A company runs applications on Amazon EC2 instances in one AWS Region. The company wants to back up the EC2 instances to a second
Region. The company also wants to provision EC2 resources in the second Region and manage the EC2 instances centrally from one AWS
account.
A. Create a disaster recovery (DR) plan that has a similar number of EC2 instances in the second Region. Configure data replication.
B. Create point-in-time Amazon Elastic Block Store (Amazon EBS) snapshots of the EC2 instances. Copy the snapshots to the second Region
periodically.
C. Create a backup plan by using AWS Backup. Configure cross-Region backup to the second Region for the EC2 instances.
D. Deploy a similar number of EC2 instances in the second Region. Use AWS DataSync to transfer the data from the source Region to the
second Region.
Correct Answer: C
AWS Backup provides automated backups across Regions for EC2 instances. This handles the backup requirement.
AWS Backup is more cost-effective for cross-Region EC2 backups than using EBS snapshots manually or DataSync.
upvoted 2 times
A company that uses AWS is building an application to transfer data to a product manufacturer. The company has its own identity provider (IdP).
The company wants the IdP to authenticate application users while the users use the application to transfer data. The company must use
Applicability Statement 2 (AS2) protocol.
A. Use AWS DataSync to transfer the data. Create an AWS Lambda function for IdP authentication.
B. Use Amazon AppFlow flows to transfer the data. Create an Amazon Elastic Container Service (Amazon ECS) task for IdP authentication.
C. Use AWS Transfer Family to transfer the data. Create an AWS Lambda function for IdP authentication.
D. Use AWS Storage Gateway to transfer the data. Create an Amazon Cognito identity pool for IdP authentication.
Correct Answer: C
To authenticate your users, you can use your existing identity provider with AWS Transfer Family. You integrate your identity provider
using an AWS Lambda function, which authenticates and authorizes your users for access to Amazon S3 or Amazon Elastic File System
(Amazon EFS).
https://docs.aws.amazon.com/transfer/latest/userguide/custom-identity-provider-users.html
upvoted 1 times
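A very rough sketch of that Lambda-backed identity provider, following the request/response fields described in the linked documentation; the role ARN, home directory, and the stand-in _idp_authenticates check are placeholders for the company's own IdP integration.

```python
import os

def lambda_handler(event, context):
    """Custom identity provider handler for AWS Transfer Family.

    Transfer Family passes the username, password (for password auth), serverId,
    protocol, and sourceIp; returning an empty dict rejects the login.
    """
    username = event.get("username", "")
    password = event.get("password", "")

    # Placeholder check -- a real function would call the company's IdP here.
    if not _idp_authenticates(username, password):
        return {}

    return {
        "Role": os.environ.get("USER_ROLE_ARN",
                               "arn:aws:iam::111122223333:role/transfer-user"),
        "HomeDirectory": f"/my-transfer-bucket/{username}",
    }

def _idp_authenticates(username, password):
    # Hypothetical stand-in for the external IdP call.
    return bool(username) and bool(password)
```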
By using AWS Storage Gateway, you can set up a gateway that supports the AS2 protocol for data transfer. Additionally, you can configure
authentication using an Amazon Cognito identity pool. Amazon Cognito provides a comprehensive authentication and user management
service that integrates with various identity providers, including your own IdP.
Therefore, Option D is the correct solution as it leverages AWS Storage Gateway for AS2 data transfer and allows authentication using an
Amazon Cognito identity pool integrated with the company's IdP.
upvoted 1 times
AWS Transfer Family does not currently support the AS2 protocol. AS2 is a specific protocol used for secure and reliable data transfer, often
used in business-to-business (B2B) scenarios. In this case, option C, which suggests using AWS Transfer Family, would not meet the
requirement of using the AS2 protocol.
upvoted 2 times
To meet the requirements of using an identity provider (IdP) for user authentication and the AS2 protocol for data transfer, you can
implement the following solution:
AWS Transfer Family: Use AWS Transfer Family to handle the data transfer over the AS2 protocol. Transfer Family provides fully managed,
highly available file transfer endpoints in the AWS Cloud.
The Lambda authorizer authenticates the token with the third-party identity provider.
upvoted 1 times
Both options D and C are valid solutions for the given requirements. The choice between them would depend on additional factors
such as specific preferences, existing infrastructure, and overall architectural considerations.
upvoted 2 times
Question #458 Topic 1
A solutions architect is designing a REST API in Amazon API Gateway for a cash payback service. The application requires 1 GB of memory and 2
GB of storage for its computation resources. The application will require that the data is in a relational format.
Which additional combination of AWS services will meet these requirements with the LEAST administrative effort? (Choose two.)
A. Amazon EC2
B. AWS Lambda
C. Amazon RDS
D. Amazon DynamoDB
Correct Answer: BC
A company uses AWS Organizations to run workloads within multiple AWS accounts. A tagging policy adds department tags to AWS resources
when the company creates tags.
An accounting team needs to determine spending on Amazon EC2 consumption. The accounting team must determine which departments are
responsible for the costs regardless of AWS account. The accounting team has access to AWS Cost Explorer for all AWS accounts within the
organization and needs to access all reports from Cost Explorer.
Which solution meets these requirements in the MOST operationally efficient way?
A. From the Organizations management account billing console, activate a user-defined cost allocation tag named department. Create one
cost report in Cost Explorer grouping by tag name, and filter by EC2.
B. From the Organizations management account billing console, activate an AWS-defined cost allocation tag named department. Create one
cost report in Cost Explorer grouping by tag name, and filter by EC2.
C. From the Organizations member account billing console, activate a user-defined cost allocation tag named department. Create one cost
report in Cost Explorer grouping by the tag name, and filter by EC2.
D. From the Organizations member account billing console, activate an AWS-defined cost allocation tag named department. Create one cost
report in Cost Explorer grouping by tag name, and filter by EC2.
Correct Answer: C
A company wants to securely exchange data between its software as a service (SaaS) application Salesforce account and Amazon S3. The
company must encrypt the data at rest by using AWS Key Management Service (AWS KMS) customer managed keys (CMKs). The company must
also encrypt the data in transit. The company has enabled API access for the Salesforce account.
A. Create AWS Lambda functions to transfer the data securely from Salesforce to Amazon S3.
B. Create an AWS Step Functions workflow. Define the task to transfer the data securely from Salesforce to Amazon S3.
C. Create Amazon AppFlow flows to transfer the data securely from Salesforce to Amazon S3.
D. Create a custom connector for Salesforce to transfer the data securely from Salesforce to Amazon S3.
Correct Answer: C
A company is developing a mobile gaming app in a single AWS Region. The app runs on multiple Amazon EC2 instances in an Auto Scaling group.
The company stores the app data in Amazon DynamoDB. The app communicates by using TCP traffic and UDP traffic between the users and the
servers. The application will be used globally. The company wants to ensure the lowest possible latency for all users.
A. Use AWS Global Accelerator to create an accelerator. Create an Application Load Balancer (ALB) behind an accelerator endpoint that uses
Global Accelerator integration and listening on the TCP and UDP ports. Update the Auto Scaling group to register instances on the ALB.
B. Use AWS Global Accelerator to create an accelerator. Create a Network Load Balancer (NLB) behind an accelerator endpoint that uses
Global Accelerator integration and listening on the TCP and UDP ports. Update the Auto Scaling group to register instances on the NLB.
C. Create an Amazon CloudFront content delivery network (CDN) endpoint. Create a Network Load Balancer (NLB) behind the endpoint and
listening on the TCP and UDP ports. Update the Auto Scaling group to register instances on the NLB. Update CloudFront to use the NLB as the
origin.
D. Create an Amazon CloudFront content delivery network (CDN) endpoint. Create an Application Load Balancer (ALB) behind the endpoint
and listening on the TCP and UDP ports. Update the Auto Scaling group to register instances on the ALB. Update CloudFront to use the ALB as
the origin.
Correct Answer: A
A company has an application that processes customer orders. The company hosts the application on an Amazon EC2 instance that saves the
orders to an Amazon Aurora database. Occasionally when traffic is high the workload does not process orders fast enough.
What should a solutions architect do to write the orders reliably to the database as quickly as possible?
A. Increase the instance size of the EC2 instance when traffic is high. Write orders to Amazon Simple Notification Service (Amazon SNS).
Subscribe the database endpoint to the SNS topic.
B. Write orders to an Amazon Simple Queue Service (Amazon SQS) queue. Use EC2 instances in an Auto Scaling group behind an Application
Load Balancer to read from the SQS queue and process orders into the database.
C. Write orders to Amazon Simple Notification Service (Amazon SNS). Subscribe the database endpoint to the SNS topic. Use EC2 instances in
an Auto Scaling group behind an Application Load Balancer to read from the SNS topic.
D. Write orders to an Amazon Simple Queue Service (Amazon SQS) queue when the EC2 instance reaches CPU threshold limits. Use scheduled
scaling of EC2 instances in an Auto Scaling group behind an Application Load Balancer to read from the SQS queue and process orders into
the database.
Correct Answer: B
An IoT company is releasing a mattress that has sensors to collect data about a user’s sleep. The sensors will send data to an Amazon S3 bucket.
The sensors collect approximately 2 MB of data every night for each mattress. The company must process and summarize the data for each
mattress. The results need to be available as soon as possible. Data processing will require 1 GB of memory and will finish within 30 seconds.
Correct Answer: C
Note: Lambda allocates CPU power in proportion to the amount of memory configured. You can increase or decrease the memory
and CPU power allocated to your function using the Memory (MB) setting. At 1,769 MB, a function has the equivalent of one vCPU.
4 KB, for all environment variables associated with the function, in aggregate
https://docs.aws.amazon.com/lambda/latest/dg/gettingstarted-limits.html
upvoted 1 times
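For reference, a minimal Python (boto3) sketch of the memory setting discussed above; the function name is a hypothetical placeholder, not from the question.

import boto3

lambda_client = boto3.client("lambda")

# Raise the memory allocation; CPU is allocated proportionally to memory,
# so 1,769 MB corresponds to roughly one vCPU.
lambda_client.update_function_configuration(
    FunctionName="mattress-data-processor",  # hypothetical function name
    MemorySize=1024,                         # 1 GB, per the question's requirement
    Timeout=30,                              # the job finishes within 30 seconds
)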
A company hosts an online shopping application that stores all orders in an Amazon RDS for PostgreSQL Single-AZ DB instance. Management
wants to eliminate single points of failure and has asked a solutions architect to recommend an approach to minimize database downtime without
requiring any changes to the application code.
A. Convert the existing database instance to a Multi-AZ deployment by modifying the database instance and specifying the Multi-AZ option.
B. Create a new RDS Multi-AZ deployment. Take a snapshot of the current RDS instance and restore the new Multi-AZ deployment with the
snapshot.
C. Create a read-only replica of the PostgreSQL database in another Availability Zone. Use Amazon Route 53 weighted record sets to distribute
requests across the databases.
D. Place the RDS for PostgreSQL database in an Amazon EC2 Auto Scaling group with a minimum group size of two. Use Amazon Route 53
weighted record sets to distribute requests across instances.
Correct Answer: A
Overall, option A offers a cost-effective and efficient way to minimize database downtime without requiring significant changes or
additional complexities.
upvoted 2 times
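As a rough illustration of option A, the in-place conversion could look like the following boto3 sketch (the DB instance identifier is a hypothetical placeholder):

import boto3

rds = boto3.client("rds")

# Convert the existing Single-AZ instance to Multi-AZ in place.
# ApplyImmediately=False defers the change to the next maintenance window
# to minimize disruption.
rds.modify_db_instance(
    DBInstanceIdentifier="orders-postgres",  # hypothetical instance identifier
    MultiAZ=True,
    ApplyImmediately=False,
)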
A company is developing an application to support customer demands. The company wants to deploy the application on multiple Amazon EC2
Nitro-based instances within the same Availability Zone. The company also wants to give the application the ability to write to multiple block
storage volumes in multiple EC2 Nitro-based instances simultaneously to achieve higher application availability.
A. Use General Purpose SSD (gp3) EBS volumes with Amazon Elastic Block Store (Amazon EBS) Multi-Attach
B. Use Throughput Optimized HDD (st1) EBS volumes with Amazon Elastic Block Store (Amazon EBS) Multi-Attach
C. Use Provisioned IOPS SSD (io2) EBS volumes with Amazon Elastic Block Store (Amazon EBS) Multi-Attach
D. Use General Purpose SSD (gp2) EBS volumes with Amazon Elastic Block Store (Amazon EBS) Multi-Attach
Correct Answer: C
https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ebs-volumes-multi.html#:~:text=Multi%2DAttach%20is%20supported%20exclusively%20on%20Provisioned%20IOPS%20SSD%20(io1%20and%20io2)%20volumes.
upvoted 1 times
Multi-Attach enabled volumes can be attached to up to 16 instances built on the Nitro System that are in the same Availability Zone.
Multi-Attach is supported exclusively on Provisioned IOPS SSD (io1 or io2) volumes.
upvoted 2 times
While both option C and option D can support Amazon EBS Multi-Attach, using Provisioned IOPS SSD (io2) EBS volumes provides higher
performance and lower latency compared to General Purpose SSD (gp2) volumes. This makes io2 volumes better suited for demanding
and mission-critical applications where performance is crucial.
If the goal is to achieve higher application availability and ensure optimal performance, using Provisioned IOPS SSD (io2) EBS volumes with
Multi-Attach will provide the best results.
upvoted 1 times
Also, FYI: gp3 is the volume type that gives a good balance of performance and cost, so gp2 is the wrong choice in every way.
upvoted 1 times
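A hedged boto3 sketch of the io2 Multi-Attach setup described above; volume size, IOPS, and instance IDs are hypothetical placeholders:

import boto3

ec2 = boto3.client("ec2")

# Create a Provisioned IOPS SSD (io2) volume with Multi-Attach enabled (option C).
volume = ec2.create_volume(
    AvailabilityZone="us-east-1a",   # all attached Nitro instances must be in this AZ
    VolumeType="io2",
    Size=100,                        # GiB, hypothetical
    Iops=3000,                       # hypothetical
    MultiAttachEnabled=True,
)

# Wait until the volume is available, then attach it to several Nitro-based
# instances (Multi-Attach supports up to 16 instances in the same AZ).
ec2.get_waiter("volume_available").wait(VolumeIds=[volume["VolumeId"]])
for instance_id in ["i-0123456789abcdef0", "i-0fedcba9876543210"]:  # hypothetical IDs
    ec2.attach_volume(
        VolumeId=volume["VolumeId"],
        InstanceId=instance_id,
        Device="/dev/sdf",
    )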
A company designed a stateless two-tier application that uses Amazon EC2 in a single Availability Zone and an Amazon RDS Multi-AZ DB
instance. New company management wants to ensure the application is highly available.
A. Configure the application to use Multi-AZ EC2 Auto Scaling and create an Application Load Balancer
B. Configure the application to take snapshots of the EC2 instances and send them to a different AWS Region
C. Configure the application to use Amazon Route 53 latency-based routing to feed requests to the application
D. Configure Amazon Route 53 rules to handle incoming requests and create a Multi-AZ Application Load Balancer
Correct Answer: A
A company uses AWS Organizations. A member account has purchased a Compute Savings Plan. Because of changes in the workloads inside the
member account, the account no longer receives the full benefit of the Compute Savings Plan commitment. The company uses less than 50% of
its purchased compute power.
A. Turn on discount sharing from the Billing Preferences section of the account console in the member account that purchased the Compute
Savings Plan.
B. Turn on discount sharing from the Billing Preferences section of the account console in the company's Organizations management account.
C. Migrate additional compute workloads from another AWS account to the account that has the Compute Savings Plan.
D. Sell the excess Savings Plan commitment in the Reserved Instance Marketplace.
Correct Answer: B
Sign in to the AWS Management Console and open the AWS Billing console at https://console.aws.amazon.com/billing/.
https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ri-market-general.html
upvoted 1 times
https://docs.aws.amazon.com/awsaccountbilling/latest/aboutv2/ri-turn-off.html#:~:text=choose%20Save.-,Turning%20on%20shared%20reserved%20instances%20and%20Savings%20Plans%20discounts,-You%20can%20use
upvoted 1 times
A company is developing a microservices application that will provide a search catalog for customers. The company must use REST APIs to
present the frontend of the application to users. The REST APIs must access the backend services that the company hosts in containers in private
VPC subnets.
A. Design a WebSocket API by using Amazon API Gateway. Host the application in Amazon Elastic Container Service (Amazon ECS) in a
private subnet. Create a private VPC link for API Gateway to access Amazon ECS.
B. Design a REST API by using Amazon API Gateway. Host the application in Amazon Elastic Container Service (Amazon ECS) in a private
subnet. Create a private VPC link for API Gateway to access Amazon ECS.
C. Design a WebSocket API by using Amazon API Gateway. Host the application in Amazon Elastic Container Service (Amazon ECS) in a
private subnet. Create a security group for API Gateway to access Amazon ECS.
D. Design a REST API by using Amazon API Gateway. Host the application in Amazon Elastic Container Service (Amazon ECS) in a private
subnet. Create a security group for API Gateway to access Amazon ECS.
Correct Answer: B
Amazon ECS in a Private Subnet: Hosting the application in Amazon ECS in a private subnet ensures that the containers are securely
deployed within the VPC and not directly exposed to the public internet.
Private VPC Link: To enable the REST API in API Gateway to access the backend services hosted in Amazon ECS, you can create a private
VPC link. This establishes a private network connection between the API Gateway and ECS containers, allowing secure communication
without traversing the public internet.
upvoted 4 times
nosense 4 months, 2 weeks ago
Selected Answer: B
B is right, because the VPC link provides a secure connection to the service in the private subnet.
upvoted 2 times
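For illustration only, a boto3 sketch of the private VPC link described above; for REST APIs the VPC link targets a Network Load Balancer that fronts the ECS service, and the NLB ARN shown is a hypothetical placeholder:

import boto3

apigw = boto3.client("apigateway")

# The REST API's private integration reaches the containers through this VPC link.
vpc_link = apigw.create_vpc_link(
    name="catalog-backend-link",
    targetArns=[
        "arn:aws:elasticloadbalancing:us-east-1:111122223333:loadbalancer/net/catalog-nlb/abc123"
    ],
)
# The returned ID is referenced later when the integration is configured with
# connectionType=VPC_LINK.
print(vpc_link["id"])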
A company stores raw collected data in an Amazon S3 bucket. The data is used for several types of analytics on behalf of the company's
customers. The type of analytics requested determines the access pattern on the S3 objects.
The company cannot predict or control the access pattern. The company wants to reduce its S3 costs.
A. Use S3 replication to transition infrequently accessed objects to S3 Standard-Infrequent Access (S3 Standard-IA)
B. Use S3 Lifecycle rules to transition objects from S3 Standard to Standard-Infrequent Access (S3 Standard-IA)
D. Use S3 Inventory to identify and transition objects that have not been accessed from S3 Standard to S3 Intelligent-Tiering
Correct Answer: C
A company has applications hosted on Amazon EC2 instances with IPv6 addresses. The applications must initiate communications with other
external applications using the internet. However the company’s security policy states that any external service cannot initiate a connection to the
EC2 instances.
A. Create a NAT gateway and make it the destination of the subnet's route table
B. Create an internet gateway and make it the destination of the subnet's route table
C. Create a virtual private gateway and make it the destination of the subnet's route table
D. Create an egress-only internet gateway and make it the destination of the subnet's route table
Correct Answer: D
A company is creating an application that runs on containers in a VPC. The application stores and accesses data in an Amazon S3 bucket. During
the development phase, the application will store and access 1 TB of data in Amazon S3 each day. The company wants to minimize costs and
wants to prevent traffic from traversing the internet whenever possible.
C. Create a gateway VPC endpoint for Amazon S3. Associate this endpoint with all route tables in the VPC
D. Create an interface endpoint for Amazon S3 in the VPC. Associate this endpoint with all route tables in the VPC
Correct Answer: C
Minimize Internet Traffic: By creating a gateway VPC endpoint for Amazon S3 and associating it with all route tables in the VPC, the traffic
between the VPC and Amazon S3 will be kept within the AWS network. This helps in minimizing data transfer costs and prevents the need
for traffic to traverse the internet.
Cost-Effective: With a gateway VPC endpoint, the data transfer between the application running in the VPC and the S3 bucket stays within
the AWS network, reducing the need for data transfer across the internet. This can result in cost savings, especially when dealing with
large amounts of data.
upvoted 4 times
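A minimal boto3 sketch of option C, assuming hypothetical VPC and route table IDs:

import boto3

ec2 = boto3.client("ec2")

# Gateway VPC endpoint for S3: traffic to S3 stays on the AWS network and the
# gateway endpoint itself has no hourly or data processing charge.
ec2.create_vpc_endpoint(
    VpcEndpointType="Gateway",
    VpcId="vpc-0abc1234567890def",                 # hypothetical VPC ID
    ServiceName="com.amazonaws.us-east-1.s3",
    RouteTableIds=[                                # all route tables in the VPC
        "rtb-0123456789abcdef0",
        "rtb-0fedcba9876543210",
    ],
)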
A company has a mobile chat application with a data store based in Amazon DynamoDB. Users would like new messages to be read with as little
latency as possible. A solutions architect needs to design an optimal solution that requires minimal application changes.
A. Configure Amazon DynamoDB Accelerator (DAX) for the new messages table. Update the code to use the DAX endpoint.
B. Add DynamoDB read replicas to handle the increased read load. Update the application to point to the read endpoint for the read replicas.
C. Double the number of read capacity units for the new messages table in DynamoDB. Continue to use the existing DynamoDB endpoint.
D. Add an Amazon ElastiCache for Redis cache to the application stack. Update the application to point to the Redis cache endpoint instead of
DynamoDB.
Correct Answer: A
Minimal Application Changes: With DAX, the application code can be updated to use the DAX endpoint instead of the standard DynamoDB
endpoint. This change is relatively minimal and does not require extensive modifications to the application's data access logic.
Low Latency: DAX caches frequently accessed data in memory, allowing subsequent read requests for the same data to be served with
minimal latency. This ensures that new messages can be read by users with minimal delay.
upvoted 2 times
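As a rough, non-authoritative sketch of option A, a DAX cluster could be provisioned with boto3 along these lines; the cluster name, node type, role ARN, and subnet group are hypothetical placeholders:

import boto3

dax = boto3.client("dax")

# Provision a DAX cluster in front of the messages table.
dax.create_cluster(
    ClusterName="messages-dax",
    NodeType="dax.r5.large",
    ReplicationFactor=3,   # one primary node plus two read replicas
    IamRoleArn="arn:aws:iam::111122223333:role/DaxToDynamoDBRole",
    SubnetGroupName="messages-dax-subnets",
)
# The only application change is pointing the DynamoDB client at the DAX
# cluster endpoint (for example via the amazon-dax-client library) instead of
# the standard DynamoDB endpoint.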
A company hosts a website on Amazon EC2 instances behind an Application Load Balancer (ALB). The website serves static content. Website
traffic is increasing, and the company is concerned about a potential increase in cost.
B. Create an Amazon ElastiCache cluster. Connect the ALB to the ElastiCache cluster to serve cached files
C. Create an AWS WAF web ACL and associate it with the ALB. Add a rule to the web ACL to cache static files
D. Create a second ALB in an alternative AWS Region. Route user traffic to the closest Region to minimize data transfer costs
Correct Answer: A
Caching Static Files: Since the website serves static content, caching these files at CloudFront edge locations can significantly reduce the
number of requests forwarded to the EC2 instances. This helps to lower the overall cost by offloading traffic from the instances and
reducing the data transfer costs.
upvoted 3 times
A company has multiple VPCs across AWS Regions to support and run workloads that are isolated from workloads in other Regions. Because of a
recent application launch requirement, the company’s VPCs must communicate with all other VPCs across all Regions.
Which solution will meet these requirements with the LEAST amount of administrative effort?
A. Use VPC peering to manage VPC communication in a single Region. Use VPC peering across Regions to manage VPC communications.
B. Use AWS Direct Connect gateways across all Regions to connect VPCs across regions and manage VPC communications.
C. Use AWS Transit Gateway to manage VPC communication in a single Region and Transit Gateway peering across Regions to manage VPC
communications.
D. Use AWS PrivateLink across all Regions to connect VPCs across Regions and manage VPC communications
Correct Answer: C
AWS Transit Gateway is a network hub that you can use to connect your VPCs and on-premises networks. It provides a single point of
control for managing your network traffic, and it can help you to reduce the number of connections that you need to manage.
Transit Gateway peering allows you to connect two Transit Gateways in different Regions. This can help you to create a global network that
spans multiple Regions.
To use Transit Gateway to manage VPC communication in a single Region, you would create a Transit Gateway in each Region. You would
then attach your VPCs to the Transit Gateway.
To use Transit Gateway peering to manage VPC communication across Regions, you would create a Transit Gateway peering connection
between the Transit Gateways in each Region.
upvoted 7 times
Transit Gateway Peering: Transit Gateway supports peering connections across AWS Regions, allowing you to establish connectivity
between VPCs in different Regions without the need for complex VPC peering configurations. This simplifies the management of VPC
communications across Regions.
upvoted 4 times
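A hedged boto3 sketch of the steps described above, shown for two Regions with hypothetical VPC, subnet, and account IDs:

import boto3

use1 = boto3.client("ec2", region_name="us-east-1")
euw1 = boto3.client("ec2", region_name="eu-west-1")

# One transit gateway per Region.
tgw_use1 = use1.create_transit_gateway(Description="hub us-east-1")["TransitGateway"]
tgw_euw1 = euw1.create_transit_gateway(Description="hub eu-west-1")["TransitGateway"]

# Attach a VPC in us-east-1 to its Regional transit gateway.
use1.create_transit_gateway_vpc_attachment(
    TransitGatewayId=tgw_use1["TransitGatewayId"],
    VpcId="vpc-0abc1234567890def",
    SubnetIds=["subnet-0123456789abcdef0"],
)

# Peer the two transit gateways across Regions (same account in this sketch).
use1.create_transit_gateway_peering_attachment(
    TransitGatewayId=tgw_use1["TransitGatewayId"],
    PeerTransitGatewayId=tgw_euw1["TransitGatewayId"],
    PeerAccountId="111122223333",
    PeerRegion="eu-west-1",
)
# The peering attachment must then be accepted in the peer Region, and static
# routes added to each transit gateway route table.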
Question #475 Topic 1
A company is designing a containerized application that will use Amazon Elastic Container Service (Amazon ECS). The application needs to
access a shared file system that is highly durable and can recover data to another AWS Region with a recovery point objective (RPO) of 8 hours.
The file system needs to provide a mount target in each Availability Zone within a Region.
A solutions architect wants to use AWS Backup to manage the replication to another Region.
C. Amazon Elastic File System (Amazon EFS) with the Standard storage class
Correct Answer: C
https://aws.amazon.com/efs/faq/#:~:text=What%20is%20Amazon%20EFS%20Replication%3F
https://aws.amazon.com/fsx/netapp-ontap/faqs/#:~:text=How%20do%20I%20configure%20cross%2Dregion%20replication%20for%20the%20data%20in%20my%20file%20system%3F
upvoted 1 times
AWS Backup can manage replication of EFS to another region as mentioned below
https://docs.aws.amazon.com/efs/latest/ug/awsbackup.html
upvoted 1 times
During a disaster or fault within an AZ affecting all copies of your data, you might experience loss of data that has not been replicated
using Amazon EFS Replication. EFS Replication is designed to meet a recovery point objective (RPO) and recovery time objective (RTO) of
minutes. You can use AWS Backup to store additional copies of your file system data and restore them to a new file system in an AZ or
Region of your choice. Amazon EFS file system backup data created and managed by AWS Backup is replicated to three AZs and is
designed for 99.999999999% (11 nines) durability.
upvoted 1 times
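For illustration, an AWS Backup plan along the lines described above might look like this boto3 sketch; the schedule meets an 8-hour RPO, and the vault names, role ARN, and file system ARN are hypothetical placeholders:

import boto3

backup = boto3.client("backup", region_name="us-east-1")

# Back up the EFS file system every 8 hours and copy each recovery point to a
# vault in another Region.
plan = backup.create_backup_plan(
    BackupPlan={
        "BackupPlanName": "efs-cross-region",
        "Rules": [
            {
                "RuleName": "every-8-hours",
                "TargetBackupVaultName": "primary-vault",
                "ScheduleExpression": "cron(0 0/8 * * ? *)",
                "CopyActions": [
                    {"DestinationBackupVaultArn":
                     "arn:aws:backup:eu-west-1:111122223333:backup-vault:dr-vault"}
                ],
            }
        ],
    }
)

# Assign the EFS file system to the plan.
backup.create_backup_selection(
    BackupPlanId=plan["BackupPlanId"],
    BackupSelection={
        "SelectionName": "efs-filesystem",
        "IamRoleArn": "arn:aws:iam::111122223333:role/service-role/AWSBackupDefaultServiceRole",
        "Resources": [
            "arn:aws:elasticfilesystem:us-east-1:111122223333:file-system/fs-0123456789abcdef0"
        ],
    },
)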
A company is expecting rapid growth in the near future. A solutions architect needs to configure existing users and grant permissions to new
users on AWS. The solutions architect has decided to create IAM groups. The solutions architect will add the new users to IAM groups based on
department.
Which additional action is the MOST secure way to grant permissions to the new users?
B. Create IAM roles that have least privilege permission. Attach the roles to the IAM groups
C. Create an IAM policy that grants least privilege permission. Attach the policy to the IAM groups
D. Create IAM roles. Associate the roles with a permissions boundary that defines the maximum permissions
Correct Answer: C
https://docs.aws.amazon.com/IAM/latest/UserGuide/id_groups_manage_attach-policy.html
A group requires permissions to list an Amazon S3 bucket and delete objects from that bucket. An administrator has created the following IAM
policy to provide access to the bucket and applied that policy to the group. The group is not able to delete objects in the bucket. The company
follows least-privilege access rules.
Which statement should a solutions architect add to the policy to correct bucket access?
A. B. C. D. (The four candidate policy statements were shown as images in the original question and are not reproduced here.)
Correct Answer: C
A law firm needs to share information with the public. The information includes hundreds of files that must be publicly readable. Modifications or
deletions of the files by anyone before a designated future date are prohibited.
Which solution will meet these requirements in the MOST secure way?
A. Upload all files to an Amazon S3 bucket that is configured for static website hosting. Grant read-only IAM permissions to any AWS
principals that access the S3 bucket until the designated date.
B. Create a new Amazon S3 bucket with S3 Versioning enabled. Use S3 Object Lock with a retention period in accordance with the designated
date. Configure the S3 bucket for static website hosting. Set an S3 bucket policy to allow read-only access to the objects.
C. Create a new Amazon S3 bucket with S3 Versioning enabled. Configure an event trigger to run an AWS Lambda function in case of object
modification or deletion. Configure the Lambda function to replace the objects with the original versions from a private S3 bucket.
D. Upload all files to an Amazon S3 bucket that is configured for static website hosting. Select the folder that contains the files. Use S3 Object
Lock with a retention period in accordance with the designated date. Grant read-only IAM permissions to any AWS principals that access the
S3 bucket.
Correct Answer: B
A company is making a prototype of the infrastructure for its new website by manually provisioning the necessary infrastructure. This
infrastructure includes an Auto Scaling group, an Application Load Balancer and an Amazon RDS database. After the configuration has been
thoroughly validated, the company wants the capability to immediately deploy the infrastructure for development and production use in two
Availability Zones in an automated fashion.
A. Use AWS Systems Manager to replicate and provision the prototype infrastructure in two Availability Zones
B. Define the infrastructure as a template by using the prototype infrastructure as a guide. Deploy the infrastructure with AWS CloudFormation.
C. Use AWS Config to record the inventory of resources that are used in the prototype infrastructure. Use AWS Config to deploy the prototype
infrastructure into two Availability Zones.
D. Use AWS Elastic Beanstalk and configure it to use an automated reference to the prototype infrastructure to automatically deploy new
environments in two Availability Zones.
Correct Answer: B
In this case, the solutions architect should define the infrastructure as a template by using the prototype infrastructure as a guide. The
template should include resources for an Auto Scaling group, an Application Load Balancer, and an Amazon RDS database. Once the
template is created, the solutions architect can use CloudFormation to deploy the infrastructure in two Availability Zones.
upvoted 1 times
A business application is hosted on Amazon EC2 and uses Amazon S3 for encrypted object storage. The chief information security officer has
directed that no application traffic between the two services should traverse the public internet.
Which capability should the solutions architect use to meet the compliance requirements?
B. VPC endpoint
C. Private subnet
Correct Answer: B
By creating a VPC endpoint for Amazon S3, the traffic between your EC2 instances and S3 will stay within the AWS network and won't
traverse the public internet. This provides a more secure and compliant solution, as the data transfer remains within the private network
boundaries.
upvoted 4 times
Question #481 Topic 1
A company hosts a three-tier web application in the AWS Cloud. A Multi-AZ Amazon RDS for MySQL server forms the database layer, and Amazon
ElastiCache forms the cache layer. The company wants a caching strategy that adds or updates data in the cache when a customer adds an item
to the database. The data in the cache must always match the data in the database.
Correct Answer: B
The TTL (time to live) caching strategy (option C) involves setting an expiration time for cached data. It is useful when the data can be
considered valid for a specific period, but it does not guarantee that the data in the cache is always in sync with the database.
AWS AppConfig (option D) is a service that helps you deploy and manage application configurations. It is not designed for keeping a cache
layer synchronized with a database.
upvoted 10 times
A company wants to migrate 100 GB of historical data from an on-premises location to an Amazon S3 bucket. The company has a 100 megabits
per second (Mbps) internet connection on premises. The company needs to encrypt the data in transit to the S3 bucket. The company will store
new data directly in Amazon S3.
Which solution will meet these requirements with the LEAST operational overhead?
A. Use the s3 sync command in the AWS CLI to move the data directly to an S3 bucket
B. Use AWS DataSync to migrate the data from the on-premises location to an S3 bucket
D. Set up an IPsec VPN from the on-premises location to AWS. Use the s3 cp command in the AWS CLI to move the data directly to an S3
bucket
Correct Answer: B
By using AWS DataSync, the company can easily migrate the 100 GB of historical data from their on-premises location to an S3 bucket.
DataSync will handle the encryption of data in transit and ensure secure transfer.
upvoted 5 times
Option A, using the s3 sync command in the AWS CLI to move the data directly to an S3 bucket, would require more operational overhead
as the company would need to manage the encryption of data in transit themselves. Option D, setting up an IPsec VPN from the on-
premises location to AWS, would also require more operational overhead and would be overkill for this scenario. Option C, using AWS
Snowball, could work but would require more time and resources to order and set up the physical device.
upvoted 4 times
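A rough boto3 sketch of the DataSync setup described above, assuming an NFS source share; the hostname, paths, and ARNs are hypothetical placeholders:

import boto3

datasync = boto3.client("datasync")

# Source: the on-premises NFS share, reached through a DataSync agent.
source = datasync.create_location_nfs(
    ServerHostname="files.corp.example.com",
    Subdirectory="/export/historical",
    OnPremConfig={"AgentArns": [
        "arn:aws:datasync:us-east-1:111122223333:agent/agent-0123456789abcdef0"
    ]},
)

# Destination: the target S3 bucket.
destination = datasync.create_location_s3(
    S3BucketArn="arn:aws:s3:::historical-data-bucket",
    S3Config={"BucketAccessRoleArn": "arn:aws:iam::111122223333:role/DataSyncS3Role"},
)

# DataSync encrypts data in transit with TLS by default, so no VPN is needed.
task = datasync.create_task(
    SourceLocationArn=source["LocationArn"],
    DestinationLocationArn=destination["LocationArn"],
    Name="historical-data-migration",
)
datasync.start_task_execution(TaskArn=task["TaskArn"])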
A company containerized a Windows job that runs on .NET 6 Framework under a Windows container. The company wants to run this job in the
AWS Cloud. The job runs every 10 minutes. The job’s runtime varies between 1 minute and 3 minutes.
A. Create an AWS Lambda function based on the container image of the job. Configure Amazon EventBridge to invoke the function every 10
minutes.
B. Use AWS Batch to create a job that uses AWS Fargate resources. Configure the job scheduling to run every 10 minutes.
C. Use Amazon Elastic Container Service (Amazon ECS) on AWS Fargate to run the job. Create a scheduled task based on the container image
of the job to run every 10 minutes.
D. Use Amazon Elastic Container Service (Amazon ECS) on AWS Fargate to run the job. Create a standalone task based on the container
image of the job. Use Windows task scheduler to run the job every
10 minutes.
Correct Answer: A
https://docs.aws.amazon.com/lambda/latest/dg/images-create.html
upvoted 2 times
Using Amazon ECS scheduled tasks on Fargate eliminates the need to provision EC2 resources. You pay only for the duration the task runs.
Scheduled tasks handle scheduling the jobs and scaling resources automatically. This is lower cost than managing your own scaling via
Lambda or Batch.
ECS also supports Windows containers natively unlike Lambda (option A).
Option D still requires provisioning and paying for full time EC2 resources to run a task scheduler even when tasks are not running.
upvoted 1 times
cd93 1 month, 1 week ago
August 2023: AWS Batch now supports Windows containers.
https://docs.aws.amazon.com/batch/latest/userguide/fargate.html#when-to-use-fargate
upvoted 1 times
A company wants to move from many standalone AWS accounts to a consolidated, multi-account architecture. The company plans to create many
new AWS accounts for different business units. The company needs to authenticate access to these AWS accounts by using a centralized
corporate directory service.
Which combination of actions should a solutions architect recommend to meet these requirements? (Choose two.)
A. Create a new organization in AWS Organizations with all features turned on. Create the new AWS accounts in the organization.
B. Set up an Amazon Cognito identity pool. Configure AWS IAM Identity Center (AWS Single Sign-On) to accept Amazon Cognito
authentication.
C. Configure a service control policy (SCP) to manage the AWS accounts. Add AWS IAM Identity Center (AWS Single Sign-On) to AWS Directory
Service.
D. Create a new organization in AWS Organizations. Configure the organization's authentication mechanism to use AWS Directory Service
directly.
E. Set up AWS IAM Identity Center (AWS Single Sign-On) in the organization. Configure IAM Identity Center, and integrate it with the company's
corporate directory service.
Correct Answer: AE
E. Setting up AWS IAM Identity Center (AWS Single Sign-On) within the organization enables you to integrate it with the company's
corporate directory service. This integration allows for centralized authentication, where users can sign in using their corporate
credentials and access the AWS accounts within the organization.
Together, these actions create a centralized, multi-account architecture that leverages AWS Organizations for account management and
AWS IAM Identity Center (AWS Single Sign-On) for authentication and access control.
upvoted 6 times
E) Integrating AWS IAM Identity Center (AWS SSO) with the company's corporate directory enables federated single sign-on. Users can log
in once to access accounts and resources across AWS.
Together, Organizations and IAM Identity Center provide consolidated management and authentication for multiple accounts using
existing corporate credentials.
upvoted 1 times
https://aws.amazon.com/iam/identity-center/#:~:text=AWS%20IAM%20Identity%20Center%20(successor%20to%20AWS%20Single%20Sign%2DOn)%20helps%20you%20securely%20create%20or%20connect%20your%20workforce%20identities%20and%20manage%20their%20access%20centrally%20across%20AWS%20accounts%20and%20applications.
upvoted 1 times
nosense 4 months, 2 weeks ago
AE is right.
upvoted 1 times
Question #485 Topic 1
A company is looking for a solution that can store video archives in AWS from old news footage. The company needs to minimize costs and will
rarely need to restore these files. When the files are needed, they must be available in a maximum of five minutes.
A. Store the video archives in Amazon S3 Glacier and use Expedited retrievals.
B. Store the video archives in Amazon S3 Glacier and use Standard retrievals.
D. Store the video archives in Amazon S3 One Zone-Infrequent Access (S3 One Zone-IA).
Correct Answer: C
A company is building a three-tier application on AWS. The presentation tier will serve a static website The logic tier is a containerized application.
This application will store data in a relational database. The company wants to simplify deployment and to reduce operational costs.
A. Use Amazon S3 to host static content. Use Amazon Elastic Container Service (Amazon ECS) with AWS Fargate for compute power. Use a
managed Amazon RDS cluster for the database.
B. Use Amazon CloudFront to host static content. Use Amazon Elastic Container Service (Amazon ECS) with Amazon EC2 for compute power.
Use a managed Amazon RDS cluster for the database.
C. Use Amazon S3 to host static content. Use Amazon Elastic Kubernetes Service (Amazon EKS) with AWS Fargate for compute power. Use a
managed Amazon RDS cluster for the database.
D. Use Amazon EC2 Reserved Instances to host static content. Use Amazon Elastic Kubernetes Service (Amazon EKS) with Amazon EC2 for
compute power. Use a managed Amazon RDS cluster for the database.
Correct Answer: A
Amazon ECS with AWS Fargate eliminates the need to manage the underlying infrastructure. It allows you to run containerized
applications without provisioning or managing EC2 instances. This reduces operational overhead and provides scalability.
By using a managed Amazon RDS cluster for the database, you can offload the management tasks such as backups, patching, and
monitoring to AWS. This reduces the operational burden and ensures high availability and durability of the database.
upvoted 4 times
Question #487 Topic 1
A company seeks a storage solution for its application. The solution must be highly available and scalable. The solution also must function as a
file system, be mountable by multiple Linux instances in AWS and on premises through native protocols, and have no minimum size requirements.
The company has set up a Site-to-Site VPN for access from its on-premises network to its VPC.
C. Amazon Elastic File System (Amazon EFS) with multiple mount targets
D. Amazon Elastic File System (Amazon EFS) with a single mount target and multiple access points
Correct Answer: C
A. Amazon FSx Multi-AZ deployments: Amazon FSx is a fully managed file system service, but Amazon FSx for Windows File Server serves file
shares over SMB rather than NFS, so it is not the natural fit for Linux clients that need native protocols.
B. Amazon Elastic Block Store (Amazon EBS) Multi-Attach volumes: Amazon EBS is a block storage service that provides durable, block-level
storage volumes for use with Amazon EC2 instances. Multi-Attach volumes can be attached to multiple EC2 instances at the same time, but EBS
is not a shared file system and cannot be mounted by on-premises Linux servers over native protocols such as NFS.
D. Amazon Elastic File System (Amazon EFS) with a single mount target and multiple access points: a mount target provides access from a
single Availability Zone, so one mount target does not give highly available access from every Availability Zone. Access points are
application-specific entry points into the file system, not a replacement for per-AZ mount targets.
upvoted 5 times
A 4-year-old media company is using the AWS Organizations all features feature set to organize its AWS accounts. According to the company's
finance team, the billing information on the member accounts must not be accessible to anyone, including the root user of the member accounts.
A. Add all finance team users to an IAM group. Attach an AWS managed policy named Billing to the group.
B. Attach an identity-based policy to deny access to the billing information to all users, including the root user.
C. Create a service control policy (SCP) to deny access to the billing information. Attach the SCP to the root organizational unit (OU).
D. Convert from the Organizations all features feature set to the Organizations consolidated billing feature set.
Correct Answer: C
Denying Access to Billing Information: By creating an SCP and attaching it to the root OU, you can explicitly deny access to billing
information for all accounts within the organization. SCPs can be used to restrict access to various AWS services and actions, including
billing-related services.
Granular Control: SCPs enable you to define specific permissions and restrictions at the organizational unit level. By denying access to
billing information at the root OU, you can ensure that no member accounts, including root users, have access to the billing information.
upvoted 3 times
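As a non-authoritative sketch of option C, the SCP could be created and attached to the root OU with boto3 along these lines (the aws-portal actions shown are one common way to express the deny):

import json

import boto3

org = boto3.client("organizations")

# Deny the billing console actions for every principal in member accounts,
# including their root users. SCPs do not affect the management account.
scp = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Deny",
            "Action": [
                "aws-portal:ViewBilling",
                "aws-portal:ViewUsage",
                "aws-portal:ViewPaymentMethods",
            ],
            "Resource": "*",
        }
    ],
}

policy = org.create_policy(
    Name="DenyBillingAccess",
    Description="Block access to billing information in member accounts",
    Type="SERVICE_CONTROL_POLICY",
    Content=json.dumps(scp),
)

root_id = org.list_roots()["Roots"][0]["Id"]
org.attach_policy(PolicyId=policy["Policy"]["PolicySummary"]["Id"], TargetId=root_id)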
An ecommerce company runs an application in the AWS Cloud that is integrated with an on-premises warehouse solution. The company uses
Amazon Simple Notification Service (Amazon SNS) to send order messages to an on-premises HTTPS endpoint so the warehouse application can
process the orders. The local data center team has detected that some of the order messages were not received.
A solutions architect needs to retain messages that are not delivered and analyze the messages for up to 14 days.
Which solution will meet these requirements with the LEAST development effort?
A. Configure an Amazon SNS dead letter queue that has an Amazon Kinesis Data Stream target with a retention period of 14 days.
B. Add an Amazon Simple Queue Service (Amazon SQS) queue with a retention period of 14 days between the application and Amazon SNS.
C. Configure an Amazon SNS dead letter queue that has an Amazon Simple Queue Service (Amazon SQS) target with a retention period of 14
days.
D. Configure an Amazon SNS dead letter queue that has an Amazon DynamoDB target with a TTL attribute set for a retention period of 14
days.
Correct Answer: C
In SQS, DLQs store the messages that failed to be processed by your consumer application. This failure mode can happen when producers
and consumers fail to interpret aspects of the protocol that they use to communicate. In that case, the consumer receives the message
from the queue, but fails to process it, as the message doesn’t have the structure or content that the consumer expects. The consumer
can’t delete the message from the queue either. After exhausting the receive count in the redrive policy, SQS can sideline the message to
the DLQ. For more information, see Amazon SQS Dead-Letter Queues.
https://aws.amazon.com/blogs/compute/designing-durable-serverless-apps-with-dlqs-for-amazon-sns-amazon-sqs-aws-lambda/
upvoted 2 times
TariqKipkemei 3 months, 1 week ago
C is best to handle this requirement, although it is good to note that a dead-letter queue is itself an SQS queue.
"A dead-letter queue is an Amazon SQS queue that an Amazon SNS subscription can target for messages that can't be delivered to
subscribers successfully. Messages that can't be delivered due to client errors or server errors are held in the dead-letter queue for further
analysis or reprocessing."
https://docs.aws.amazon.com/sns/latest/dg/sns-dead-letter-queues.html#:~:text=A%20dead%2Dletter%20queue%20is%20an%20Amazon%20SQS%20queue
upvoted 1 times
Amazon SQS is a fully managed message queuing service that enables you to decouple and scale microservices, distributed systems, and
serverless applications. Amazon SQS queues can be configured to have a retention period, which is the amount of time that messages will
be kept in the queue before they are deleted.
To meet the requirements of the company, you can configure an Amazon SNS dead letter queue that has an Amazon SQS target with a
retention period of 14 days. This will ensure that any messages that are not delivered to the on-premises warehouse application will be
stored in the Amazon SQS queue for up to 14 days. The company can then analyze the messages in the Amazon SQS queue to determine
why they were not delivered.
upvoted 1 times
A dead-letter queue is an Amazon SQS queue that an Amazon SNS subscription can target for messages that can't be delivered to
subscribers successfully.
upvoted 1 times
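A minimal boto3 sketch of option C as described above; the subscription ARN and queue name are hypothetical placeholders, and the queue policy must separately allow SNS to send messages to the queue:

import json

import boto3

sqs = boto3.client("sqs")
sns = boto3.client("sns")

# Dead-letter queue with the maximum retention of 14 days (1,209,600 seconds).
queue_url = sqs.create_queue(
    QueueName="orders-dlq",
    Attributes={"MessageRetentionPeriod": "1209600"},
)["QueueUrl"]
queue_arn = sqs.get_queue_attributes(
    QueueUrl=queue_url, AttributeNames=["QueueArn"]
)["Attributes"]["QueueArn"]

# Point the existing HTTPS subscription at the dead-letter queue.
sns.set_subscription_attributes(
    SubscriptionArn="arn:aws:sns:us-east-1:111122223333:orders:11111111-2222-3333-4444-555555555555",
    AttributeName="RedrivePolicy",
    AttributeValue=json.dumps({"deadLetterTargetArn": queue_arn}),
)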
A gaming company uses Amazon DynamoDB to store user information such as geographic location, player data, and leaderboards. The company
needs to configure continuous backups to an Amazon S3 bucket with a minimal amount of coding. The backups must not affect availability of the
application and must not affect the read capacity units (RCUs) that are defined for the table.
A. Use an Amazon EMR cluster. Create an Apache Hive job to back up the data to Amazon S3.
B. Export the data directly from DynamoDB to Amazon S3 with continuous backups. Turn on point-in-time recovery for the table.
C. Configure Amazon DynamoDB Streams. Create an AWS Lambda function to consume the stream and export the data to an Amazon S3
bucket.
D. Create an AWS Lambda function to export the data from the database tables to Amazon S3 on a regular basis. Turn on point-in-time
recovery for the table.
Correct Answer: B
https://aws.amazon.com/blogs/aws/new-export-amazon-dynamodb-table-data-to-data-lake-amazon-s3/
upvoted 4 times
Export to Amazon S3: With continuous backups enabled, DynamoDB can directly export the backups to an Amazon S3 bucket. This
eliminates the need for custom coding to export the data.
Minimal Coding: Option B requires the least amount of coding effort as continuous backups and the export to Amazon S3 functionality are
built-in features of DynamoDB.
No Impact on Availability and RCUs: Enabling continuous backups and exporting data to Amazon S3 does not affect the availability of your
application or the read capacity units (RCUs) defined for the table. These operations happen in the background and do not impact the
table's performance or consume additional RCUs.
upvoted 2 times
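For reference, a boto3 sketch of option B; the table name, ARN, and bucket are hypothetical placeholders:

import boto3

dynamodb = boto3.client("dynamodb")

# Turn on point-in-time recovery (continuous backups) for the table.
dynamodb.update_continuous_backups(
    TableName="player-data",
    PointInTimeRecoverySpecification={"PointInTimeRecoveryEnabled": True},
)

# Export the table to S3. The export reads from the continuous backup, not the
# live table, so it consumes no RCUs and does not affect availability.
dynamodb.export_table_to_point_in_time(
    TableArn="arn:aws:dynamodb:us-east-1:111122223333:table/player-data",
    S3Bucket="player-data-exports",
    ExportFormat="DYNAMODB_JSON",
)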
A solutions architect is designing an asynchronous application to process credit card data validation requests for a bank. The application must be
secure and be able to process each request at least once.
A. Use AWS Lambda event source mapping. Set Amazon Simple Queue Service (Amazon SQS) standard queues as the event source. Use AWS
Key Management Service (SSE-KMS) for encryption. Add the kms:Decrypt permission for the Lambda execution role.
B. Use AWS Lambda event source mapping. Use Amazon Simple Queue Service (Amazon SQS) FIFO queues as the event source. Use SQS
managed encryption keys (SSE-SQS) for encryption. Add the encryption key invocation permission for the Lambda function.
C. Use the AWS Lambda event source mapping. Set Amazon Simple Queue Service (Amazon SQS) FIFO queues as the event source. Use AWS
KMS keys (SSE-KMS). Add the kms:Decrypt permission for the Lambda execution role.
D. Use the AWS Lambda event source mapping. Set Amazon Simple Queue Service (Amazon SQS) standard queues as the event source. Use
AWS KMS keys (SSE-KMS) for encryption. Add the encryption key invocation permission for the Lambda function.
Correct Answer: A
Using KMS keys (Options C and D) requires providing the Lambda role with decrypt permissions, adding complexity.
SQS FIFO queues with SSE-SQS encryption provide orderly, secure, server-side message processing that Lambda can consume without
needing to manage decryption. This is the most efficient and cost-effective approach.
upvoted 3 times
Amazon Simple Queue Service (SQS) FIFO queues are a good choice for this application because they guarantee that messages are
processed in the order in which they are received. This is important for credit card data validation because it ensures that fraudulent
transactions are not processed before legitimate transactions.
SQS managed encryption keys (SSE-SQS) are a good choice for encrypting the messages in the SQS queue because they are free to use.
AWS Key Management Service (KMS) keys (SSE-KMS) are also a good choice for encrypting the messages, but they do incur a cost.
upvoted 2 times
I would still go with standard queues because of the keyword "at least once" (FIFO queues provide exactly-once processing). That leaves us with A and D.
I believe the Lambda function only needs kms:Decrypt, so I would choose A.
upvoted 3 times
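A hedged boto3 sketch of the queue-plus-event-source-mapping pattern discussed above, shown with a standard queue and SSE-SQS; the queue and function names are hypothetical placeholders:

import boto3

sqs = boto3.client("sqs")
lambda_client = boto3.client("lambda")

# Standard queue with SQS-managed server-side encryption (SSE-SQS). If SSE-KMS
# were used instead, the Lambda execution role would also need kms:Decrypt.
queue_url = sqs.create_queue(
    QueueName="card-validation-requests",
    Attributes={"SqsManagedSseEnabled": "true"},
)["QueueUrl"]
queue_arn = sqs.get_queue_attributes(
    QueueUrl=queue_url, AttributeNames=["QueueArn"]
)["Attributes"]["QueueArn"]

# Event source mapping: Lambda polls the queue and invokes the function with
# batches of messages, giving at-least-once processing.
lambda_client.create_event_source_mapping(
    EventSourceArn=queue_arn,
    FunctionName="validate-card-request",
    BatchSize=10,
)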
A company has multiple AWS accounts for development work. Some staff consistently use oversized Amazon EC2 instances, which causes the
company to exceed the yearly budget for the development accounts. The company wants to centrally restrict the creation of AWS resources in
these accounts.
Which solution will meet these requirements with the LEAST development effort?
A. Develop AWS Systems Manager templates that use an approved EC2 creation process. Use the approved Systems Manager templates to
provision EC2 instances.
B. Use AWS Organizations to organize the accounts into organizational units (OUs). Define and attach a service control policy (SCP) to control
the usage of EC2 instance types.
C. Configure an Amazon EventBridge rule that invokes an AWS Lambda function when an EC2 instance is created. Stop disallowed EC2
instance types.
D. Set up AWS Service Catalog products for the staff to create the allowed EC2 instance types. Ensure that staff can deploy EC2 instances
only by using the Service Catalog products.
Correct Answer: B
Option B - Using Organizations service control policies - requires no custom development. It involves:
Organizing accounts into OUs
Creating an SCP that defines allowed/disallowed EC2 instance types
Attaching the SCP to the appropriate OUs
This is a native AWS service with a simple UI for defining and managing policies. No coding or resource creation is needed.
So option B, using Organizations service control policies, will meet the requirements with the least development effort.
upvoted 3 times
cloudenthusiast 4 months, 2 weeks ago
Selected Answer: B
AWS Organizations: AWS Organizations is a service that helps you centrally manage multiple AWS accounts. It enables you to group
accounts into organizational units (OUs) and apply policies across those accounts.
Service Control Policies (SCPs): SCPs in AWS Organizations allow you to define fine-grained permissions and restrictions at the account or
OU level. By attaching an SCP to the development accounts, you can control the creation and usage of EC2 instance types.
Least Development Effort: Option B requires minimal development effort as it leverages the built-in features of AWS Organizations and
SCPs. You can define the SCP to restrict the use of oversized EC2 instance types and apply it to the appropriate OUs or accounts.
upvoted 3 times
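As a rough illustration of the SCP that option B describes, a policy document restricting EC2 instance types might look like the following; the allowed types are hypothetical examples, and the policy is attached to the development OU the same way as any other SCP:

# Deny ec2:RunInstances for any instance type outside an approved list.
dev_instance_type_scp = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "DenyOversizedInstances",
            "Effect": "Deny",
            "Action": "ec2:RunInstances",
            "Resource": "arn:aws:ec2:*:*:instance/*",
            "Condition": {
                "StringNotEquals": {
                    "ec2:InstanceType": ["t3.micro", "t3.small", "t3.medium"]
                }
            },
        }
    ],
}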
A company wants to use artificial intelligence (AI) to determine the quality of its customer service calls. The company currently manages calls in
four different languages, including English. The company will offer new languages in the future. The company does not have the resources to
regularly maintain machine learning (ML) models.
The company needs to create written sentiment analysis reports from the customer service call recordings. The customer service call recording
text must be translated into English.
D. Use Amazon Transcribe to convert the audio recordings in any language into text.
A company uses Amazon EC2 instances to host its internal systems. As part of a deployment operation, an administrator tries to use the AWS CLI
to terminate an EC2 instance. However, the administrator receives a 403 (Access Denied) error message.
The administrator is using an IAM role that has the following IAM policy attached:
C. The "Action" field does not grant the actions that are required to terminate the EC2 instance.
D. The request to terminate the EC2 instance does not originate from the CIDR blocks 192.0.2.0/24 or 203.0.113.0/24.
Correct Answer: D
A company is conducting an internal audit. The company wants to ensure that the data in an Amazon S3 bucket that is associated with the
company’s AWS Lake Formation data lake does not contain sensitive customer or employee data. The company wants to discover personally
identifiable information (PII) or financial information, including passport numbers and credit card numbers.
A. Configure AWS Audit Manager on the account. Select the Payment Card Industry Data Security Standards (PCI DSS) for auditing.
B. Configure Amazon S3 Inventory on the S3 bucket Configure Amazon Athena to query the inventory.
C. Configure Amazon Macie to run a data discovery job that uses managed identifiers for the required data types.
Correct Answer: C
A company uses on-premises servers to host its applications. The company is running out of storage capacity. The applications use both block
storage and NFS storage. The company needs a high-performing solution that supports local caching without re-architecting its existing
applications.
Which combination of actions should a solutions architect take to meet these requirements? (Choose two.)
D. Deploy an AWS Storage Gateway volume gateway to replace the block storage.
E. Deploy Amazon Elastic File System (Amazon EFS) volumes and mount them to on-premises servers.
Correct Answer: BD
A company has a service that reads and writes large amounts of data from an Amazon S3 bucket in the same AWS Region. The service is
deployed on Amazon EC2 instances within the private subnet of a VPC. The service communicates with Amazon S3 over a NAT gateway in the
public subnet. However, the company wants a solution that will reduce the data output costs.
A. Provision a dedicated EC2 NAT instance in the public subnet. Configure the route table for the private subnet to use the elastic network
interface of this instance as the destination for all S3 traffic.
B. Provision a dedicated EC2 NAT instance in the private subnet. Configure the route table for the public subnet to use the elastic network
interface of this instance as the destination for all S3 traffic.
C. Provision a VPC gateway endpoint. Configure the route table for the private subnet to use the gateway endpoint as the route for all S3
traffic.
D. Provision a second NAT gateway. Configure the route table for the private subnet to use this NAT gateway as the destination for all S3
traffic.
Correct Answer: C
A company uses Amazon S3 to store high-resolution pictures in an S3 bucket. To minimize application changes, the company stores the pictures
as the latest version of an S3 object. The company needs to retain only the two most recent versions of the pictures.
The company wants to reduce costs. The company has identified the S3 bucket as a large expense.
Which solution will reduce the S3 costs with the LEAST operational overhead?
A. Use S3 Lifecycle to delete expired object versions and retain the two most recent versions.
B. Use an AWS Lambda function to check for older versions and delete all but the two most recent versions.
C. Use S3 Batch Operations to delete noncurrent object versions and retain only the two most recent versions.
D. Deactivate versioning on the S3 bucket and retain the two most recent versions.
Correct Answer: A
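A minimal boto3 sketch of the lifecycle rule in option A; the bucket name is a hypothetical placeholder:

import boto3

s3 = boto3.client("s3")

# Keep the current version plus one noncurrent version (two most recent in
# total); older noncurrent versions expire one day after becoming noncurrent.
s3.put_bucket_lifecycle_configuration(
    Bucket="high-res-pictures",
    LifecycleConfiguration={
        "Rules": [
            {
                "ID": "retain-two-versions",
                "Status": "Enabled",
                "Filter": {},
                "NoncurrentVersionExpiration": {
                    "NoncurrentDays": 1,
                    "NewerNoncurrentVersions": 1,
                },
            }
        ]
    },
)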
A company needs to minimize the cost of its 1 Gbps AWS Direct Connect connection. The company's average connection utilization is less than
10%. A solutions architect must recommend a solution that will reduce the cost without compromising security.
A. Set up a new 1 Gbps Direct Connect connection. Share the connection with another AWS account.
B. Set up a new 200 Mbps Direct Connect connection in the AWS Management Console.
C. Contact an AWS Direct Connect Partner to order a 1 Gbps connection. Share the connection with another AWS account.
D. Contact an AWS Direct Connect Partner to order a 200 Mbps hosted connection for an existing AWS account.
Correct Answer: B
https://docs.aws.amazon.com/whitepapers/latest/aws-vpc-connectivity-options/aws-direct-connect.html
upvoted 3 times
For Dedicated Connections, 1 Gbps, 10 Gbps, and 100 Gbps ports are available. For Hosted Connections, connection speeds of 50 Mbps,
100 Mbps, 200 Mbps, 300 Mbps, 400 Mbps, 500 Mbps, 1 Gbps, 2 Gbps, 5 Gbps and 10 Gbps may be ordered from approved AWS Direct
Connect Partners. See AWS Direct Connect Partners for more information.
upvoted 4 times
A company has multiple Windows file servers on premises. The company wants to migrate and consolidate its files into an Amazon FSx for
Windows File Server file system. File permissions must be preserved to ensure that access rights do not change.
A. Deploy AWS DataSync agents on premises. Schedule DataSync tasks to transfer the data to the FSx for Windows File Server file system.
B. Copy the shares on each file server into Amazon S3 buckets by using the AWS CLI. Schedule AWS DataSync tasks to transfer the data to the
FSx for Windows File Server file system.
C. Remove the drives from each file server. Ship the drives to AWS for import into Amazon S3. Schedule AWS DataSync tasks to transfer the
data to the FSx for Windows File Server file system.
D. Order an AWS Snowcone device. Connect the device to the on-premises network. Launch AWS DataSync agents on the device. Schedule
DataSync tasks to transfer the data to the FSx for Windows File Server file system.
E. Order an AWS Snowball Edge Storage Optimized device. Connect the device to the on-premises network. Copy data to the device by using
the AWS CLI. Ship the device back to AWS for import into Amazon S3. Schedule AWS DataSync tasks to transfer the data to the FSx for
Windows File Server file system.
Correct Answer: AD
A company wants to ingest customer payment data into the company's data lake in Amazon S3. The company receives payment data every minute
on average. The company wants to analyze the payment data in real time. Then the company wants to ingest the data into the data lake.
Which solution will meet these requirements with the MOST operational efficiency?
A. Use Amazon Kinesis Data Streams to ingest data. Use AWS Lambda to analyze the data in real time.
B. Use AWS Glue to ingest data. Use Amazon Kinesis Data Analytics to analyze the data in real time.
C. Use Amazon Kinesis Data Firehose to ingest data. Use Amazon Kinesis Data Analytics to analyze the data in real time.
D. Use Amazon API Gateway to ingest data. Use AWS Lambda to analyze the data in real time.
Correct Answer: A
A company runs a website that uses a content management system (CMS) on Amazon EC2. The CMS runs on a single EC2 instance and uses an
Amazon Aurora MySQL Multi-AZ DB instance for the data tier. Website images are stored on an Amazon Elastic Block Store (Amazon EBS) volume
that is mounted inside the EC2 instance.
Which combination of actions should a solutions architect take to improve the performance and resilience of the website? (Choose two.)
A. Move the website images into an Amazon S3 bucket that is mounted on every EC2 instance
B. Share the website images by using an NFS share from the primary EC2 instance. Mount this share on the other EC2 instances.
C. Move the website images onto an Amazon Elastic File System (Amazon EFS) file system that is mounted on every EC2 instance.
D. Create an Amazon Machine Image (AMI) from the existing EC2 instance. Use the AMI to provision new instances behind an Application
Load Balancer as part of an Auto Scaling group. Configure the Auto Scaling group to maintain a minimum of two instances. Configure an
accelerator in AWS Global Accelerator for the website
E. Create an Amazon Machine Image (AMI) from the existing EC2 instance. Use the AMI to provision new instances behind an Application
Load Balancer as part of an Auto Scaling group. Configure the Auto Scaling group to maintain a minimum of two instances. Configure an
Amazon CloudFront distribution for the website.
Correct Answer: DE
Migrating static website assets like images to Amazon S3 enables high scalability, durability and shared access across instances. This
improves performance.
Using Auto Scaling with load balancing provides elasticity and resilience. Adding a CloudFront distribution further boosts performance
through caching and content delivery.
upvoted 1 times
Even though there is no mention of 'cost efficient' in this question, in the real world cost is the no.1 factor.
In the exam I believe both options would be a pass.
https://aws.amazon.com/blogs/storage/mounting-amazon-s3-to-an-amazon-ec2-instance-using-a-private-connection-to-s3-file-gateway/
upvoted 3 times
AshutoshSingh1923 3 months ago
Selected Answer: CE
Option C provides moving the website images onto an Amazon EFS file system that is mounted on every EC2 instance. Amazon EFS
provides a scalable and fully managed file storage solution that can be accessed concurrently from multiple EC2 instances. This ensures
that the website images can be accessed efficiently and consistently by all instances, improving performance
In Option E The Auto Scaling group maintains a minimum of two instances, ensuring resilience by automatically replacing any unhealthy
instances. Additionally, configuring an Amazon CloudFront distribution for the website further improves performance by caching content
at edge locations closer to the end-users, reducing latency and improving content delivery.
Hence combining these actions, the website's performance is improved through efficient image storage and content delivery
upvoted 1 times
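For illustration, option C could be provisioned with boto3 along these lines; subnet and security group IDs are hypothetical placeholders:

import boto3

efs = boto3.client("efs")

# Shared file system for the website images.
fs = efs.create_file_system(
    CreationToken="cms-images",
    PerformanceMode="generalPurpose",
    Encrypted=True,
)

# One mount target per Availability Zone so every EC2 instance in the Auto
# Scaling group can mount the same file system.
for subnet_id in ["subnet-0123456789abcdef0", "subnet-0fedcba9876543210"]:
    efs.create_mount_target(
        FileSystemId=fs["FileSystemId"],
        SubnetId=subnet_id,
        SecurityGroups=["sg-0abc1234567890def"],
    )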
A company runs an infrastructure monitoring service. The company is building a new feature that will enable the service to monitor data in
customer AWS accounts. The new feature will call AWS APIs in customer accounts to describe Amazon EC2 instances and read Amazon
CloudWatch metrics.
What should the company do to obtain access to customer accounts in the MOST secure way?
A. Ensure that the customers create an IAM role in their account with read-only EC2 and CloudWatch permissions and a trust policy to the
company’s account.
B. Create a serverless API that implements a token vending machine to provide temporary AWS credentials for a role with read-only EC2 and
CloudWatch permissions.
C. Ensure that the customers create an IAM user in their account with read-only EC2 and CloudWatch permissions. Encrypt and store
customer access and secret keys in a secrets management system.
D. Ensure that the customers create an Amazon Cognito user in their account to use an IAM role with read-only EC2 and CloudWatch
permissions. Encrypt and store the Amazon Cognito user and password in a secrets management system.
Correct Answer: A
Having customers create a cross-account IAM role with the appropriate permissions, and configuring the trust policy to allow the
monitoring service principal account access, implements secure delegation and least privilege access.
upvoted 1 times
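For illustration only, a minimal boto3 sketch of what the customer-side setup in option A could look like; the role name, account ID, and external ID are hypothetical:

```python
import json
import boto3

iam = boto3.client("iam")

# Trust policy that lets the monitoring company's account (example ID) assume the role
trust_policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Principal": {"AWS": "arn:aws:iam::999988887777:root"},
        "Action": "sts:AssumeRole",
        "Condition": {"StringEquals": {"sts:ExternalId": "customer-1234"}},
    }],
}

iam.create_role(
    RoleName="MonitoringReadOnlyRole",
    AssumeRolePolicyDocument=json.dumps(trust_policy),
)

# Attach AWS managed read-only policies for EC2 and CloudWatch
for policy_arn in (
    "arn:aws:iam::aws:policy/AmazonEC2ReadOnlyAccess",
    "arn:aws:iam::aws:policy/CloudWatchReadOnlyAccess",
):
    iam.attach_role_policy(RoleName="MonitoringReadOnlyRole", PolicyArn=policy_arn)
```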
A company needs to connect several VPCs in the us-east-1 Region that span hundreds of AWS accounts. The company's networking team has its
own AWS account to manage the cloud network.
A. Set up VPC peering connections between each VPC. Update each associated subnet’s route table
B. Configure a NAT gateway and an internet gateway in each VPC to connect each VPC through the internet
C. Create an AWS Transit Gateway in the networking team’s AWS account. Configure static routes from each VPC.
D. Deploy VPN gateways in each VPC. Create a transit VPC in the networking team’s AWS account to connect to each VPC.
Correct Answer: C
Using AWS Transit Gateway allows all the VPCs to connect to a central hub without needing to create a mesh of VPC peering connections
between each VPC pair.
This significantly reduces the operational overhead of managing the network topology as new VPCs are added or changed.
The networking team can centrally manage the Transit Gateway routing and share it across accounts using Resource Access Manager.
upvoted 2 times
A company has Amazon EC2 instances that run nightly batch jobs to process data. The EC2 instances run in an Auto Scaling group that uses On-
Demand billing. If a job fails on one instance, another instance will reprocess the job. The batch jobs run between 12:00 AM and 06:00 AM local
time every day.
Which solution will provide EC2 instances to meet these requirements MOST cost-effectively?
A. Purchase a 1-year Savings Plan for Amazon EC2 that covers the instance family of the Auto Scaling group that the batch job uses.
B. Purchase a 1-year Reserved Instance for the specific instance type and operating system of the instances in the Auto Scaling group that the
batch job uses.
C. Create a new launch template for the Auto Scaling group. Set the instances to Spot Instances. Set a policy to scale out based on CPU
usage.
D. Create a new launch template for the Auto Scaling group. Increase the instance size. Set a policy to scale out based on CPU usage.
Correct Answer: C
Using Spot Instances allows EC2 capacity to be purchased at significant discounts compared to On-Demand prices. The auto scaling group
can scale out to add Spot Instances when needed for the batch jobs.
If Spot capacity is interrupted, the failed job can simply be reprocessed on another instance, so the potential for
interruptions is acceptable since failed jobs can be re-run.
upvoted 2 times
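As a rough sketch of option C with boto3 (group, template, and subnet names are hypothetical, and a launch template is assumed to already exist), an Auto Scaling group can be pointed at Spot capacity through a mixed instances policy:

```python
import boto3

autoscaling = boto3.client("autoscaling")

autoscaling.create_auto_scaling_group(
    AutoScalingGroupName="nightly-batch-asg",
    MinSize=0,
    MaxSize=20,
    VPCZoneIdentifier="subnet-0aaa1111,subnet-0bbb2222",
    MixedInstancesPolicy={
        "LaunchTemplate": {
            "LaunchTemplateSpecification": {
                "LaunchTemplateName": "batch-spot-template",
                "Version": "$Latest",
            }
        },
        "InstancesDistribution": {
            "OnDemandPercentageAboveBaseCapacity": 0,   # 100% Spot above the base
            "SpotAllocationStrategy": "capacity-optimized",
        },
    },
)
```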
A social media company is building a feature for its website. The feature will give users the ability to upload photos. The company expects
significant increases in demand during large events and must ensure that the website can handle the upload traffic from users.
A. Upload files from the user's browser to the application servers. Transfer the files to an Amazon S3 bucket.
B. Provision an AWS Storage Gateway file gateway. Upload files directly from the user's browser to the file gateway.
C. Generate Amazon S3 presigned URLs in the application. Upload files directly from the user's browser into an S3 bucket.
D. Provision an Amazon Elastic File System (Amazon EFS) file system. Upload files directly from the user's browser to the file system.
Correct Answer: C
Generating S3 presigned URLs allows users to upload directly to S3 instead of application servers. This removes the application servers as
a bottleneck for upload traffic.
S3 can scale to handle very high volumes of uploads with no limits on storage or throughput. Using presigned URLs leverages this
scalability.
upvoted 1 times
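A minimal boto3 sketch of how the application could mint an upload URL (bucket and key names are hypothetical):

```python
import boto3

s3 = boto3.client("s3")

# The application returns this URL to the browser, which then PUTs the photo
# directly to S3, bypassing the application servers entirely.
upload_url = s3.generate_presigned_url(
    ClientMethod="put_object",
    Params={"Bucket": "photo-uploads-bucket", "Key": "uploads/photo-123.jpg"},
    ExpiresIn=300,  # URL is valid for 5 minutes
)
print(upload_url)
```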
https://docs.aws.amazon.com/AmazonS3/latest/userguide/ShareObjectPreSignedURL.html
upvoted 2 times
A company has a web application for travel ticketing. The application is based on a database that runs in a single data center in North America.
The company wants to expand the application to serve a global user base. The company needs to deploy the application to multiple AWS Regions.
Average latency must be less than 1 second on updates to the reservation database.
The company wants to have separate deployments of its web platform across multiple Regions. However, the company must maintain a single
primary reservation database that is globally consistent.
A. Convert the application to use Amazon DynamoDB. Use a global table for the center reservation table. Use the correct Regional endpoint in
each Regional deployment.
B. Migrate the database to an Amazon Aurora MySQL database. Deploy Aurora Read Replicas in each Region. Use the correct Regional
endpoint in each Regional deployment for access to the database.
C. Migrate the database to an Amazon RDS for MySQL database. Deploy MySQL read replicas in each Region. Use the correct Regional
endpoint in each Regional deployment for access to the database.
D. Migrate the application to an Amazon Aurora Serverless database. Deploy instances of the database to each Region. Use the correct
Regional endpoint in each Regional deployment to access the database. Use AWS Lambda functions to process event streams in each Region
to synchronize the databases.
Correct Answer: B
DynamoDB global tables allow BOTH reads and writes in all Regions (“last writer wins”), so there is no single primary point of entry. You could set up an IAM
identity-based policy to restrict write access for the replicas that are not in NA, but that is not mentioned.
upvoted 1 times
Global reads with local latency – If you have offices around the world, you can use an Aurora global database to keep your main sources of
information updated in the primary AWS Region. Offices in your other Regions can access the information in their own Region, with local
latency.
https://docs.aws.amazon.com/AmazonRDS/latest/AuroraUserGuide/aurora-global-database.html
D: although D is also using an Aurora global database, there is no need for a Lambda function to sync the data.
upvoted 1 times
Amazon Aurora Global Database is designed for globally distributed applications, allowing a single Amazon Aurora database to span
multiple AWS Regions. It replicates your data with no impact on database performance, enables fast local reads with low latency in each
Region, and provides disaster recovery from Region-wide outages.
Ref: https://aws.amazon.com/rds/aurora/global-database/
upvoted 1 times
https://aws.amazon.com/blogs/architecture/using-amazon-aurora-global-database-for-low-latency-without-application-changes/
upvoted 1 times
A company has migrated multiple Microsoft Windows Server workloads to Amazon EC2 instances that run in the us-west-1 Region. The company
manually backs up the workloads to create an image as needed.
In the event of a natural disaster in the us-west-1 Region, the company wants to recover workloads quickly in the us-west-2 Region. The company
wants no more than 24 hours of data loss on the EC2 instances. The company also wants to automate any backups of the EC2 instances.
Which solutions will meet these requirements with the LEAST administrative effort? (Choose two.)
A. Create an Amazon EC2-backed Amazon Machine Image (AMI) lifecycle policy to create a backup based on tags. Schedule the backup to run
twice daily. Copy the image on demand.
B. Create an Amazon EC2-backed Amazon Machine Image (AMI) lifecycle policy to create a backup based on tags. Schedule the backup to run
twice daily. Configure the copy to the us-west-2 Region.
C. Create backup vaults in us-west-1 and in us-west-2 by using AWS Backup. Create a backup plan for the EC2 instances based on tag values.
Create an AWS Lambda function to run as a scheduled job to copy the backup data to us-west-2.
D. Create a backup vault by using AWS Backup. Use AWS Backup to create a backup plan for the EC2 instances based on tag values. Define
the destination for the copy as us-west-2. Specify the backup schedule to run twice daily.
E. Create a backup vault by using AWS Backup. Use AWS Backup to create a backup plan for the EC2 instances based on tag values. Specify
the backup schedule to run twice daily. Copy on demand to us-west-2.
Correct Answer: BC
B uses EC2 image lifecycle policies to automatically create AMIs of the instances twice daily and copy them to the us-west-2 region. This
automates regional backups.
D leverages AWS Backup to define a backup plan that runs twice daily and copies backups to us-west-2. AWS Backup automates EC2
instance backups.
Together, these options provide automated, regional EC2 backup capabilities with minimal administrative overhead.
upvoted 1 times
Option D proposes using AWS Backup, which provides a centralized backup management solution. By creating a backup vault and backup
plan based on tag values, the company can automate the backup process for the EC2 instances. The backup schedule can be set to run
twice daily, and the destination for the copy can be defined as the us-west-2 Region.
upvoted 4 times
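A rough boto3 sketch of the backup plan described in option D (vault names, account ID, and retention are hypothetical); a tag-based backup selection would still need to be attached to the plan afterwards:

```python
import boto3

backup = boto3.client("backup")

backup.create_backup_plan(
    BackupPlan={
        "BackupPlanName": "ec2-dr-plan",
        "Rules": [{
            "RuleName": "twice-daily-with-cross-region-copy",
            "TargetBackupVaultName": "primary-vault",        # vault in us-west-1
            "ScheduleExpression": "cron(0 0,12 * * ? *)",    # runs twice daily
            "Lifecycle": {"DeleteAfterDays": 7},             # retention is arbitrary here
            "CopyActions": [{
                "DestinationBackupVaultArn":
                    "arn:aws:backup:us-west-2:111122223333:backup-vault:dr-vault"
            }],
        }],
    }
)
```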
A company operates a two-tier application for image processing. The application uses two Availability Zones, each with one public subnet and one
private subnet. An Application Load Balancer (ALB) for the web tier uses the public subnets. Amazon EC2 instances for the application tier use
the private subnets.
Users report that the application is running more slowly than expected. A security audit of the web server log files shows that the application is
receiving millions of illegitimate requests from a small number of IP addresses. A solutions architect needs to resolve the immediate performance
problem while the company investigates a more permanent solution.
A. Modify the inbound security group for the web tier. Add a deny rule for the IP addresses that are consuming resources.
B. Modify the network ACL for the web tier subnets. Add an inbound deny rule for the IP addresses that are consuming resources.
C. Modify the inbound security group for the application tier. Add a deny rule for the IP addresses that are consuming resources.
D. Modify the network ACL for the application tier subnets. Add an inbound deny rule for the IP addresses that are consuming resources.
Correct Answer: B
Security group changes (Options A and C) would not be effective since the requests are not even reaching those resources.
Modifying the application tier ACL (Option D) would not stop the bad traffic from hitting the web tier.
upvoted 1 times
By adding an inbound deny rule specifically targeting the IP addresses that are consuming resources, the network ACL can block the
illegitimate traffic at the subnet level before it reaches the web servers. This will help alleviate the excessive load on the web tier and
improve the application's performance.
upvoted 4 times
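For illustration, a minimal boto3 sketch of the deny rule in option B (the NACL ID, rule number, and IP address are hypothetical):

```python
import boto3

ec2 = boto3.client("ec2")

# Deny all traffic from one offending address; NACL rules are evaluated in
# ascending rule-number order, so this deny is hit before the existing allows.
ec2.create_network_acl_entry(
    NetworkAclId="acl-0abc1234",        # NACL of the web tier subnets
    RuleNumber=90,
    Protocol="-1",                      # all protocols
    RuleAction="deny",
    Egress=False,                       # inbound rule
    CidrBlock="203.0.113.10/32",
)
```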
A global marketing company has applications that run in the ap-southeast-2 Region and the eu-west-1 Region. Applications that run in a VPC in eu-
west-1 need to communicate securely with databases that run in a VPC in ap-southeast-2.
A. Create a VPC peering connection between the eu-west-1 VPC and the ap-southeast-2 VPC. Create an inbound rule in the eu-west-1
application security group that allows traffic from the database server IP addresses in the ap-southeast-2 security group.
B. Configure a VPC peering connection between the ap-southeast-2 VPC and the eu-west-1 VPC. Update the subnet route tables. Create an
inbound rule in the ap-southeast-2 database security group that references the security group ID of the application servers in eu-west-1.
C. Configure a VPC peering connection between the ap-southeast-2 VPC and the eu-west-1 VPC. Update the subnet route tables. Create an
inbound rule in the ap-southeast-2 database security group that allows traffic from the eu-west-1 application server IP addresses.
D. Create a transit gateway with a peering attachment between the eu-west-1 VPC and the ap-southeast-2 VPC. After the transit gateways are
properly peered and routing is configured, create an inbound rule in the database security group that references the security group ID of the
application servers in eu-west-1.
Correct Answer: B
Subnet Route Tables: After establishing the VPC peering connection, the subnet route tables need to be updated in both VPCs to route
traffic to the other VPC's CIDR blocks through the peering connection.
Inbound Rule in Database Security Group: By creating an inbound rule in the ap-southeast-2 database security group that allows traffic
from the eu-west-1 application server IP addresses, you ensure that only the specified application servers from the eu-west-1 VPC can
access the database servers in the ap-southeast-2 VPC.
upvoted 1 times
This option establishes the correct network connectivity for the applications in eu-west-1 to reach the databases in ap-southeast-2:
In the exam, both option B and option C would be a pass. In the real world, both options will work.
upvoted 2 times
Therefore, still C, because we cannot reference the security group ID of a VPC in a different Region over the peering connection; we should use the CIDR block instead.
upvoted 1 times
Additionally, updating the subnet route tables is necessary to ensure that the traffic destined for the remote VPC is correctly routed
through the VPC peering connection.
To secure the communication, an inbound rule is created in the ap-southeast-2 database security group. This rule references the security
group ID of the application servers in the eu-west-1 VPC, allowing traffic only from those instances. This approach ensures that only the
authorized application servers can access the databases in the ap-southeast-2 VPC.
upvoted 3 times
Question #511 Topic 1
A company is developing software that uses a PostgreSQL database schema. The company needs to configure multiple development
environments and databases for the company's developers. On average, each development environment is used for half of the 8-hour workday.
A. Configure each development environment with its own Amazon Aurora PostgreSQL database
B. Configure each development environment with its own Amazon RDS for PostgreSQL Single-AZ DB instances
C. Configure each development environment with its own Amazon Aurora On-Demand PostgreSQL-Compatible database
D. Configure each development environment with its own Amazon S3 bucket by using Amazon S3 Object Select
Correct Answer: B
RDS Single-AZ DB instances can be stopped when they are not in use, minimizing costs for dev environments that are not used full-time
RDS charges by the hour for DB instance hours used, versus Aurora clusters, which have hourly uptime charges for every instance in the cluster
PostgreSQL is natively supported by RDS so no compatibility issues
S3 Object Select (Option D) does not provide full database functionality
Aurora (Options A and C) has higher minimum costs than RDS even when not fully utilized
upvoted 2 times
The other options are not as cost-effective. Option A, configuring each development environment with its own Amazon Aurora PostgreSQL
database, would require you to pay for the database instance even when it is not in use. Option B, configuring each development
environment with its own Amazon RDS for PostgreSQL Single-AZ DB instance, would also require you to pay for the database instance
even when it is not in use. Option D, configuring each development environment with its own Amazon S3 bucket by using Amazon S3
Object Select, is not a viable option as Amazon S3 is not a database.
upvoted 1 times
A company uses AWS Organizations with resources tagged by account. The company also uses AWS Backup to back up its AWS infrastructure
resources. The company needs to back up all AWS resources.
Which solution will meet these requirements with the LEAST operational overhead?
A. Use AWS Config to identify all untagged resources. Tag the identified resources programmatically. Use tags in the backup plan.
B. Use AWS Config to identify all resources that are not running. Add those resources to the backup vault.
C. Require all AWS account owners to review their resources to identify the resources that need to be backed up.
Correct Answer: A
AWS Config continuously evaluates resource configurations and can identify untagged resources
Resources can be programmatically tagged via the AWS SDK based on Config data
Backup plans can use tag criteria to automatically back up newly tagged resources
No manual review or resource discovery needed
upvoted 1 times
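A loose sketch of the tagging step in option A with boto3 (the Config rule name, tag key, and example ARN are hypothetical, and mapping the evaluation results to ARNs is elided):

```python
import boto3

config = boto3.client("config")
tagging = boto3.client("resourcegroupstaggingapi")

# List resources flagged as NON_COMPLIANT by a required-tags Config rule
evaluations = config.get_compliance_details_by_config_rule(
    ConfigRuleName="required-backup-tag",
    ComplianceTypes=["NON_COMPLIANT"],
)["EvaluationResults"]

# ...map the evaluation results to resource ARNs, then tag them so the
# AWS Backup plan's tag-based selection picks them up automatically.
tagging.tag_resources(
    ResourceARNList=["arn:aws:ec2:us-east-1:111122223333:volume/vol-0abc1234"],
    Tags={"backup": "true"},
)
```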
A social media company wants to allow its users to upload images in an application that is hosted in the AWS Cloud. The company needs a
solution that automatically resizes the images so that the images can be displayed on multiple device types. The application experiences
unpredictable traffic patterns throughout the day. The company is seeking a highly available solution that maximizes scalability.
A. Create a static website hosted in Amazon S3 that invokes AWS Lambda functions to resize the images and store the images in an Amazon
S3 bucket.
B. Create a static website hosted in Amazon CloudFront that invokes AWS Step Functions to resize the images and store the images in an
Amazon RDS database.
C. Create a dynamic website hosted on a web server that runs on an Amazon EC2 instance. Configure a process that runs on the EC2 instance
to resize the images and store the images in an Amazon S3 bucket.
D. Create a dynamic website hosted on an automatically scaling Amazon Elastic Container Service (Amazon ECS) cluster that creates a resize
job in Amazon Simple Queue Service (Amazon SQS). Set up an image-resizing program that runs on an Amazon EC2 instance to process the
resize jobs.
Correct Answer: A
S3 static website provides high availability and auto scaling to handle unpredictable traffic
Lambda functions invoked from the S3 site can resize images on the fly
Storing images in S3 buckets provides durability, scalability and high throughput
Serverless approach with S3 and Lambda maximizes scalability and availability
upvoted 1 times
A company is running a microservices application on Amazon EC2 instances. The company wants to migrate the application to an Amazon Elastic
Kubernetes Service (Amazon EKS) cluster for scalability. The company must configure the Amazon EKS control plane with endpoint private access
set to true and endpoint public access set to false to maintain security compliance. The company must also put the data plane in private subnets.
However, the company has received error notifications because the node cannot join the cluster.
A. Grant the required permission in AWS Identity and Access Management (IAM) to the AmazonEKSNodeRole IAM role.
B. Create interface VPC endpoints to allow nodes to access the control plane.
C. Recreate nodes in the public subnet. Restrict security groups for EC2 nodes.
Correct Answer: B
https://repost.aws/knowledge-center/eks-worker-nodes-cluster
https://docs.aws.amazon.com/eks/latest/userguide/create-node-role.html
upvoted 1 times
Creating these interface endpoints allows the EKS nodes to communicate with the control plane privately within the VPC to join the cluster.
upvoted 1 times
VPC Endpoints: When the control plane is set to private access, you need to set up VPC endpoints for the Amazon EKS service so that
the nodes in your private subnets can communicate with the EKS control plane without going through the public internet. These are
known as interface VPC endpoints.
upvoted 1 times
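A hedged boto3 sketch of the kind of interface endpoints a fully private data plane typically needs (VPC, subnet, and security group IDs are hypothetical; the exact service names depend on the Region and on what the nodes pull, e.g. ECR, EC2, STS):

```python
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

services = [
    "com.amazonaws.us-east-1.ecr.api",
    "com.amazonaws.us-east-1.ecr.dkr",
    "com.amazonaws.us-east-1.ec2",
    "com.amazonaws.us-east-1.sts",
]

for service in services:
    ec2.create_vpc_endpoint(
        VpcEndpointType="Interface",
        VpcId="vpc-0abc1234",
        ServiceName=service,
        SubnetIds=["subnet-0aaa1111", "subnet-0bbb2222"],  # private subnets
        SecurityGroupIds=["sg-0ccc3333"],
        PrivateDnsEnabled=True,
    )
```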
Kubernetes API requests within your cluster's VPC (such as node to control plane communication) use the private VPC endpoint.
https://docs.aws.amazon.com/eks/latest/userguide/cluster-endpoint.html
Answer is B
upvoted 1 times
The answer is A:
Nodes receive permissions for these API calls through an IAM instance profile and associated policies. Before you can launch nodes and
register them into a cluster, you must create an IAM role for those nodes to use when they are launched. This requirement applies to
nodes launched with the Amazon EKS optimized AMI provided by Amazon, or with any other node AMIs that you intend to use.
https://docs.aws.amazon.com/eks/latest/userguide/create-node-role.html
upvoted 3 times
https://docs.aws.amazon.com/eks/latest/userguide/cluster-endpoint.html
upvoted 4 times
y0 4 months, 1 week ago
Selected Answer: A
Check this : https://docs.aws.amazon.com/eks/latest/userguide/create-node-role.html
Also, EKS does not require VPC endpoints. This is not the right use case for EKS
upvoted 4 times
A company is migrating an on-premises application to AWS. The company wants to use Amazon Redshift as a solution.
Which use cases are suitable for Amazon Redshift in this scenario? (Choose three.)
A. Supporting data APIs to access data with traditional, containerized, and event-driven applications
C. Building analytics workloads during specified hours and when the application is not active
E. Scaling globally to support petabytes of data and tens of millions of requests per minute
F. Creating a secondary replica of the cluster by using the AWS Management Console
B) Redshift supports both client-side and server-side encryption to protect sensitive data.
C) Redshift is well suited for running batch analytics workloads during off-peak times without affecting OLTP systems.
E) Redshift can scale to massive datasets and concurrent users to support large analytics workloads.
upvoted 1 times
In fact, the Redshift query editor supports a maximum of 500 connections and a workgroup supports a maximum of 2,000 connections at once; see its quotas page.
Redshift has a cache layer, so D is correct.
upvoted 1 times
https://docs.aws.amazon.com/redshift/latest/mgmt/security-encryption.html
upvoted 1 times
C. Building analytics workloads during specified hours and when the application is not active: Amazon Redshift is optimized for running
complex analytic queries against very large datasets, making it a good choice for this use case.
E. Scaling globally to support petabytes of data and tens of millions of requests per minute: Amazon Redshift is designed to handle
petabytes of data, and to deliver fast query and I/O performance for virtually any size dataset.
upvoted 4 times
The Data API enables you to seamlessly access data from Redshift Serverless with all types of traditional, cloud-native, and containerized
serverless web service-based applications and event-driven applications.
upvoted 1 times
A company provides an API interface to customers so the customers can retrieve their financial information. The company expects a larger
number of requests during peak usage times of the year.
The company requires the API to respond consistently with low latency to ensure customer satisfaction. The company needs to provide a compute
host for the API.
Which solution will meet these requirements with the LEAST operational overhead?
A. Use an Application Load Balancer and Amazon Elastic Container Service (Amazon ECS).
B. Use Amazon API Gateway and AWS Lambda functions with provisioned concurrency.
C. Use an Application Load Balancer and an Amazon Elastic Kubernetes Service (Amazon EKS) cluster.
D. Use Amazon API Gateway and AWS Lambda functions with reserved concurrency.
Correct Answer: B
API Gateway handles the API requests and integration with Lambda
Lambda automatically scales compute without managing servers
Provisioned concurrency ensures consistent low latency by keeping functions initialized
No need to manage containers or orchestration platforms as with ECS/EKS
upvoted 1 times
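A minimal boto3 sketch of option B's provisioned concurrency setting (function name, alias, and count are hypothetical):

```python
import boto3

lambda_client = boto3.client("lambda")

# Keep 50 execution environments initialized on the "live" alias so peak
# requests are served without cold starts.
lambda_client.put_provisioned_concurrency_config(
    FunctionName="financial-api",
    Qualifier="live",                      # an alias or published version
    ProvisionedConcurrentExecutions=50,
)
```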
https://docs.aws.amazon.com/lambda/latest/dg/provisioned-concurrency.html#:~:text=for%20a%20function.-,Provisioned%20concurrency,-%E2%80%93%20Provisioned%20concurrency%20is
upvoted 1 times
A company wants to send all AWS Systems Manager Session Manager logs to an Amazon S3 bucket for archival purposes.
Which solution will meet this requirement with the MOST operational efficiency?
A. Enable S3 logging in the Systems Manager console. Choose an S3 bucket to send the session data to.
B. Install the Amazon CloudWatch agent. Push all logs to a CloudWatch log group. Export the logs to an S3 bucket from the group for archival
purposes.
C. Create a Systems Manager document to upload all server logs to a central S3 bucket. Use Amazon EventBridge to run the Systems Manager
document against all servers that are in the account daily.
D. Install an Amazon CloudWatch agent. Push all logs to a CloudWatch log group. Create a CloudWatch logs subscription that pushes any
incoming log events to an Amazon Kinesis Data Firehose delivery stream. Set Amazon S3 as the destination.
Correct Answer: D
https://docs.aws.amazon.com/systems-manager/latest/userguide/session-manager-logging.html
upvoted 1 times
B could be an option, by installing a logging package on all managed systems/EC2 instances, etc.: https://docs.aws.amazon.com/systems-manager/latest/userguide/distributor-working-with-packages-deploy.html
Option A simply enables S3 logging in the Systems Manager console, allowing you to directly send session logs to an S3 bucket. This
approach is straightforward and requires minimal configuration.
On the other hand, option D involves installing and configuring the Amazon CloudWatch agent, creating a CloudWatch log group, setting
up a CloudWatch Logs subscription, and configuring an Amazon Kinesis Data Firehose delivery stream to store logs in an S3 bucket. This
requires additional setup and management compared to option A.
So, if minimizing operational overhead is a priority, option A would be a simpler and more straightforward choice.
upvoted 3 times
An application uses an Amazon RDS MySQL DB instance. The RDS database is becoming low on disk space. A solutions architect wants to
increase the disk space without downtime.
Which solution meets these requirements with the LEAST amount of effort?
D. Back up the RDS database, increase the storage capacity, restore the database, and stop the previous instance
Correct Answer: A
https://aws.amazon.com/about-aws/whats-new/2019/06/rds-storage-auto-scaling/#:~:text=of%20the%20rest.-,RDS%20Storage%20Auto%20Scaling,-continuously%20monitors%20actual
upvoted 1 times
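For reference, a minimal boto3 sketch of enabling storage autoscaling on an existing instance (identifier and ceiling are hypothetical):

```python
import boto3

rds = boto3.client("rds")

# Setting MaxAllocatedStorage turns on RDS storage autoscaling; the volume
# grows automatically, without downtime, up to the ceiling.
rds.modify_db_instance(
    DBInstanceIdentifier="prod-mysql",
    MaxAllocatedStorage=2000,      # GiB ceiling for automatic growth
    ApplyImmediately=True,
)
```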
A consulting company provides professional services to customers worldwide. The company provides solutions and tools for customers to
expedite gathering and analyzing data on AWS. The company needs to centrally manage and deploy a common set of solutions and tools for
customers to use for self-service purposes.
Correct Answer: B
Centralized management - Products can be maintained in a single catalog for easy discovery and governance.
Self-service access - Customers can deploy the solutions on their own without manual intervention.
Standardization - Products provide pre-defined templates for consistent deployment.
Access control - Granular permissions can be applied to restrict product visibility and access.
Reporting - Service Catalog provides detailed analytics on product usage and deployments.
upvoted 1 times
https://aws.amazon.com/servicecatalog/#:~:text=How%20it%20works-,AWS%20Service%20Catalog,-lets%20you%20centrally
upvoted 1 times
A company is designing a new web application that will run on Amazon EC2 Instances. The application will use Amazon DynamoDB for backend
data storage. The application traffic will be unpredictable. The company expects that the application read and write throughput to the database
will be moderate to high. The company needs to scale in response to application traffic.
Which DynamoDB table configuration will meet these requirements MOST cost-effectively?
A. Configure DynamoDB with provisioned read and write by using the DynamoDB Standard table class. Set DynamoDB auto scaling to a
maximum defined capacity.
B. Configure DynamoDB in on-demand mode by using the DynamoDB Standard table class.
C. Configure DynamoDB with provisioned read and write by using the DynamoDB Standard Infrequent Access (DynamoDB Standard-IA) table
class. Set DynamoDB auto scaling to a maximum defined capacity.
D. Configure DynamoDB in on-demand mode by using the DynamoDB Standard Infrequent Access (DynamoDB Standard-IA) table class.
Correct Answer: B
With On-Demand mode, you only pay for what you use instead of over-provisioning capacity. This avoids idle capacity costs.
DynamoDB Standard provides the fastest performance needed for moderate-high traffic apps vs Standard-IA which is for less frequent
access.
Auto scaling with provisioned capacity can also work but requires more administrative effort to tune the scaling thresholds.
upvoted 1 times
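A minimal boto3 sketch of option B (table and attribute names are hypothetical):

```python
import boto3

dynamodb = boto3.client("dynamodb")

dynamodb.create_table(
    TableName="app-data",
    AttributeDefinitions=[{"AttributeName": "pk", "AttributeType": "S"}],
    KeySchema=[{"AttributeName": "pk", "KeyType": "HASH"}],
    BillingMode="PAY_PER_REQUEST",   # on-demand capacity; Standard table class is the default
)
```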
https://docs.aws.amazon.com/amazondynamodb/latest/developerguide/HowItWorks.ReadWriteCapacityMode.html
upvoted 1 times
With provisioned capacity you can also use auto scaling to automatically adjust your table’s capacity based on the specified utilization rate
to ensure application performance, and also to potentially reduce costs. To configure auto scaling in DynamoDB, set the minimum and
maximum levels of read and write capacity in addition to the target utilization percentage."
https://docs.aws.amazon.com/wellarchitected/latest/serverless-applications-lens/capacity.html
upvoted 2 times
A retail company has several businesses. The IT team for each business manages its own AWS account. Each team account is part of an
organization in AWS Organizations. Each team monitors its product inventory levels in an Amazon DynamoDB table in the team's own AWS
account.
The company is deploying a central inventory reporting application into a shared AWS account. The application must be able to read items from all
the teams' DynamoDB tables.
A. Integrate DynamoDB with AWS Secrets Manager in the inventory application account. Configure the application to use the correct secret
from Secrets Manager to authenticate and read the DynamoDB table. Schedule secret rotation for every 30 days.
B. In every business account, create an IAM user that has programmatic access. Configure the application to use the correct IAM user access
key ID and secret access key to authenticate and read the DynamoDB table. Manually rotate IAM access keys every 30 days.
C. In every business account, create an IAM role named BU_ROLE with a policy that gives the role access to the DynamoDB table and a trust
policy to trust a specific role in the inventory application account. In the inventory account, create a role named APP_ROLE that allows access
to the STS AssumeRole API operation. Configure the application to use APP_ROLE and assume the cross-account role BU_ROLE to read the
DynamoDB table.
D. Integrate DynamoDB with AWS Certificate Manager (ACM). Generate identity certificates to authenticate DynamoDB. Configure the
application to use the correct certificate to authenticate and read the DynamoDB table.
Correct Answer: C
Using cross-account IAM roles and role chaining allows the inventory application to securely access resources in other accounts. The roles
provide temporary credentials and can be permissions controlled.
upvoted 1 times
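A minimal boto3 sketch of the role chaining in option C from the application account's side (the account ID and table name are hypothetical; BU_ROLE comes from the question):

```python
import boto3

sts = boto3.client("sts")

# APP_ROLE's credentials are used here to assume BU_ROLE in a business account
creds = sts.assume_role(
    RoleArn="arn:aws:iam::111122223333:role/BU_ROLE",
    RoleSessionName="inventory-reporting",
)["Credentials"]

dynamodb = boto3.resource(
    "dynamodb",
    aws_access_key_id=creds["AccessKeyId"],
    aws_secret_access_key=creds["SecretAccessKey"],
    aws_session_token=creds["SessionToken"],
)
items = dynamodb.Table("ProductInventory").scan()["Items"]
```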
A company runs container applications by using Amazon Elastic Kubernetes Service (Amazon EKS). The company's workload is not consistent
throughout the day. The company wants Amazon EKS to scale in and out according to the workload.
Which combination of steps will meet these requirements with the LEAST operational overhead? (Choose two.)
C. Use the Kubernetes Cluster Autoscaler to manage the number of nodes in the cluster.
Correct Answer: BC
Using the Kubernetes Metrics Server (B) enables horizontal pod autoscaling to dynamically scale pods based on CPU/memory usage. This
allows scaling at the application tier level.
The Kubernetes Cluster Autoscaler (C) automatically adjusts the number of nodes in the EKS cluster in response to pod resource
requirements and events. This allows scaling at the infrastructure level.
upvoted 1 times
A company runs a microservice-based serverless web application. The application must be able to retrieve data from multiple Amazon DynamoDB
tables A solutions architect needs to give the application the ability to retrieve the data with no impact on the baseline performance of the
application.
Which solution will meet these requirements in the MOST operationally efficient way?
Correct Answer: A
A company wants to analyze and troubleshoot Access Denied errors and Unauthorized errors that are related to IAM permissions. The company
has AWS CloudTrail turned on.
Which solution will meet these requirements with the LEAST effort?
A. Use AWS Glue and write custom scripts to query CloudTrail logs for the errors.
B. Use AWS Batch and write custom scripts to query CloudTrail logs for the errors.
C. Search CloudTrail logs with Amazon Athena queries to identify the errors.
D. Search CloudTrail logs with Amazon QuickSight. Create a dashboard to identify the errors.
Correct Answer: C
With Athena, you can write simple SQL queries to filter the CloudTrail logs for the "AccessDenied" and "UnauthorizedOperation" error
codes. This will return the relevant log entries that you can then analyze.
upvoted 1 times
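For illustration, a sketch of running such a query with boto3 (the cloudtrail_logs table is assumed to have been created per the Athena CloudTrail docs, and the results bucket is hypothetical):

```python
import boto3

athena = boto3.client("athena")

QUERY = """
SELECT eventtime, useridentity.arn, eventsource, eventname, errorcode, errormessage
FROM cloudtrail_logs
WHERE errorcode IN ('AccessDenied', 'UnauthorizedOperation')
ORDER BY eventtime DESC
LIMIT 100
"""

athena.start_query_execution(
    QueryString=QUERY,
    QueryExecutionContext={"Database": "default"},
    ResultConfiguration={"OutputLocation": "s3://athena-results-bucket/"},
)
```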
https://docs.aws.amazon.com/athena/latest/ug/cloudtrail-logs.html#:~:text=CloudTrail%20Lake%20documentation.-,Using%20Athena,-with%20CloudTrail%20logs
upvoted 1 times
Amazon QuickSight supports logging the following actions as events in CloudTrail log files:
- Whether the request was made with root or AWS Identity and Access Management user credentials
- Whether the request was made with temporary security credentials for an IAM role or federated user
- Whether the request was made by another AWS service
https://docs.aws.amazon.com/quicksight/latest/user/logging-using-cloudtrail.html
upvoted 1 times
PCWu 3 months, 2 weeks ago
Selected Answer: C
The Answer will be C:
Need to use Athena to query keywords and sort out the error logs.
D: No need to use Amazon QuickSight to create the dashboard.
upvoted 1 times
A company wants to add its existing AWS usage cost to its operation cost dashboard. A solutions architect needs to recommend a solution that
will give the company access to its usage cost programmatically. The company must be able to access cost data for the current year and forecast
costs for the next 12 months.
Which solution will meet these requirements with the LEAST operational overhead?
A. Access usage cost-related data by using the AWS Cost Explorer API with pagination.
B. Access usage cost-related data by using downloadable AWS Cost Explorer report .csv files.
C. Configure AWS Budgets actions to send usage cost data to the company through FTP.
D. Create AWS Budgets reports for usage cost data. Send the data to the company through SMTP.
Correct Answer: D
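For reference, the programmatic access described in option A maps to the AWS Cost Explorer API; a minimal boto3 sketch (dates are example values):

```python
import boto3

ce = boto3.client("ce")

# Actual usage cost for the current year, month by month
actuals = ce.get_cost_and_usage(
    TimePeriod={"Start": "2024-01-01", "End": "2024-12-31"},
    Granularity="MONTHLY",
    Metrics=["UnblendedCost"],
)

# Forecast for the next 12 months
forecast = ce.get_cost_forecast(
    TimePeriod={"Start": "2025-01-01", "End": "2025-12-31"},
    Granularity="MONTHLY",
    Metric="UNBLENDED_COST",
)
```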
A solutions architect is reviewing the resilience of an application. The solutions architect notices that a database administrator recently failed
over the application's Amazon Aurora PostgreSQL database writer instance as part of a scaling exercise. The failover resulted in 3 minutes of
downtime for the application.
Which solution will reduce the downtime for scaling exercises with the LEAST operational overhead?
A. Create more Aurora PostgreSQL read replicas in the cluster to handle the load during failover.
B. Set up a secondary Aurora PostgreSQL cluster in the same AWS Region. During failover, update the application to use the secondary
cluster's writer endpoint.
C. Create an Amazon ElastiCache for Memcached cluster to handle the load during failover.
D. Set up an Amazon RDS proxy for the database. Update the application to use the proxy endpoint.
Correct Answer: D
A company has a regional subscription-based streaming service that runs in a single AWS Region. The architecture consists of web servers and
application servers on Amazon EC2 instances. The EC2 instances are in Auto Scaling groups behind Elastic Load Balancers. The architecture
includes an Amazon Aurora global database cluster that extends across multiple Availability Zones.
The company wants to expand globally and to ensure that its application has minimal downtime.
A. Extend the Auto Scaling groups for the web tier and the application tier to deploy instances in Availability Zones in a second Region. Use an
Aurora global database to deploy the database in the primary Region and the second Region. Use Amazon Route 53 health checks with a
failover routing policy to the second Region.
B. Deploy the web tier and the application tier to a second Region. Add an Aurora PostgreSQL cross-Region Aurora Replica in the second
Region. Use Amazon Route 53 health checks with a failover routing policy to the second Region. Promote the secondary to primary as needed.
C. Deploy the web tier and the application tier to a second Region. Create an Aurora PostgreSQL database in the second Region. Use AWS
Database Migration Service (AWS DMS) to replicate the primary database to the second Region. Use Amazon Route 53 health checks with a
failover routing policy to the second Region.
D. Deploy the web tier and the application tier to a second Region. Use an Amazon Aurora global database to deploy the database in the
primary Region and the second Region. Use Amazon Route 53 health checks with a failover routing policy to the second Region. Promote the
secondary to primary as needed.
Correct Answer: B
A data analytics company wants to migrate its batch processing system to AWS. The company receives thousands of small data files periodically
during the day through FTP. An on-premises batch job processes the data files overnight. However, the batch job takes hours to finish running.
The company wants the AWS solution to process incoming data files as soon as possible with minimal changes to the FTP clients that send the
files. The solution must delete the incoming data files after the files have been processed successfully. Processing for each file needs to take 3-8
minutes.
Which solution will meet these requirements in the MOST operationally efficient way?
A. Use an Amazon EC2 instance that runs an FTP server to store incoming files as objects in Amazon S3 Glacier Flexible Retrieval. Configure a
job queue in AWS Batch. Use Amazon EventBridge rules to invoke the job to process the objects nightly from S3 Glacier Flexible Retrieval.
Delete the objects after the job has processed the objects.
B. Use an Amazon EC2 instance that runs an FTP server to store incoming files on an Amazon Elastic Block Store (Amazon EBS) volume.
Configure a job queue in AWS Batch. Use Amazon EventBridge rules to invoke the job to process the files nightly from the EBS volume. Delete
the files after the job has processed the files.
C. Use AWS Transfer Family to create an FTP server to store incoming files on an Amazon Elastic Block Store (Amazon EBS) volume. Configure
a job queue in AWS Batch. Use an Amazon S3 event notification when each file arrives to invoke the job in AWS Batch. Delete the files after the
job has processed the files.
D. Use AWS Transfer Family to create an FTP server to store incoming files in Amazon S3 Standard. Create an AWS Lambda function to
process the files and to delete the files after they are processed. Use an S3 event notification to invoke the Lambda function when the files
arrive.
Correct Answer: B
Use AWS Transfer Family for the FTP server to receive files directly into S3. This avoids managing FTP servers.
Process each file as soon as it arrives using Lambda triggered by S3 events. Lambda provides fast processing time per file.
Lambda can also delete files after processing succeeds.
Options A, B, C involve more operational overhead of managing FTP servers and batch jobs. Processing latency would be higher waiting
for batch windows.
Storing files in Glacier (Option A) adds latency for retrieving files.
upvoted 1 times
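A minimal sketch of the Lambda handler in option D (the process step is a placeholder; 3-8 minutes per file fits within Lambda's 15-minute timeout):

```python
import urllib.parse
import boto3

s3 = boto3.client("s3")

def process(data: bytes) -> None:
    # Placeholder for the 3-8 minute per-file processing logic
    pass

def handler(event, context):
    for record in event["Records"]:
        bucket = record["s3"]["bucket"]["name"]
        key = urllib.parse.unquote_plus(record["s3"]["object"]["key"])
        obj = s3.get_object(Bucket=bucket, Key=key)
        process(obj["Body"].read())
        # Delete the incoming file only after successful processing
        s3.delete_object(Bucket=bucket, Key=key)
```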
C. Wrong, because AWS Batch is used to run large-scale jobs or process large amounts of data in one run, not to process each file as it arrives.
upvoted 1 times
D. Use AWS Transfer Family to create an FTP server to store incoming files in Amazon S3 Standard. Create an AWS Lambda function to
process the files and delete them after processing. Use an S3 event notification to invoke the Lambda function when the files arrive.
upvoted 1 times
"The company wants the AWS solution to process incoming data files <b>as soon as possible</b> with minimal changes to the FTP clients
that send the files."
upvoted 2 times
Question #529 Topic 1
A company is migrating its workloads to AWS. The company has transactional and sensitive data in its databases. The company wants to use
AWS Cloud solutions to increase security and reduce operational overhead for the databases.
A. Migrate the databases to Amazon EC2. Use an AWS Key Management Service (AWS KMS) AWS managed key for encryption.
C. Migrate the data to Amazon S3 Use Amazon Macie for data security and protection
D. Migrate the database to Amazon RDS. Use Amazon CloudWatch Logs for data security and protection.
Correct Answer: A
A company has an online gaming application that has TCP and UDP multiplayer gaming capabilities. The company uses Amazon Route 53 to point
the application traffic to multiple Network Load Balancers (NLBs) in different AWS Regions. The company needs to improve application
performance and decrease latency for the online game in preparation for user growth.
A. Add an Amazon CloudFront distribution in front of the NLBs. Increase the Cache-Control max-age parameter.
B. Replace the NLBs with Application Load Balancers (ALBs). Configure Route 53 to use latency-based routing.
C. Add AWS Global Accelerator in front of the NLBs. Configure a Global Accelerator endpoint to use the correct listener ports.
D. Add an Amazon API Gateway endpoint behind the NLBs. Enable API caching. Override method caching for the different stages.
Correct Answer: D
The application uses TCP and UDP for multiplayer gaming, so Network Load Balancers (NLBs) are appropriate.
AWS Global Accelerator can be added in front of the NLBs to improve performance and reduce latency by intelligently routing traffic
across AWS Regions and Availability Zones.
Global Accelerator provides static anycast IP addresses that act as a fixed entry point to application endpoints in the optimal AWS location.
This improves availability and reduces latency.
The Global Accelerator endpoint can be configured with the correct NLB listener ports for TCP and UDP.
upvoted 2 times
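A rough boto3 sketch of option C (the accelerator name, UDP port, Region, and NLB ARN are hypothetical; the Global Accelerator API itself is called in us-west-2):

```python
import boto3

ga = boto3.client("globalaccelerator", region_name="us-west-2")

accelerator = ga.create_accelerator(Name="game-accelerator", Enabled=True)
arn = accelerator["Accelerator"]["AcceleratorArn"]

# One listener per protocol/port the game uses (a TCP listener would be added the same way)
listener = ga.create_listener(
    AcceleratorArn=arn,
    Protocol="UDP",
    PortRanges=[{"FromPort": 3000, "ToPort": 3000}],
)

# An endpoint group per Region, pointing at that Region's NLB
ga.create_endpoint_group(
    ListenerArn=listener["Listener"]["ListenerArn"],
    EndpointGroupRegion="eu-west-1",
    EndpointConfigurations=[{
        "EndpointId": "arn:aws:elasticloadbalancing:eu-west-1:111122223333:loadbalancer/net/game-nlb/abc123",
        "Weight": 128,
    }],
)
```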
A company needs to integrate with a third-party data feed. The data feed sends a webhook to notify an external service when new data is ready for
consumption. A developer wrote an AWS Lambda function to retrieve data when the company receives a webhook callback. The developer must
make the Lambda function available for the third party to call.
Which solution will meet these requirements with the MOST operational efficiency?
A. Create a function URL for the Lambda function. Provide the Lambda function URL to the third party for the webhook.
B. Deploy an Application Load Balancer (ALB) in front of the Lambda function. Provide the ALB URL to the third party for the webhook.
C. Create an Amazon Simple Notification Service (Amazon SNS) topic. Attach the topic to the Lambda function. Provide the public hostname
of the SNS topic to the third party for the webhook.
D. Create an Amazon Simple Queue Service (Amazon SQS) queue. Attach the queue to the Lambda function. Provide the public hostname of
the SQS queue to the third party for the webhook.
Correct Answer: B
A company has a workload in an AWS Region. Customers connect to and access the workload by using an Amazon API Gateway REST API. The
company uses Amazon Route 53 as its DNS provider. The company wants to provide individual and secure URLs for all customers.
Which combination of steps will meet these requirements with the MOST operational efficiency? (Choose three.)
A. Register the required domain in a registrar. Create a wildcard custom domain name in a Route 53 hosted zone and record in the zone that
points to the API Gateway endpoint.
B. Request a wildcard certificate that matches the domains in AWS Certificate Manager (ACM) in a different Region.
C. Create hosted zones for each customer as required in Route 53. Create zone records that point to the API Gateway endpoint.
D. Request a wildcard certificate that matches the custom domain name in AWS Certificate Manager (ACM) in the same Region.
F. Create a custom domain name in API Gateway for the REST API. Import the certificate from AWS Certificate Manager (ACM).
Using a wildcard domain and certificate avoids managing individual domains/certs per customer. This is more efficient.
The domain, hosted zone, and certificate should all be in the same region as the API Gateway REST API for simplicity.
Creating multiple API endpoints per customer (Option E) adds complexity and is not required.
Option B and C add unnecessary complexity by separating domains, certificates, and hosted zones.
upvoted 2 times
Step D is to request a wildcard certificate from AWS Certificate Manager (ACM) that matches the custom domain name you created in Step
A. This wildcard certificate will cover all subdomains and ensure secure HTTPS communication.
Step F is to create a custom domain name in API Gateway for your REST API. This allows you to associate the custom domain name with
your API Gateway endpoints and import the certificate from ACM for secure communication.
upvoted 2 times
A company stores data in Amazon S3. According to regulations, the data must not contain personally identifiable information (PII). The company
recently discovered that S3 buckets have some objects that contain PII. The company needs to automatically detect PII in S3 buckets and to notify
the company’s security team.
A. Use Amazon Macie. Create an Amazon EventBridge rule to filter the SensitiveData event type from Macie findings and to send an Amazon
Simple Notification Service (Amazon SNS) notification to the security team.
B. Use Amazon GuardDuty. Create an Amazon EventBridge rule to filter the CRITICAL event type from GuardDuty findings and to send an
Amazon Simple Notification Service (Amazon SNS) notification to the security team.
C. Use Amazon Macie. Create an Amazon EventBridge rule to filter the SensitiveData:S3Object/Personal event type from Macie findings and to
send an Amazon Simple Queue Service (Amazon SQS) notification to the security team.
D. Use Amazon GuardDuty. Create an Amazon EventBridge rule to filter the CRITICAL event type from GuardDuty findings and to send an
Amazon Simple Queue Service (Amazon SQS) notification to the security team.
Correct Answer: C
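For illustration, a minimal boto3 sketch of the EventBridge wiring for Macie findings (the rule name, account ID, and topic ARN are hypothetical):

```python
import json
import boto3

events = boto3.client("events")

# Match Macie findings whose type starts with "SensitiveData"
pattern = {
    "source": ["aws.macie"],
    "detail-type": ["Macie Finding"],
    "detail": {"type": [{"prefix": "SensitiveData"}]},
}

events.put_rule(Name="macie-pii-findings", EventPattern=json.dumps(pattern))
events.put_targets(
    Rule="macie-pii-findings",
    Targets=[{
        "Id": "security-team-sns",
        "Arn": "arn:aws:sns:us-east-1:111122223333:security-alerts",
    }],
)
```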
A company wants to build a logging solution for its multiple AWS accounts. The company currently stores the logs from all accounts in a
centralized account. The company has created an Amazon S3 bucket in the centralized account to store the VPC flow logs and AWS CloudTrail
logs. All logs must be highly available for 30 days for frequent analysis, retained for an additional 60 days for backup purposes, and deleted 90
days after creation.
A. Transition objects to the S3 Standard storage class 30 days after creation. Write an expiration action that directs Amazon S3 to delete
objects after 90 days.
B. Transition objects to the S3 Standard-Infrequent Access (S3 Standard-IA) storage class 30 days after creation. Move all objects to the S3
Glacier Flexible Retrieval storage class after 90 days. Write an expiration action that directs Amazon S3 to delete objects after 90 days.
C. Transition objects to the S3 Glacier Flexible Retrieval storage class 30 days after creation. Write an expiration action that directs Amazon
S3 to delete objects after 90 days.
D. Transition objects to the S3 One Zone-Infrequent Access (S3 One Zone-IA) storage class 30 days after creation. Move all objects to the S3
Glacier Flexible Retrieval storage class after 90 days. Write an expiration action that directs Amazon S3 to delete objects after 90 days.
Correct Answer: B
Also, it says deletion after 90 days, so all answers specifying a transition after 90 days make no sense.
upvoted 6 times
Even with the early deletion fee, it appears to me that answer 'A' would still be cheaper.
upvoted 1 times
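As a sketch (not tied to a specific answer choice) of a lifecycle rule that meets the stated retention requirement, with a hypothetical bucket name: Standard for the first 30 days, an infrequent-access class until day 90, then expiration.

```python
import boto3

s3 = boto3.client("s3")

s3.put_bucket_lifecycle_configuration(
    Bucket="central-logs-bucket",
    LifecycleConfiguration={
        "Rules": [{
            "ID": "log-retention",
            "Status": "Enabled",
            "Filter": {"Prefix": ""},                      # all log objects
            "Transitions": [{"Days": 30, "StorageClass": "STANDARD_IA"}],
            "Expiration": {"Days": 90},                    # delete 90 days after creation
        }]
    },
)
```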
A company is building an Amazon Elastic Kubernetes Service (Amazon EKS) cluster for its workloads. All secrets that are stored in Amazon EKS
must be encrypted in the Kubernetes etcd key-value store.
A. Create a new AWS Key Management Service (AWS KMS) key. Use AWS Secrets Manager to manage, rotate, and store all secrets in Amazon
EKS.
B. Create a new AWS Key Management Service (AWS KMS) key. Enable Amazon EKS KMS secrets encryption on the Amazon EKS cluster.
C. Create the Amazon EKS cluster with default options. Use the Amazon Elastic Block Store (Amazon EBS) Container Storage Interface (CSI)
driver as an add-on.
D. Create a new AWS Key Management Service (AWS KMS) key with the alias/aws/ebs alias. Enable default Amazon Elastic Block Store
(Amazon EBS) volume encryption for the account.
Correct Answer: D
https://eksctl.io/usage/kms-encryption/
upvoted 2 times
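For reference, a hedged boto3 sketch of what option B's envelope encryption of Kubernetes secrets could look like at cluster creation (names, ARNs, and subnets are hypothetical):

```python
import boto3

eks = boto3.client("eks")

eks.create_cluster(
    name="secure-cluster",
    roleArn="arn:aws:iam::111122223333:role/eksClusterRole",
    resourcesVpcConfig={"subnetIds": ["subnet-0aaa1111", "subnet-0bbb2222"]},
    encryptionConfig=[{
        "resources": ["secrets"],   # envelope-encrypt Kubernetes secrets stored in etcd
        "provider": {"keyArn": "arn:aws:kms:us-east-1:111122223333:key/1234abcd-12ab-34cd-56ef-1234567890ab"},
    }],
)
```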
A company wants to provide data scientists with near real-time read-only access to the company's production Amazon RDS for PostgreSQL
database. The database is currently configured as a Single-AZ database. The data scientists use complex queries that will not affect the
production database. The company needs a solution that is highly available.
A. Scale the existing production database in a maintenance window to provide enough power for the data scientists.
B. Change the setup from a Single-AZ to a Multi-AZ instance deployment with a larger secondary standby instance. Provide the data scientists
access to the secondary instance.
C. Change the setup from a Single-AZ to a Multi-AZ instance deployment. Provide two additional read replicas for the data scientists.
D. Change the setup from a Single-AZ to a Multi-AZ cluster deployment with two readable standby instances. Provide read endpoints to the
data scientists.
Correct Answer: C
Data scientists need read-only access to near real-time production data without affecting performance.
High availability is required.
Cost should be minimized.
upvoted 1 times
Only a Multi-AZ DB cluster deployment has a reader endpoint. The secondary in a Multi-AZ instance deployment is a standby replica and cannot be accessed for reads.
upvoted 1 times
msdnpro 2 months ago
Selected Answer: D
Support for D:
Amazon RDS now offers Multi-AZ deployments with readable standby instances (also called Multi-AZ DB cluster deployments) in preview.
You should consider using Multi-AZ DB cluster deployments with two readable DB instances if you need additional read capacity in your
Amazon RDS Multi-AZ deployment and if your application workload has strict transaction latency requirements such as single-digit
milliseconds transactions.
https://aws.amazon.com/blogs/database/readable-standby-instances-in-amazon-rds-multi-az-deployments-a-new-high-availability-option/
upvoted 1 times
With read replicas, Amazon RDS uses the DB engine's asynchronous replication to update the read replica
whenever there is a change to the primary DB instance. https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/USER_ReadRepl.html
upvoted 1 times
Single-AZ and Multi-AZ deployments: Pricing is billed per DB instance-hour consumed from the time a DB instance is launched until it is
stopped or deleted.
https://aws.amazon.com/rds/postgresql/pricing/?pg=pr&loc=3
In the case of a cluster, you will pay less.
upvoted 2 times
A company runs a three-tier web application in the AWS Cloud that operates across three Availability Zones. The application architecture has an
Application Load Balancer, an Amazon EC2 web server that hosts user session states, and a MySQL database that runs on an EC2 instance. The
company expects sudden increases in application traffic. The company wants to be able to scale to meet future application capacity demands and
to ensure high availability across all three Availability Zones.
A. Migrate the MySQL database to Amazon RDS for MySQL with a Multi-AZ DB cluster deployment. Use Amazon ElastiCache for Redis with
high availability to store session data and to cache reads. Migrate the web server to an Auto Scaling group that is in three Availability Zones.
B. Migrate the MySQL database to Amazon RDS for MySQL with a Multi-AZ DB cluster deployment. Use Amazon ElastiCache for Memcached
with high availability to store session data and to cache reads. Migrate the web server to an Auto Scaling group that is in three Availability
Zones.
C. Migrate the MySQL database to Amazon DynamoDB Use DynamoDB Accelerator (DAX) to cache reads. Store the session data in
DynamoDB. Migrate the web server to an Auto Scaling group that is in three Availability Zones.
D. Migrate the MySQL database to Amazon RDS for MySQL in a single Availability Zone. Use Amazon ElastiCache for Redis with high
availability to store session data and to cache reads. Migrate the web server to an Auto Scaling group that is in three Availability Zones.
Correct Answer: B
RDS Multi-AZ provides high availability for MySQL by synchronously replicating data across AZs. Automatic failover handles AZ outages.
ElastiCache for Redis is better suited for session data caching than Memcached. Redis offers more advanced data structures and flexibility.
Auto scaling across 3 AZs provides high availability for the web tier
upvoted 1 times
A global video streaming company uses Amazon CloudFront as a content distribution network (CDN). The company wants to roll out content in a
phased manner across multiple countries. The company needs to ensure that viewers who are outside the countries to which the company rolls
out content are not able to view the content.
A. Add geographic restrictions to the content in CloudFront by using an allow list. Set up a custom error message.
B. Set up a new URL for restricted content. Authorize access by using a signed URL and cookies. Set up a custom error message.
C. Encrypt the data for the content that the company distributes. Set up a custom error message.
D. Create a new URL for restricted content. Set up a time-restricted access policy for signed URLs.
Correct Answer: A
A company wants to use the AWS Cloud to improve its on-premises disaster recovery (DR) configuration. The company's core production business
application uses Microsoft SQL Server Standard, which runs on a virtual machine (VM). The application has a recovery point objective (RPO) of 30
seconds or fewer and a recovery time objective (RTO) of 60 minutes. The DR solution needs to minimize costs wherever possible.
A. Configure a multi-site active/active setup between the on-premises server and AWS by using Microsoft SQL Server Enterprise with Always
On availability groups.
B. Configure a warm standby Amazon RDS for SQL Server database on AWS. Configure AWS Database Migration Service (AWS DMS) to use
change data capture (CDC).
C. Use AWS Elastic Disaster Recovery configured to replicate disk changes to AWS as a pilot light.
D. Use third-party backup software to capture backups every night. Store a secondary set of backups in Amazon S3.
Correct Answer: D
A company has an on-premises server that uses an Oracle database to process and store customer information. The company wants to use an
AWS database service to achieve higher availability and to improve application performance. The company also wants to offload reporting from its
primary database system.
Which solution will meet these requirements in the MOST operationally efficient way?
A. Use AWS Database Migration Service (AWS DMS) to create an Amazon RDS DB instance in multiple AWS Regions. Point the reporting
functions toward a separate DB instance from the primary DB instance.
B. Use Amazon RDS in a Single-AZ deployment to create an Oracle database. Create a read replica in the same zone as the primary DB
instance. Direct the reporting functions to the read replica.
C. Use Amazon RDS deployed in a Multi-AZ cluster deployment to create an Oracle database. Direct the reporting functions to use the reader
instance in the cluster deployment.
D. Use Amazon RDS deployed in a Multi-AZ instance deployment to create an Amazon Aurora database. Direct the reporting functions to the
reader instances.
Correct Answer: D
A and B are discarded.
The answer is between C and D.
D says to use Amazon RDS to create an Amazon Aurora database, which makes no sense.
C is the correct one: high availability with a Multi-AZ cluster deployment.
Also, point the reporting functions to the reader instance.
upvoted 9 times
https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/Concepts.RDS_Fea_Regions_DB-eng.Feature.MultiAZDBClusters.html
upvoted 8 times
RDS Multi-AZ has automatic failover between AZs. DMS and Aurora migrations (A, D) would incur more effort and downtime.
Single-AZ with a read replica (B) does not provide the AZ failover capability that Multi-AZ does.
upvoted 1 times
ukivanlamlpi 1 month, 3 weeks ago
Selected Answer: D
https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/USER_ReadRepl.html
upvoted 3 times
https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/create-multi-az-db-cluster.html
upvoted 3 times
A company wants to build a web application on AWS. Client access requests to the website are not predictable and can be idle for a long time.
Only customers who have paid a subscription fee can have the ability to sign in and use the web application.
Which combination of steps will meet these requirements MOST cost-effectively? (Choose three.)
A. Create an AWS Lambda function to retrieve user information from Amazon DynamoDB. Create an Amazon API Gateway endpoint to accept
RESTful APIs. Send the API calls to the Lambda function.
B. Create an Amazon Elastic Container Service (Amazon ECS) service behind an Application Load Balancer to retrieve user information from
Amazon RDS. Create an Amazon API Gateway endpoint to accept RESTful APIs. Send the API calls to the Lambda function.
E. Use AWS Amplify to serve the frontend web content with HTML, CSS, and JS. Use an integrated Amazon CloudFront configuration.
F. Use Amazon S3 static web hosting with PHP, CSS, and JS. Use Amazon CloudFront to serve the frontend web content.
Amazon S3 does not support server-side scripting such as PHP, JSP, or ASP.NET.
https://docs.aws.amazon.com/AmazonS3/latest/userguide/WebsiteHosting.html
Traffic can be idle for a long time = AWS Lambda
upvoted 1 times
https://docs.aws.amazon.com/sdk-for-php/v3/developer-guide/php_s3_code_examples.html
upvoted 1 times
Zox42 2 months, 3 weeks ago
Selected Answer: ACE
Answer is ACE
upvoted 1 times
https://docs.aws.amazon.com/AmazonS3/latest/userguide/WebsiteHosting.html
upvoted 1 times
Option D is incorrect because User pools are for authentication (identity verification) while Identity pools are for authorization (access
control).
Option F is wrong because S3 static website hosting only serves static files such as HTML, CSS, and client-side JS; it cannot run server-side code such as PHP.
upvoted 2 times
Identity pools are for authorization (access control). You can use identity pools to create unique identities for users and give them
access to other AWS services.
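A rough sketch of the serverless read path the ACE combination implies: an AWS Lambda function behind an API Gateway proxy integration that looks up user information in DynamoDB (the table name, key schema, and path parameter are assumptions):

```python
import json
import boto3

dynamodb = boto3.resource("dynamodb")
table = dynamodb.Table("Users")  # hypothetical table name

def lambda_handler(event, context):
    # With a Lambda proxy integration, API Gateway passes the path parameter in the event.
    user_id = event["pathParameters"]["userId"]
    response = table.get_item(Key={"userId": user_id})
    item = response.get("Item")
    if item is None:
        return {"statusCode": 404, "body": json.dumps({"message": "user not found"})}
    return {"statusCode": 200, "body": json.dumps(item, default=str)}
```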
A media company uses an Amazon CloudFront distribution to deliver content over the internet. The company wants only premium customers to
have access to the media streams and file content. The company stores all content in an Amazon S3 bucket. The company also delivers content
on demand to customers for a specific purpose, such as movie rentals or music downloads.
C. Use origin access control (OAC) to limit the access of non-premium customers.
Correct Answer: B
https://docs.aws.amazon.com/AmazonCloudFront/latest/DeveloperGuide/PrivateContent.html#:~:text=CloudFront%20signed%20URLs
upvoted 1 times
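A hedged sketch of generating a CloudFront signed URL for on-demand rentals, the mechanism the link above describes (the key pair ID, private key file, and distribution domain are placeholders):

```python
from datetime import datetime, timedelta, timezone

from botocore.signers import CloudFrontSigner
from cryptography.hazmat.primitives import hashes, serialization
from cryptography.hazmat.primitives.asymmetric import padding

KEY_PAIR_ID = "K2JCJMDEHXQW5F"        # hypothetical CloudFront public key ID
PRIVATE_KEY_FILE = "private_key.pem"  # private key matching the trusted public key

def rsa_signer(message: bytes) -> bytes:
    with open(PRIVATE_KEY_FILE, "rb") as f:
        key = serialization.load_pem_private_key(f.read(), password=None)
    return key.sign(message, padding.PKCS1v15(), hashes.SHA1())

signer = CloudFrontSigner(KEY_PAIR_ID, rsa_signer)

# URL expires 24 hours after it is issued (for example, a movie rental window).
signed_url = signer.generate_presigned_url(
    "https://d111111abcdef8.cloudfront.net/movies/rental.mp4",
    date_less_than=datetime.now(timezone.utc) + timedelta(hours=24),
)
print(signed_url)
```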
A company runs Amazon EC2 instances in multiple AWS accounts that are individually billed. The company recently purchased a Savings Plan.
Because of changes in the company’s business requirements, the company has decommissioned a large number of EC2 instances. The company
wants to use its Savings Plan discounts on its other AWS accounts.
A. From the AWS Account Management Console of the management account, turn on discount sharing from the billing preferences section.
B. From the AWS Account Management Console of the account that purchased the existing Savings Plan, turn on discount sharing from the
billing preferences section. Include all accounts.
C. From the AWS Organizations management account, use AWS Resource Access Manager (AWS RAM) to share the Savings Plan with other
accounts.
D. Create an organization in AWS Organizations in a new payer account. Invite the other AWS accounts to join the organization from the
management account.
E. Create an organization in AWS Organizations in the existing AWS account with the existing EC2 instances and Savings Plan. Invite the other
AWS accounts to join the organization from the management account.
Correct Answer: AE
https://docs.aws.amazon.com/organizations/latest/userguide/orgs_best-practices_mgmt-acct.html
upvoted 1 times
A. "turn on discount sharing" is ok. This case: Has discount for many EC2 instances in one account, then want to share with other user. At
E, create Organization, then share.
upvoted 1 times
Aigerim2010 2 months, 3 weeks ago
i had this question today
upvoted 4 times
A retail company uses a regional Amazon API Gateway API for its public REST APIs. The API Gateway endpoint is a custom domain name that
points to an Amazon Route 53 alias record. A solutions architect needs to create a solution that has minimal effects on customers and minimal
data loss to release the new version of APIs.
A. Create a canary release deployment stage for API Gateway. Deploy the latest API version. Point an appropriate percentage of traffic to the
canary stage. After API verification, promote the canary stage to the production stage.
B. Create a new API Gateway endpoint with a new version of the API in OpenAPI YAML file format. Use the import-to-update operation in
merge mode into the API in API Gateway. Deploy the new version of the API to the production stage.
C. Create a new API Gateway endpoint with a new version of the API in OpenAPI JSON file format. Use the import-to-update operation in
overwrite mode into the API in API Gateway. Deploy the new version of the API to the production stage.
D. Create a new API Gateway endpoint with new versions of the API definitions. Create a custom domain name for the new API Gateway API.
Point the Route 53 alias record to the new API Gateway API custom domain name.
Correct Answer: A
Canary release is a software development strategy in which a "new version of an API" (as well as other software) is deployed for testing
purposes.
upvoted 2 times
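A minimal boto3 sketch of the canary release pattern described above: deploy the new version as a canary that takes a small slice of traffic, then promote it (the API ID, stage name, and the promotion patch operations are assumptions about one common pattern):

```python
import boto3

apigw = boto3.client("apigateway")
REST_API_ID = "a1b2c3d4e5"   # hypothetical API ID
STAGE = "prod"

# Deploy the new API version as a canary that receives 10% of production traffic.
deployment = apigw.create_deployment(
    restApiId=REST_API_ID,
    stageName=STAGE,
    canarySettings={"percentTraffic": 10.0, "useStageCache": False},
)

# After verification, promote the canary: make its deployment the stage deployment
# and remove the canary settings.
apigw.update_stage(
    restApiId=REST_API_ID,
    stageName=STAGE,
    patchOperations=[
        {"op": "replace", "path": "/deploymentId", "value": deployment["id"]},
        {"op": "remove", "path": "/canarySettings"},
    ],
)
```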
A company wants to direct its users to a backup static error page if the company's primary website is unavailable. The primary website's DNS
records are hosted in Amazon Route 53. The domain is pointing to an Application Load Balancer (ALB). The company needs a solution that
minimizes changes and infrastructure overhead.
A. Update the Route 53 records to use a latency routing policy. Add a static error page that is hosted in an Amazon S3 bucket to the records so
that the traffic is sent to the most responsive endpoints.
B. Set up a Route 53 active-passive failover configuration. Direct traffic to a static error page that is hosted in an Amazon S3 bucket when
Route 53 health checks determine that the ALB endpoint is unhealthy.
C. Set up a Route 53 active-active configuration with the ALB and an Amazon EC2 instance that hosts a static error page as endpoints.
Configure Route 53 to send requests to the instance only if the health checks fail for the ALB.
D. Update the Route 53 records to use a multivalue answer routing policy. Create a health check. Direct traffic to the website if the health
check passes. Direct traffic to a static error page that is hosted in Amazon S3 if the health check does not pass.
Correct Answer: B
Route 53 health checks can monitor the ALB health. If the ALB becomes unhealthy, traffic will automatically failover to the S3 static
website. This provides automatic failover with minimal configuration changes
upvoted 1 times
https://repost.aws/knowledge-center/fail-over-s3-r53
upvoted 1 times
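A hedged boto3 sketch of the active-passive failover records: the primary alias points at the ALB with target health evaluation, and the secondary alias points at the S3 static website endpoint hosting the error page. The hosted zone IDs shown are the published us-east-1 values for ALBs and S3 website endpoints; the domain, zone ID, and ALB DNS name are placeholders, and the S3 bucket name must match the record name for website hosting to work:

```python
import boto3

route53 = boto3.client("route53")
HOSTED_ZONE_ID = "Z0123456789ABC"      # hypothetical hosted zone for example.com

changes = [
    {   # Primary: alias to the ALB; EvaluateTargetHealth acts as the health check.
        "Action": "UPSERT",
        "ResourceRecordSet": {
            "Name": "www.example.com",
            "Type": "A",
            "SetIdentifier": "primary-alb",
            "Failover": "PRIMARY",
            "AliasTarget": {
                "HostedZoneId": "Z35SXDOTRQ7X7K",   # ALB hosted zone ID for us-east-1
                "DNSName": "my-alb-1234567890.us-east-1.elb.amazonaws.com",
                "EvaluateTargetHealth": True,
            },
        },
    },
    {   # Secondary: alias to the S3 static website bucket with the error page.
        "Action": "UPSERT",
        "ResourceRecordSet": {
            "Name": "www.example.com",
            "Type": "A",
            "SetIdentifier": "secondary-s3",
            "Failover": "SECONDARY",
            "AliasTarget": {
                "HostedZoneId": "Z3AQBSTGFYJSTF",   # S3 website endpoint zone ID for us-east-1
                "DNSName": "s3-website-us-east-1.amazonaws.com",
                "EvaluateTargetHealth": False,
            },
        },
    },
]

route53.change_resource_record_sets(
    HostedZoneId=HOSTED_ZONE_ID,
    ChangeBatch={"Comment": "Active-passive failover to S3 error page", "Changes": changes},
)
```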
Question #546 Topic 1
A recent analysis of a company's IT expenses highlights the need to reduce backup costs. The company's chief information officer wants to
simplify the on-premises backup infrastructure and reduce costs by eliminating the use of physical backup tapes. The company must preserve the
existing investment in the on-premises backup applications and workflows.
A. Set up AWS Storage Gateway to connect with the backup applications using the NFS interface.
B. Set up an Amazon EFS file system that connects with the backup applications using the NFS interface.
C. Set up an Amazon EFS file system that connects with the backup applications using the iSCSI interface.
D. Set up AWS Storage Gateway to connect with the backup applications using the iSCSI-virtual tape library (VTL) interface.
Correct Answer: D
https://aws.amazon.com/storagegateway/vtl/?nc1=h_ls
upvoted 1 times
Question #547 Topic 1
A company has data collection sensors at different locations. The data collection sensors stream a high volume of data to the company. The
company wants to design a platform on AWS to ingest and process high-volume streaming data. The solution must be scalable and support data
collection in near real time. The company must store the data in Amazon S3 for future reporting.
Which solution will meet these requirements with the LEAST operational overhead?
A. Use Amazon Kinesis Data Firehose to deliver streaming data to Amazon S3.
C. Use AWS Lambda to deliver streaming data and store the data to Amazon S3.
D. Use AWS Database Migration Service (AWS DMS) to deliver streaming data to Amazon S3.
Correct Answer: A
A company has separate AWS accounts for its finance, data analytics, and development departments. Because of costs and security concerns, the
company wants to control which services each AWS account can use.
Which solution will meet these requirements with the LEAST operational overhead?
A. Use AWS Systems Manager templates to control which AWS services each department can use.
B. Create organization units (OUs) for each department in AWS Organizations. Attach service control policies (SCPs) to the OUs.
C. Use AWS CloudFormation to automatically provision only the AWS services that each department can use.
D. Set up a list of products in AWS Service Catalog in the AWS accounts to manage and control the usage of specific AWS services.
Correct Answer: B
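A minimal sketch of the SCP approach in option B using boto3 (the OU ID and the allowed service list are illustrative assumptions):

```python
import json
import boto3

org = boto3.client("organizations")

# Deny everything except the services this OU is allowed to use (service list is illustrative).
scp = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Deny",
        "NotAction": ["s3:*", "athena:*", "quicksight:*", "cloudwatch:*"],
        "Resource": "*",
    }],
}

policy = org.create_policy(
    Name="finance-allowed-services",
    Description="Limit the finance OU to approved services",
    Type="SERVICE_CONTROL_POLICY",
    Content=json.dumps(scp),
)

# Attach the SCP to the finance OU (OU ID is hypothetical).
org.attach_policy(PolicyId=policy["Policy"]["PolicySummary"]["Id"], TargetId="ou-abcd-11111111")
```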
A company has created a multi-tier application for its ecommerce website. The website uses an Application Load Balancer that resides in the
public subnets, a web tier in the public subnets, and a MySQL cluster hosted on Amazon EC2 instances in the private subnets. The MySQL
database needs to retrieve product catalog and pricing information that is hosted on the internet by a third-party provider. A solutions architect
must devise a strategy that maximizes security without increasing operational overhead.
A. Deploy a NAT instance in the VPC. Route all the internet-based traffic through the NAT instance.
B. Deploy a NAT gateway in the public subnets. Modify the private subnet route table to direct all internet-bound traffic to the NAT gateway.
C. Configure an internet gateway and attach it to the VPC. Modify the private subnet route table to direct internet-bound traffic to the internet
gateway.
D. Configure a virtual private gateway and attach it to the VPC. Modify the private subnet route table to direct internet-bound traffic to the
virtual private gateway.
Correct Answer: B
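A short boto3 sketch of option B: create a NAT gateway in a public subnet and point the private subnets' default route at it (subnet and route table IDs are placeholders):

```python
import boto3

ec2 = boto3.client("ec2")

# Allocate an Elastic IP and create the NAT gateway in a public subnet (IDs are hypothetical).
eip = ec2.allocate_address(Domain="vpc")
nat = ec2.create_nat_gateway(SubnetId="subnet-public-1a", AllocationId=eip["AllocationId"])
nat_id = nat["NatGateway"]["NatGatewayId"]

ec2.get_waiter("nat_gateway_available").wait(NatGatewayIds=[nat_id])

# Send all internet-bound traffic from the private subnets through the NAT gateway.
ec2.create_route(
    RouteTableId="rtb-private",
    DestinationCidrBlock="0.0.0.0/0",
    NatGatewayId=nat_id,
)
```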
A company is using AWS Key Management Service (AWS KMS) keys to encrypt AWS Lambda environment variables. A solutions architect needs to
ensure that the required permissions are in place to decrypt and use the environment variables.
Which steps must the solutions architect take to implement the correct permissions? (Choose two.)
D. Allow the Lambda execution role in the AWS KMS key policy.
E. Allow the Lambda resource policy in the AWS KMS key policy.
Correct Answer: BD
The Lambda execution role needs kms:Decrypt and kms:GenerateDataKey permissions added. The execution role governs what AWS
services the function code can access.
The KMS key policy needs to allow the Lambda execution role to have kms:Decrypt and kms:GenerateDataKey permissions for that specific
key. This allows the execution role to use that particular key.
upvoted 1 times
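The explanation above covers both halves: the execution role's IAM policy and the KMS key policy. As a minimal sketch of the key-policy side (account ID, role name, and key ID are placeholders), note that put_key_policy replaces the whole policy, so the administrator statement must be kept:

```python
import json
import boto3

kms = boto3.client("kms")

ACCOUNT_ID = "111122223333"                                            # hypothetical
LAMBDA_ROLE_ARN = f"arn:aws:iam::{ACCOUNT_ID}:role/my-function-role"   # hypothetical execution role
KEY_ID = "1234abcd-12ab-34cd-56ef-1234567890ab"                        # hypothetical KMS key ID

key_policy = {
    "Version": "2012-10-17",
    "Statement": [
        {   # Keep account administrators able to manage the key.
            "Sid": "EnableIAMPolicies",
            "Effect": "Allow",
            "Principal": {"AWS": f"arn:aws:iam::{ACCOUNT_ID}:root"},
            "Action": "kms:*",
            "Resource": "*",
        },
        {   # Allow the Lambda execution role to decrypt the environment variables.
            "Sid": "AllowLambdaExecutionRoleDecrypt",
            "Effect": "Allow",
            "Principal": {"AWS": LAMBDA_ROLE_ARN},
            "Action": ["kms:Decrypt"],
            "Resource": "*",
        },
    ],
}

kms.put_key_policy(KeyId=KEY_ID, PolicyName="default", Policy=json.dumps(key_policy))
```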
A company has a financial application that produces reports. The reports average 50 KB in size and are stored in Amazon S3. The reports are
frequently accessed during the first week after production and must be stored for several years. The reports must be retrievable within 6 hours.
A. Use S3 Standard. Use an S3 Lifecycle rule to transition the reports to S3 Glacier after 7 days.
B. Use S3 Standard. Use an S3 Lifecycle rule to transition the reports to S3 Standard-Infrequent Access (S3 Standard-IA) after 7 days.
C. Use S3 Intelligent-Tiering. Configure S3 Intelligent-Tiering to transition the reports to S3 Standard-Infrequent Access (S3 Standard-IA) and
S3 Glacier.
D. Use S3 Standard. Use an S3 Lifecycle rule to transition the reports to S3 Glacier Deep Archive after 7 days.
Correct Answer: B
D is incorrect because S3 Glacier Deep Archive needs a minimum of 12 hours to retrieve files.
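A short boto3 sketch of the lifecycle rule in answer A; S3 Glacier Flexible Retrieval (storage class GLACIER) keeps standard retrievals in the 3-5 hour range, inside the 6-hour requirement (bucket name and prefix are placeholders):

```python
import boto3

s3 = boto3.client("s3")

# Transition reports to S3 Glacier Flexible Retrieval 7 days after creation.
s3.put_bucket_lifecycle_configuration(
    Bucket="financial-reports-bucket",
    LifecycleConfiguration={
        "Rules": [{
            "ID": "archive-reports-after-7-days",
            "Status": "Enabled",
            "Filter": {"Prefix": "reports/"},
            "Transitions": [{"Days": 7, "StorageClass": "GLACIER"}],
        }]
    },
)
```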
A company needs to optimize the cost of its Amazon EC2 instances. The company also needs to change the type and family of its EC2 instances
every 2-3 months.
D. Purchase an All Upfront EC2 Instance Savings Plan for a 1-year term.
Correct Answer: D
The company needs flexibility to change EC2 instance types and families every 2-3 months. This rules out Reserved Instances which lock
you into an instance type and family for 1-3 years.
A Compute Savings Plan allows switching instance types and families freely within the term as needed. No Upfront is more flexible than All
Upfront.
A 1-year term balances commitment and flexibility better than a 3-year term given the company's changing needs.
With No Upfront, the company only pays for usage monthly without an upfront payment. This optimizes cost.
upvoted 4 times
A solutions architect needs to review a company's Amazon S3 buckets to discover personally identifiable information (PII). The company stores
the PII data in the us-east-1 Region and us-west-2 Region.
Which solution will meet these requirements with the LEAST operational overhead?
A. Configure Amazon Macie in each Region. Create a job to analyze the data that is in Amazon S3.
B. Configure AWS Security Hub for all Regions. Create an AWS Config rule to analyze the data that is in Amazon S3.
Correct Answer: A
Amazon Macie is designed specifically for discovering and classifying sensitive data like PII in S3. This makes it the optimal service to use.
Macie can be enabled directly in the required Regions rather than enabling it across all Regions which is unnecessary. This minimizes
overhead.
Macie can be set up to automatically scan the specified S3 buckets on a schedule. No need to create separate jobs.
Security Hub is for security monitoring across AWS accounts, not specific for PII discovery. More overhead than needed.
Inspector and GuardDuty are not built for PII discovery in S3 buckets. They provide broader security capabilities.
upvoted 3 times
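A hedged boto3 sketch of enabling Macie and running a one-time classification job in each of the two Regions discussed above (account ID and bucket names are placeholders):

```python
import boto3
from botocore.exceptions import ClientError

# Macie is configured per Region; repeat for us-east-1 and us-west-2.
for region in ["us-east-1", "us-west-2"]:
    macie = boto3.client("macie2", region_name=region)

    try:
        macie.enable_macie()
    except ClientError:
        pass  # Macie is already enabled in this Region

    # One-time sensitive data discovery job over the buckets in this Region.
    macie.create_classification_job(
        jobType="ONE_TIME",
        name=f"pii-scan-{region}",
        s3JobDefinition={
            "bucketDefinitions": [
                {"accountId": "111122223333", "buckets": [f"customer-data-{region}"]}
            ]
        },
    )
```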
A company's SAP application has a backend SQL Server database in an on-premises environment. The company wants to migrate its on-premises
application and database server to AWS. The company needs an instance type that meets the high demands of its SAP database. On-premises
performance data shows that both the SAP application and the database have high memory utilization.
A. Use the compute optimized instance family for the application. Use the memory optimized instance family for the database.
B. Use the storage optimized instance family for both the application and the database.
C. Use the memory optimized instance family for both the application and the database.
D. Use the high performance computing (HPC) optimized instance family for the application. Use the memory optimized instance family for
the database.
Correct Answer: C
A company runs an application in a VPC with public and private subnets. The VPC extends across multiple Availability Zones. The application runs
on Amazon EC2 instances in private subnets. The application uses an Amazon Simple Queue Service (Amazon SQS) queue.
A solutions architect needs to design a secure solution to establish a connection between the EC2 instances and the SQS queue.
A. Implement an interface VPC endpoint for Amazon SQS. Configure the endpoint to use the private subnets. Add to the endpoint a security
group that has an inbound access rule that allows traffic from the EC2 instances that are in the private subnets.
B. Implement an interface VPC endpoint for Amazon SQS. Configure the endpoint to use the public subnets. Attach to the interface endpoint a
VPC endpoint policy that allows access from the EC2 instances that are in the private subnets.
C. Implement an interface VPC endpoint for Amazon SQS. Configure the endpoint to use the public subnets. Attach an Amazon SQS access
policy to the interface VPC endpoint that allows requests from only a specified VPC endpoint.
D. Implement a gateway endpoint for Amazon SQS. Add a NAT gateway to the private subnets. Attach an IAM role to the EC2 instances that
allows access to the SQS queue.
Correct Answer: A
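A minimal boto3 sketch of option A: an interface VPC endpoint for SQS in the private subnets, guarded by a security group that only admits the application instances (all IDs are placeholders):

```python
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# Interface endpoint for SQS in the private subnets, protected by a security group
# that only allows HTTPS from the EC2 instances.
ec2.create_vpc_endpoint(
    VpcEndpointType="Interface",
    VpcId="vpc-0abc1234",
    ServiceName="com.amazonaws.us-east-1.sqs",
    SubnetIds=["subnet-private-1a", "subnet-private-1b"],
    SecurityGroupIds=["sg-sqs-endpoint"],
    PrivateDnsEnabled=True,
)
```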
A solutions architect is using an AWS CloudFormation template to deploy a three-tier web application. The web application consists of a web tier
and an application tier that stores and retrieves user data in Amazon DynamoDB tables. The web and application tiers are hosted on Amazon EC2
instances, and the database tier is not publicly accessible. The application EC2 instances need to access the DynamoDB tables without exposing
API credentials in the template.
A. Create an IAM role to read the DynamoDB tables. Associate the role with the application instances by referencing an instance profile.
B. Create an IAM role that has the required permissions to read and write from the DynamoDB tables. Add the role to the EC2 instance profile,
and associate the instance profile with the application instances.
C. Use the parameter section in the AWS CloudFormation template to have the user input access and secret keys from an already-created IAM
user that has the required permissions to read and write from the DynamoDB tables.
D. Create an IAM user in the AWS CloudFormation template that has the required permissions to read and write from the DynamoDB tables.
Use the GetAtt function to retrieve the access and secret keys, and pass them to the application instances through the user data.
Correct Answer: B
A solutions architect manages an analytics application. The application stores large amounts of semistructured data in an Amazon S3 bucket. The
solutions architect wants to use parallel data processing to process the data more quickly. The solutions architect also wants to use information
that is stored in an Amazon Redshift database to enrich the data.
A. Use Amazon Athena to process the S3 data. Use AWS Glue with the Amazon Redshift data to enrich the S3 data.
B. Use Amazon EMR to process the S3 data. Use Amazon EMR with the Amazon Redshift data to enrich the S3 data.
C. Use Amazon EMR to process the S3 data. Use Amazon Kinesis Data Streams to move the S3 data into Amazon Redshift so that the data
can be enriched.
D. Use AWS Glue to process the S3 data. Use AWS Lake Formation with the Amazon Redshift data to enrich the S3 data.
Correct Answer: D
Use Amazon EMR to process the semi-structured data in Amazon S3. EMR provides a managed Hadoop framework optimized for
processing large datasets in S3.
EMR supports parallel data processing across multiple nodes to speed up the processing.
EMR can integrate directly with Amazon Redshift using the EMR-Redshift integration. This allows querying the Redshift data from EMR and
joining it with the S3 data.
This enables enriching the semi-structured S3 data with the information stored in Redshift
upvoted 3 times
A company has two VPCs that are located in the us-west-2 Region within the same AWS account. The company needs to allow network traffic
between these VPCs. Approximately 500 GB of data transfer will occur between the VPCs each month.
A. Implement AWS Transit Gateway to connect the VPCs. Update the route tables of each VPC to use the transit gateway for inter-VPC
communication.
B. Implement an AWS Site-to-Site VPN tunnel between the VPCs. Update the route tables of each VPC to use the VPN tunnel for inter-VPC
communication.
C. Set up a VPC peering connection between the VPCs. Update the route tables of each VPC to use the VPC peering connection for inter-VPC
communication.
D. Set up a 1 GB AWS Direct Connect connection between the VPCs. Update the route tables of each VPC to use the Direct Connect connection
for inter-VPC communication.
Correct Answer: C
VPC peering provides private connectivity between VPCs without using public IP space.
Data transferred between peered VPCs is free as long as they are in the same region.
500 GB/month inter-VPC data transfer fits within peering free tier.
Transit Gateway (Option A) incurs hourly charges plus data transfer fees. More costly than peering.
Site-to-Site VPN (Option B) incurs hourly charges and data transfer fees. More expensive than peering.
Direct Connect (Option D) has high hourly charges and would be overkill for this use case.
upvoted 2 times
VPC peering is the most cost-effective way to connect two VPCs within the same region and AWS account. There are no additional charges
for VPC peering beyond standard data transfer rates.
Transit Gateway and VPN add additional hourly and data processing charges that are not necessary for simple VPC peering.
Direct Connect provides dedicated network connectivity, but is overkill for the relatively low inter-VPC data transfer needs described here.
It has high fixed costs plus data transfer rates.
For occasional inter-VPC communication of moderate data volumes within the same region and account, VPC peering is the most cost-
effective solution. It provides simple private connectivity without transfer charges or network appliances.
upvoted 2 times
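A short boto3 sketch of the peering setup: request and accept the connection, then add a route in each VPC toward the other VPC's CIDR (VPC IDs, route table IDs, and CIDRs are placeholders):

```python
import boto3

ec2 = boto3.client("ec2", region_name="us-west-2")

# Request and accept peering between the two VPCs in the same account and Region.
peering = ec2.create_vpc_peering_connection(VpcId="vpc-aaaa1111", PeerVpcId="vpc-bbbb2222")
pcx_id = peering["VpcPeeringConnection"]["VpcPeeringConnectionId"]
ec2.accept_vpc_peering_connection(VpcPeeringConnectionId=pcx_id)

# Each VPC needs a route to the other VPC's CIDR through the peering connection.
ec2.create_route(RouteTableId="rtb-vpc-a", DestinationCidrBlock="10.1.0.0/16", VpcPeeringConnectionId=pcx_id)
ec2.create_route(RouteTableId="rtb-vpc-b", DestinationCidrBlock="10.0.0.0/16", VpcPeeringConnectionId=pcx_id)
```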
Question #559 Topic 1
A company hosts multiple applications on AWS for different product lines. The applications use different compute resources, including Amazon
EC2 instances and Application Load Balancers. The applications run in different AWS accounts under the same organization in AWS Organizations
across multiple AWS Regions. Teams for each product line have tagged each compute resource in the individual accounts.
The company wants more details about the cost for each product line from the consolidated billing feature in Organizations.
Correct Answer: BE
User-defined tags were created by each product team to identify resources. Selecting the relevant tag in the Billing console will group
costs.
The tag must be activated from the Organizations management account to consolidate billing across all accounts.
AWS generated tags are predefined by AWS and won't align to product lines.
Resource Groups (Option C) helps manage resources but not billing.
Activating the tag from each account (Option D) is not needed since Organizations centralizes billing.
upvoted 2 times
A company's solutions architect is designing an AWS multi-account solution that uses AWS Organizations. The solutions architect has organized
the company's accounts into organizational units (OUs).
The solutions architect needs a solution that will identify any changes to the OU hierarchy. The solution also needs to notify the company's
operations team of any changes.
Which solution will meet these requirements with the LEAST operational overhead?
A. Provision the AWS accounts by using AWS Control Tower. Use account drift notifications to identify the changes to the OU hierarchy.
B. Provision the AWS accounts by using AWS Control Tower. Use AWS Config aggregated rules to identify the changes to the OU hierarchy.
C. Use AWS Service Catalog to create accounts in Organizations. Use an AWS CloudTrail organization trail to identify the changes to the OU
hierarchy.
D. Use AWS CloudFormation templates to create accounts in Organizations. Use the drift detection operation on a stack to identify the
changes to the OU hierarchy.
Correct Answer: A
https://docs.aws.amazon.com/controltower/latest/userguide/what-is-control-tower.html
https://docs.aws.amazon.com/controltower/latest/userguide/prevention-and-notification.html
upvoted 5 times
A company's website handles millions of requests each day, and the number of requests continues to increase. A solutions architect needs to
improve the response time of the web application. The solutions architect determines that the application needs to decrease latency when
retrieving product details from the Amazon DynamoDB table.
Which solution will meet these requirements with the LEAST amount of operational overhead?
A. Set up a DynamoDB Accelerator (DAX) cluster. Route all read requests through DAX.
B. Set up Amazon ElastiCache for Redis between the DynamoDB table and the web application. Route all read requests through Redis.
C. Set up Amazon ElastiCache for Memcached between the DynamoDB table and the web application. Route all read requests through
Memcached.
D. Set up Amazon DynamoDB Streams on the table, and have AWS Lambda read from the table and populate Amazon ElastiCache. Route all
read requests through ElastiCache.
Correct Answer: A
DAX provides a DynamoDB-compatible caching layer to reduce read latency. It is purpose-built for accelerating DynamoDB workloads.
Using DAX requires minimal application changes - only read requests are routed through it.
DAX handles caching logic automatically without needing complex integration code.
ElastiCache Redis/Memcached (Options B/C) require more integration work to sync DynamoDB data.
Using Lambda and Streams to populate ElastiCache (Option D) is a complex event-driven approach requiring ongoing maintenance.
DAX plugs in seamlessly to accelerate DynamoDB with very little operational overhead
upvoted 1 times
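A hedged boto3 sketch of standing up the DAX cluster described above (cluster name, node type, IAM role, subnet group, and security group are placeholder assumptions):

```python
import boto3

dax = boto3.client("dax")

# Three-node DAX cluster in front of the product-details table.
dax.create_cluster(
    ClusterName="product-cache",
    NodeType="dax.r5.large",
    ReplicationFactor=3,
    IamRoleArn="arn:aws:iam::111122223333:role/DAXServiceRole",
    SubnetGroupName="dax-private-subnets",
    SecurityGroupIds=["sg-dax"],
)
```

The application then points its DynamoDB calls at the DAX cluster endpoint through the DAX SDK client; cache hits are served in microseconds and misses fall through to DynamoDB automatically.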
A solutions architect needs to ensure that API calls to Amazon DynamoDB from Amazon EC2 instances in a VPC do not travel across the internet.
Which combination of steps should the solutions architect take to meet this requirement? (Choose two.)
D. Create an elastic network interface for the endpoint in each of the subnets of the VPC.
E. Create a security group entry in the endpoint's security group to provide access.
Correct Answer: AB
Implement an interface VPC endpoint for Amazon SQS. Configure the endpoint to use the private subnets. Add to the endpoint a security
group that has an inbound access rule that allows traffic from the EC2 instances that are in the private subnets.
upvoted 1 times
A gateway endpoint for DynamoDB enables private connectivity between DynamoDB and the VPC. This allows EC2 instances to access
DynamoDB APIs without traversing the internet.
A security group entry is needed to allow the EC2 instances access to the DynamoDB endpoint over the VPC.
Gateway endpoints are used for Amazon S3 and DynamoDB; interface endpoints are used for most other services, such as Systems Manager.
Route table entries route traffic within a VPC but do not affect external connectivity.
Elastic network interfaces are not needed for gateway endpoints.
upvoted 3 times
C, D, and E apply to interface endpoints for other AWS services, but for S3 and DynamoDB we use a gateway VPC endpoint.
upvoted 2 times
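For contrast with the SQS interface endpoint earlier, a minimal boto3 sketch of the gateway endpoint for DynamoDB described above (VPC and route table IDs are placeholders):

```python
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# Gateway endpoint for DynamoDB; traffic is steered through the VPC's route tables,
# so no elastic network interfaces or NAT gateway are involved.
ec2.create_vpc_endpoint(
    VpcEndpointType="Gateway",
    VpcId="vpc-0abc1234",
    ServiceName="com.amazonaws.us-east-1.dynamodb",
    RouteTableIds=["rtb-private-1a", "rtb-private-1b"],
)
```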
A company runs its applications on both Amazon Elastic Kubernetes Service (Amazon EKS) clusters and on-premises Kubernetes clusters. The
company wants to view all clusters and workloads from a central location.
Which solution will meet these requirements with the LEAST operational overhead?
A. Use Amazon CloudWatch Container Insights to collect and group the cluster information.
B. Use Amazon EKS Connector to register and connect all Kubernetes clusters.
C. Use AWS Systems Manager to collect and view the cluster information.
D. Use Amazon EKS Anywhere as the primary cluster to view the other clusters with native Kubernetes commands.
Correct Answer: B
You can use Amazon EKS Connector to register and connect any conformant Kubernetes cluster to AWS and visualize it in the Amazon EKS
console. After a cluster is connected, you can see the status, configuration, and workloads for that cluster in the Amazon EKS console. You
can use this feature to view connected clusters in Amazon EKS console, but you can't manage them. The Amazon EKS Connector requires
an agent that is an open source project on Github. For additional technical content, including frequently asked questions and
troubleshooting, see Troubleshooting issues in Amazon EKS Connector
The Amazon EKS Connector can connect any conformant Kubernetes cluster to Amazon EKS.
EKS Connector allows registering external Kubernetes clusters (on-premises and otherwise) with Amazon EKS
This provides a unified view and management of all clusters within the EKS console.
EKS Connector handles keeping resources in sync across connected clusters.
This centralized approach minimizes operational overhead compared to using separate tools.
CloudWatch Container Insights (Option A) only provides metrics and logs, not cluster management.
Systems Manager (Option C) is more general purpose and does not natively integrate with EKS.
EKS Anywhere (Option D) would not provide a single pane of glass for external clusters.
upvoted 2 times
https://docs.aws.amazon.com/eks/latest/userguide/eks-connector.html
upvoted 3 times
Question #564 Topic 1
A company is building an ecommerce application and needs to store sensitive customer information. The company needs to give customers the
ability to complete purchase transactions on the website. The company also needs to ensure that sensitive customer data is protected, even from
database administrators.
A. Store sensitive data in an Amazon Elastic Block Store (Amazon EBS) volume. Use EBS encryption to encrypt the data. Use an IAM instance
role to restrict access.
B. Store sensitive data in Amazon RDS for MySQL. Use AWS Key Management Service (AWS KMS) client-side encryption to encrypt the data.
C. Store sensitive data in Amazon S3. Use AWS Key Management Service (AWS KMS) server-side encryption to encrypt the data. Use S3
bucket policies to restrict access.
D. Store sensitive data in Amazon FSx for Windows Server. Mount the file share on application servers. Use Windows file permissions to
restrict access.
Correct Answer: B
RDS MySQL provides a fully managed database service well suited for an ecommerce application.
AWS KMS client-side encryption allows encrypting sensitive data before it hits the database. The data remains encrypted at rest.
This protects sensitive customer data from database admins and privileged users.
EBS encryption (Option A) protects data at rest but not in use. IAM roles don't prevent admin access.
S3 (Option C) encrypts data at rest on the server side. Bucket policies don't restrict admin access.
FSx file permissions (Option D) don't prevent admin access to unencrypted data.
upvoted 2 times
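A simplified sketch of the client-side encryption idea in answer B: the application encrypts a field with a KMS key before writing it to RDS, so database administrators only ever see ciphertext. Direct kms.encrypt is limited to 4 KB; real applications typically use envelope encryption or the AWS Encryption SDK (the key alias is a placeholder):

```python
import boto3

kms = boto3.client("kms")
KEY_ID = "alias/customer-data"   # hypothetical customer-managed key

def encrypt_field(plaintext: str) -> bytes:
    """Encrypt a sensitive value before it is written to the database."""
    response = kms.encrypt(KeyId=KEY_ID, Plaintext=plaintext.encode("utf-8"))
    return response["CiphertextBlob"]   # store this blob in RDS, not the plaintext

def decrypt_field(ciphertext: bytes) -> str:
    """Decrypt a value after reading it back; only principals allowed by the key policy can do this."""
    response = kms.decrypt(CiphertextBlob=ciphertext)
    return response["Plaintext"].decode("utf-8")
```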
A company has an on-premises MySQL database that handles transactional data. The company is migrating the database to the AWS Cloud. The
migrated database must maintain compatibility with the company's applications that use the database. The migrated database also must scale
automatically during periods of increased demand.
A. Use native MySQL tools to migrate the database to Amazon RDS for MySQL. Configure elastic storage scaling.
B. Migrate the database to Amazon Redshift by using the mysqldump utility. Turn on Auto Scaling for the Amazon Redshift cluster.
C. Use AWS Database Migration Service (AWS DMS) to migrate the database to Amazon Aurora. Turn on Aurora Auto Scaling.
D. Use AWS Database Migration Service (AWS DMS) to migrate the database to Amazon DynamoDB. Configure an Auto Scaling policy.
Correct Answer: C
DMS provides an easy migration path from MySQL to Aurora while minimizing downtime.
Aurora is a MySQL-compatible relational database service that will maintain compatibility with the company's applications.
Aurora Auto Scaling allows the database to automatically scale up and down based on demand to handle increased workloads.
RDS MySQL (Option A) does not scale as well as the Aurora architecture.
Redshift (Option B) is for analytics, not transactional data, and may not be compatible.
DynamoDB (Option D) is a NoSQL datastore and lacks MySQL compatibility.
upvoted 3 times
A company runs multiple Amazon EC2 Linux instances in a VPC across two Availability Zones. The instances host applications that use a
hierarchical directory structure. The applications need to read and write rapidly and concurrently to shared storage.
A. Create an Amazon S3 bucket. Allow access from all the EC2 instances in the VPC.
B. Create an Amazon Elastic File System (Amazon EFS) file system. Mount the EFS file system from each EC2 instance.
C. Create a file system on a Provisioned IOPS SSD (io2) Amazon Elastic Block Store (Amazon EBS) volume. Attach the EBS volume to all the
EC2 instances.
D. Create file systems on Amazon Elastic Block Store (Amazon EBS) volumes that are attached to each EC2 instance. Synchronize the EBS
volumes across the different EC2 instances.
Correct Answer: A
Amazon S3 is an object storage platform that uses a simple API for storing and accessing data. Applications that do not require a file
system structure and are designed to work with object storage can use Amazon S3 as a massively scalable, durable, low-cost object
storage solution.
upvoted 7 times
EFS provides a scalable, high performance NFS file system that can be concurrently accessed from multiple EC2 instances.
It supports the hierarchical directory structure needed by the applications.
EFS is elastic, growing and shrinking automatically as needed.
It can be accessed from instances across AZs, meeting the shared storage requirement.
S3 object storage (option A) lacks the file system semantics needed by the apps.
EBS volumes (options C and D) are attached to a single instance and would require replication and syncing to share across instances.
EFS is purpose-built for this use case of a shared file system across Linux instances and aligns best with the performance, concurrency,
and availability needs.
upvoted 2 times
Therefore, for a scenario where multiple EC2 instances need to rapidly and concurrently access shared storage with a hierarchical
directory structure, Amazon EFS is the best solution.
upvoted 2 times
https://docs.aws.amazon.com/efs/latest/ug/whatisefs.html
upvoted 1 times
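A minimal boto3 sketch of the EFS setup described above: one file system with a mount target in each Availability Zone's private subnet, which every instance then mounts over NFS (subnet and security group IDs are placeholders):

```python
import boto3

efs = boto3.client("efs")

# One EFS file system, with a mount target in each Availability Zone's private subnet.
fs = efs.create_file_system(PerformanceMode="generalPurpose", Encrypted=True)
fs_id = fs["FileSystemId"]

for subnet_id in ["subnet-private-1a", "subnet-private-1b"]:
    efs.create_mount_target(
        FileSystemId=fs_id,
        SubnetId=subnet_id,
        SecurityGroups=["sg-nfs-from-app"],
    )

# Each instance then mounts the same file system, e.g. with the EFS mount helper:
#   sudo mount -t efs <file-system-id>:/ /mnt/shared
```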
A solutions architect is designing a workload that will store hourly energy consumption by business tenants in a building. The sensors will feed a
database through HTTP requests that will add up usage for each tenant. The solutions architect must use managed services when possible. The
workload will receive more features in the future as the solutions architect adds independent components.
Which solution will meet these requirements with the LEAST operational overhead?
A. Use Amazon API Gateway with AWS Lambda functions to receive the data from the sensors, process the data, and store the data in an
Amazon DynamoDB table.
B. Use an Elastic Load Balancer that is supported by an Auto Scaling group of Amazon EC2 instances to receive and process the data from the
sensors. Use an Amazon S3 bucket to store the processed data.
C. Use Amazon API Gateway with AWS Lambda functions to receive the data from the sensors, process the data, and store the data in a
Microsoft SQL Server Express database on an Amazon EC2 instance.
D. Use an Elastic Load Balancer that is supported by an Auto Scaling group of Amazon EC2 instances to receive and process the data from the
sensors. Use an Amazon Elastic File System (Amazon EFS) shared file system to store the processed data.
Correct Answer: A
° API Gateway removes the need to manage servers to receive the HTTP requests from sensors
° Lambda functions provide a serverless compute layer to process data as needed
° DynamoDB is a fully managed NoSQL database that scales automatically
° This serverless architecture has minimal operational overhead to manage
° Options B, C, and D all require managing EC2 instances which increases ops workload
° Option C also adds SQL Server admin tasks and licensing costs
° Option D uses EFS file storage which requires capacity planning and management
upvoted 2 times
A solutions architect is designing the storage architecture for a new web application used for storing and viewing engineering drawings. All
application components will be deployed on the AWS infrastructure.
The application design must support caching to minimize the amount of time that users wait for the engineering drawings to load. The application
must be able to store petabytes of data.
Which combination of storage and caching should the solutions architect use?
C. Amazon Elastic Block Store (Amazon EBS) volumes with Amazon CloudFront
Correct Answer: A
S3 provides highly durable and scalable object storage capable of handling petabytes of data cost-effectively.
CloudFront can be used to cache S3 content at the edge, minimizing latency for users and speeding up access to the engineering
drawings.
The global CloudFront edge network is ideal for caching large amounts of static media like drawings.
EBS provides block storage but lacks the scale and durability of S3 for large media files.
Glacier is cheaper archival storage but has higher latency unsuited for frequent access.
Storage Gateway and ElastiCache may play a role but do not align as well to the main requirements.
upvoted 1 times
An Amazon EventBridge rule targets a third-party API. The third-party API has not received any incoming traffic. A solutions architect needs to
determine whether the rule conditions are being met and if the rule's target is being invoked.
B. Review events in the Amazon Simple Queue Service (Amazon SQS) dead-letter queue.
Correct Answer: A
Option A: CloudWatch metrics are used to track the performance of AWS resources. They are not used to store events.
Option B: Amazon SQS dead-letter queues are used to store messages that cannot be delivered to their intended recipients. They are not
used to store events.
Option D: AWS CloudTrail is a service that records AWS API calls. It can be used to track the activity of EventBridge rules, but it does not
store the events themselves.
upvoted 1 times
EventBridge sends metrics to Amazon CloudWatch every minute for everything from the number of matched events to the number of
times a target is invoked by a rule.
https://docs.aws.amazon.com/eventbridge/latest/userguide/eb-monitoring.html
upvoted 1 times
AWS CloudTrail provides visibility into EventBridge operations by logging API calls made by EventBridge.
Checking the CloudTrail trails will show the PutEvents API calls made when EventBridge rules match an event pattern.
CloudTrail will also log the Invoke API call when the rule target is triggered.
CloudWatch metrics and logs contain runtime performance data but not info on rule evaluation and targeting.
SQS dead letter queues collect failed event deliveries but won't provide insights on successful invocations.
CloudTrail is purpose-built to log operational events and API activity so it can confirm if the EventBridge rule is being evaluated and
triggering the target as expected.
upvoted 2 times
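A short boto3 sketch of the CloudWatch-metrics check described above: TriggeredRules shows whether the rule's pattern matched, and Invocations/FailedInvocations show whether the target was actually invoked (the rule name is a placeholder):

```python
from datetime import datetime, timedelta, timezone

import boto3

cloudwatch = boto3.client("cloudwatch")
RULE_NAME = "third-party-api-rule"   # hypothetical rule name

end = datetime.now(timezone.utc)
start = end - timedelta(hours=24)

# TriggeredRules: did the event pattern match? Invocations: was the target invoked?
for metric in ["TriggeredRules", "Invocations", "FailedInvocations"]:
    stats = cloudwatch.get_metric_statistics(
        Namespace="AWS/Events",
        MetricName=metric,
        Dimensions=[{"Name": "RuleName", "Value": RULE_NAME}],
        StartTime=start,
        EndTime=end,
        Period=3600,
        Statistics=["Sum"],
    )
    total = sum(point["Sum"] for point in stats["Datapoints"])
    print(f"{metric}: {total:.0f} over the last 24 hours")
```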
A company has a large workload that runs every Friday evening. The workload runs on Amazon EC2 instances that are in two Availability Zones in
the us-east-1 Region. Normally, the company must run no more than two instances at all times. However, the company wants to scale up to six
instances each Friday to handle a regularly repeating increased workload.
Which solution will meet these requirements with the LEAST operational overhead?
Correct Answer: A
https://docs.aws.amazon.com/autoscaling/ec2/userguide/ec2-auto-scaling-scheduled-scaling.html
upvoted 5 times
Auto Scaling scheduled actions allow defining specific dates/times to scale out or in. This can be used to scale to 6 instances every Friday
evening automatically.
Scheduled scaling removes the need for manual intervention to scale up/down for the workload.
EventBridge reminders and manual scaling require human involvement each week adding overhead.
Automatic scaling responds to demand and may not align perfectly to scale out every Friday without additional tuning.
Scheduled Auto Scaling actions provide the automation needed to scale for the weekly workload without ongoing operational overhead.
upvoted 1 times
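A minimal boto3 sketch of the scheduled scaling actions described above: scale the group out to six instances every Friday evening and back to two afterward (the group name and the exact times are assumptions):

```python
import boto3

autoscaling = boto3.client("autoscaling")
ASG_NAME = "friday-batch-asg"   # hypothetical Auto Scaling group name

# Scale out to six instances every Friday at 18:00 UTC...
autoscaling.put_scheduled_update_group_action(
    AutoScalingGroupName=ASG_NAME,
    ScheduledActionName="friday-scale-out",
    Recurrence="0 18 * * 5",   # cron: Fridays at 18:00
    MinSize=6,
    MaxSize=6,
    DesiredCapacity=6,
)

# ...and back down to the normal two instances on Saturday morning.
autoscaling.put_scheduled_update_group_action(
    AutoScalingGroupName=ASG_NAME,
    ScheduledActionName="saturday-scale-in",
    Recurrence="0 6 * * 6",
    MinSize=2,
    MaxSize=6,
    DesiredCapacity=2,
)
```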
A company is creating a REST API. The company has strict requirements for the use of TLS. The company requires TLSv1.3 on the API endpoints.
The company also requires a specific public third-party certificate authority (CA) to sign the TLS certificate.
A. Use a local machine to create a certificate that is signed by the third-party CA. Import the certificate into AWS Certificate Manager (ACM).
Create an HTTP API in Amazon API Gateway with a custom domain. Configure the custom domain to use the certificate.
B. Create a certificate in AWS Certificate Manager (ACM) that is signed by the third-party CA. Create an HTTP API in Amazon API Gateway with
a custom domain. Configure the custom domain to use the certificate.
C. Use AWS Certificate Manager (ACM) to create a certificate that is signed by the third-party CA. Import the certificate into AWS Certificate
Manager (ACM). Create an AWS Lambda function with a Lambda function URL. Configure the Lambda function URL to use the certificate.
D. Create a certificate in AWS Certificate Manager (ACM) that is signed by the third-party CA. Create an AWS Lambda function with a Lambda
function URL. Configure the Lambda function URL to use the certificate.
Correct Answer: A
B: Everything looks logical, but we need a specific public third-party CA to sign the certificate, and I am not sure ACM can issue certificates signed by external CAs (ACM issues Amazon-signed certificates; third-party certificates must be imported).
C and D are not correct because we need API Gateway for the HTTP API.
upvoted 2 times
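A hedged boto3 sketch of the import-then-attach flow in answer A: import the third-party-signed certificate into ACM and use it on an API Gateway custom domain (file names, domain, and endpoint type are placeholders; whether TLSv1.3 is negotiated depends on the endpoint's current TLS support):

```python
import boto3

acm = boto3.client("acm")
apigw = boto3.client("apigatewayv2")

# Import the certificate that the third-party CA signed (file names are hypothetical).
with open("certificate.pem", "rb") as cert, open("private_key.pem", "rb") as key, open("chain.pem", "rb") as chain:
    imported = acm.import_certificate(
        Certificate=cert.read(),
        PrivateKey=key.read(),
        CertificateChain=chain.read(),
    )

# Attach the imported certificate to a custom domain for the HTTP API.
apigw.create_domain_name(
    DomainName="api.example.com",
    DomainNameConfigurations=[{
        "CertificateArn": imported["CertificateArn"],
        "EndpointType": "REGIONAL",
        "SecurityPolicy": "TLS_1_2",   # minimum TLS version accepted by the custom domain
    }],
)
```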
A company runs an application on AWS. The application receives inconsistent amounts of usage. The application uses AWS Direct Connect to
connect to an on-premises MySQL-compatible database. The on-premises database consistently uses a minimum of 2 GiB of memory.
The company wants to migrate the on-premises database to a managed AWS service. The company wants to use auto scaling capabilities to
manage unexpected workload increases.
Which solution will meet these requirements with the LEAST administrative overhead?
A. Provision an Amazon DynamoDB database with default read and write capacity settings.
B. Provision an Amazon Aurora database with a minimum capacity of 1 Aurora capacity unit (ACU).
C. Provision an Amazon Aurora Serverless v2 database with a minimum capacity of 1 Aurora capacity unit (ACU).
Correct Answer: C
Aurora Serverless v2 provides auto-scaling so the database can handle inconsistent workloads and spikes automatically without admin
intervention.
It can scale down to its minimum configured capacity when demand is low, which minimizes costs.
The minimum 1 ACU capacity is sufficient to replace the on-prem 2 GiB database based on the info given.
Serverless capabilities reduce admin overhead for capacity management.
DynamoDB lacks MySQL compatibility and requires more hands-on management.
RDS and provisioned Aurora require manually resizing instances to scale, increasing admin overhead.
upvoted 2 times
Instead of provisioning and managing database servers, you specify Aurora capacity units (ACUs). Each ACU is a combination of
approximately 2 gigabytes (GB) of memory, corresponding CPU, and networking. Database storage automatically scales from 10 gibibytes
(GiB) to 128 tebibytes (TiB), the same as storage in a standard Aurora DB cluster
https://docs.aws.amazon.com/AmazonRDS/latest/AuroraUserGuide/aurora-serverless-v1.how-it-works.html
https://docs.aws.amazon.com/AmazonRDS/latest/AuroraUserGuide/aurora-serverless-v2.html
upvoted 1 times
https://docs.aws.amazon.com/AmazonRDS/latest/AuroraUserGuide/aurora-serverless-v2.how-it-works.html#aurora-serverless-v2.how-it-works.capacity
upvoted 2 times
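A hedged boto3 sketch of creating the Aurora Serverless v2 cluster described above, with a 1 ACU minimum (cluster name, engine choice, and the 16 ACU maximum are assumptions):

```python
import boto3

rds = boto3.client("rds")

# Aurora MySQL-compatible cluster with Serverless v2 scaling between 1 and 16 ACUs.
rds.create_db_cluster(
    DBClusterIdentifier="app-serverless-cluster",
    Engine="aurora-mysql",
    MasterUsername="admin",
    ManageMasterUserPassword=True,   # let RDS manage the password in Secrets Manager
    ServerlessV2ScalingConfiguration={"MinCapacity": 1.0, "MaxCapacity": 16.0},
)

# The db.serverless instance class makes this a Serverless v2 instance in the cluster.
rds.create_db_instance(
    DBInstanceIdentifier="app-serverless-instance-1",
    DBClusterIdentifier="app-serverless-cluster",
    DBInstanceClass="db.serverless",
    Engine="aurora-mysql",
)
```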
Question #573 Topic 1
A company wants to use an event-driven programming model with AWS Lambda. The company wants to reduce startup latency for Lambda
functions that run on Java 11. The company does not have strict latency requirements for the applications. The company wants to reduce cold
starts and outlier latencies when a function scales up.
Correct Answer: C
With SnapStart, Lambda initializes your function when you publish a function version. Lambda takes a Firecracker microVM snapshot of
the memory and disk state of the initialized execution environment, encrypts the snapshot, and caches it for low-latency access. When you
invoke the function version for the first time, and as the invocations scale up, Lambda resumes new execution environments from the
cached snapshot instead of initializing them from scratch, improving startup latency.
upvoted 1 times
SnapStart keeps functions initialized and ready to respond quickly, eliminating cold starts.
SnapStart is optimized for applications without aggressive latency needs, reducing costs.
It scales automatically to match traffic spikes, eliminating outliers when scaling up.
SnapStart is a native Lambda feature with no additional charges, keeping costs low.
Provisioned concurrency incurs charges for always-on capacity reserved. More costly than SnapStart.
Increasing timeout and memory do not directly improve startup performance like SnapStart.
upvoted 4 times
Lambda SnapStart for Java can improve startup performance for latency-sensitive applications by up to 10x at no extra cost, typically with
no changes to your function code. The largest contributor to startup latency (often referred to as cold start time) is the time that Lambda
spends initializing the function, which includes loading the function's code, starting the runtime, and initializing the function code.
https://docs.aws.amazon.com/lambda/latest/dg/snapstart.html
upvoted 2 times
https://docs.aws.amazon.com/lambda/latest/dg/snapstart.html
upvoted 2 times
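A short boto3 sketch of turning on SnapStart for a Java function; snapshots are taken when a version is published, so the function must be invoked through a published version or alias (the function name is a placeholder):

```python
import boto3

lam = boto3.client("lambda")
FUNCTION_NAME = "java11-order-processor"   # hypothetical function name

# Turn on SnapStart for published versions of the Java function.
lam.update_function_configuration(
    FunctionName=FUNCTION_NAME,
    SnapStart={"ApplyOn": "PublishedVersions"},
)

# Publish a version so Lambda takes the snapshot, then invoke via the qualified ARN or an alias.
lam.get_waiter("function_updated").wait(FunctionName=FUNCTION_NAME)
version = lam.publish_version(FunctionName=FUNCTION_NAME)
print("Invoke via qualified ARN:", version["FunctionArn"])
```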
A financial services company launched a new application that uses an Amazon RDS for MySQL database. The company uses the application to
track stock market trends. The company needs to operate the application for only 2 hours at the end of each week. The company needs to
optimize the cost of running the database.
A. Migrate the existing RDS for MySQL database to an Aurora Serverless v2 MySQL database cluster.
B. Migrate the existing RDS for MySQL database to an Aurora MySQL database cluster.
C. Migrate the existing RDS for MySQL database to an Amazon EC2 instance that runs MySQL. Purchase an instance reservation for the EC2
instance.
D. Migrate the existing RDS for MySQL database to an Amazon Elastic Container Service (Amazon ECS) cluster that uses MySQL container
images to run tasks.
Correct Answer: A
Aurora Serverless v2 scales compute capacity automatically based on actual usage, down to its minimum configured capacity when idle. This minimizes costs for
intermittent usage.
Since it only runs for 2 hours per week, the application is ideal for a serverless architecture like Aurora Serverless.
Aurora Serverless v2 charges per second for the capacity that is actually consumed, whereas a provisioned RDS instance bills for the instance the entire time it is running.
Aurora Serverless provides higher availability than self-managed MySQL on EC2 or ECS.
Using reserved EC2 instances or ECS still incurs charges when not in use versus the fine-grained scaling of serverless.
Standard Aurora clusters have a minimum capacity unlike the auto-scaling serverless architecture.
upvoted 4 times
https://docs.aws.amazon.com/AmazonRDS/latest/AuroraUserGuide/aurora-serverless-v2.how-it-works.html
upvoted 1 times
A company deploys its applications on Amazon Elastic Kubernetes Service (Amazon EKS) behind an Application Load Balancer in an AWS Region.
The application needs to store data in a PostgreSQL database engine. The company wants the data in the database to be highly available. The
company also needs increased capacity for read workloads.
Which solution will meet these requirements with the MOST operational efficiency?
Correct Answer: B
A company is building a RESTful serverless web application on AWS by using Amazon API Gateway and AWS Lambda. The users of this web
application will be geographically distributed, and the company wants to reduce the latency of API requests to these users.
Which type of endpoint should a solutions architect use to meet these requirements?
A. Private endpoint
B. Regional endpoint
D. Edge-optimized endpoint
Correct Answer: D
A company uses an Amazon CloudFront distribution to serve content pages for its website. The company needs to ensure that clients use a TLS
certificate when accessing the company's website. The company wants to automate the creation and renewal of the TLS certificates.
Which solution will meet these requirements with the MOST operational efficiency?
C. Use AWS Certificate Manager (ACM) to create a certificate. Use DNS validation for the domain.
D. Use AWS Certificate Manager (ACM) to create a certificate. Use email validation for the domain.
Correct Answer: D
AWS Certificate Manager (ACM) provides free public TLS/SSL certificates and handles certificate renewals automatically.
Using DNS validation with ACM is operationally efficient since it automatically makes changes to Route 53 rather than requiring manual
validation steps.
ACM integrates natively with CloudFront distributions for delivering HTTPS content.
CloudFront security policies and origin access controls do not issue TLS certificates.
Email validation requires manual steps to approve the domain validation emails for each renewal.
upvoted 2 times
"ACM provides managed renewal for your Amazon-issued SSL/TLS certificates. This means that ACM will either renew your certificates
automatically (if you are using DNS validation), or it will send you email notices when expiration is approaching. These services are
provided for both public and private ACM certificates."
https://docs.aws.amazon.com/acm/latest/userguide/managed-renewal.html
upvoted 3 times
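A minimal boto3 sketch of the DNS-validated ACM request described above; CloudFront requires the certificate to live in us-east-1, and ACM renews it automatically once the validation CNAME is in place (the domain name is a placeholder):

```python
import boto3

acm = boto3.client("acm", region_name="us-east-1")   # CloudFront requires certificates in us-east-1

cert_arn = acm.request_certificate(
    DomainName="www.example.com",
    ValidationMethod="DNS",
)["CertificateArn"]

# ACM returns the CNAME record that proves domain ownership; create it in Route 53
# (manually or via change_resource_record_sets) and ACM validates and renews automatically.
details = acm.describe_certificate(CertificateArn=cert_arn)
option = details["Certificate"]["DomainValidationOptions"][0]
record = option.get("ResourceRecord", {})
print("Create this CNAME:", record.get("Name"), "->", record.get("Value"))
```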
Question #578 Topic 1
A company deployed a serverless application that uses Amazon DynamoDB as a database layer. The application has experienced a large increase
in users. The company wants to improve database response time from milliseconds to microseconds and to cache requests to the database.
Which solution will meet these requirements with the LEAST operational overhead?
Correct Answer: A
https://aws.amazon.com/dynamodb/dax/#:~:text=Amazon%20DynamoDB%20Accelerator%20(DAX)%20is,millions%20of%20requests%20p
er%20second.
upvoted 3 times
A company runs an application that uses Amazon RDS for PostgreSQL. The application receives traffic only on weekdays during business hours.
The company wants to optimize costs and reduce operational overhead based on this usage.
A. Use the Instance Scheduler on AWS to configure start and stop schedules.
B. Turn off automatic backups. Create weekly manual snapshots of the database.
C. Create a custom AWS Lambda function to start and stop the database based on minimum CPU utilization.
Correct Answer: C
https://aws.amazon.com/solutions/implementations/instance-scheduler-on-aws/?nc1=h_ls
upvoted 1 times
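The Instance Scheduler solution packages this start/stop pattern; as a bare-bones sketch of the scheduled Lambda variant in option C (the DB identifier and the event shape are assumptions):

```python
import boto3

rds = boto3.client("rds")
DB_INSTANCE_ID = "weekday-postgres"   # hypothetical DB instance identifier

def lambda_handler(event, context):
    """Invoked by two EventBridge schedules, e.g. {'action': 'start'} on weekday mornings
    and {'action': 'stop'} in the evenings (the event shape is an assumption)."""
    action = event.get("action")
    if action == "start":
        rds.start_db_instance(DBInstanceIdentifier=DB_INSTANCE_ID)
    elif action == "stop":
        rds.stop_db_instance(DBInstanceIdentifier=DB_INSTANCE_ID)
    return {"action": action, "db": DB_INSTANCE_ID}
```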
A company uses locally attached storage to run a latency-sensitive application on premises. The company is using a lift and shift method to move
the application to the AWS Cloud. The company does not want to change the application architecture.
A. Configure an Auto Scaling group with an Amazon EC2 instance. Use an Amazon FSx for Lustre file system to run the application.
B. Host the application on an Amazon EC2 instance. Use an Amazon Elastic Block Store (Amazon EBS) GP2 volume to run the application.
C. Configure an Auto Scaling group with an Amazon EC2 instance. Use an Amazon FSx for OpenZFS file system to run the application.
D. Host the application on an Amazon EC2 instance. Use an Amazon Elastic Block Store (Amazon EBS) GP3 volume to run the application.
Correct Answer: B
A company runs a stateful production application on Amazon EC2 instances. The application requires at least two EC2 instances to always be
running.
A solutions architect needs to design a highly available and fault-tolerant architecture for the application. The solutions architect creates an Auto
Scaling group of EC2 instances.
Which set of additional steps should the solutions architect take to meet these requirements?
A. Set the Auto Scaling group's minimum capacity to two. Deploy one On-Demand Instance in one Availability Zone and one On-Demand
Instance in a second Availability Zone.
B. Set the Auto Scaling group's minimum capacity to four. Deploy two On-Demand Instances in one Availability Zone and two On-Demand
Instances in a second Availability Zone.
C. Set the Auto Scaling group's minimum capacity to two. Deploy four Spot Instances in one Availability Zone.
D. Set the Auto Scaling group's minimum capacity to four. Deploy two On-Demand Instances in one Availability Zone and two Spot Instances in
a second Availability Zone.
Correct Answer: D
The application requires at least two EC2 instances to always be running, which means a minimum capacity of 2. A minimum capacity of 4 EC2 instances would work, but it wastes resources and does not follow the Well-Architected Framework.
upvoted 1 times
An ecommerce company uses Amazon Route 53 as its DNS provider. The company hosts its website on premises and in the AWS Cloud. The
company's on-premises data center is near the us-west-1 Region. The company uses the eu-central-1 Region to host the website. The company
wants to minimize load time for the website as much as possible.
A. Set up a geolocation routing policy. Send the traffic that is near us-west-1 to the on-premises data center. Send the traffic that is near eu-
central-1 to eu-central-1.
B. Set up a simple routing policy that routes all traffic that is near eu-central-1 to eu-central-1 and routes all traffic that is near the on-premises
datacenter to the on-premises data center.
D. Set up a weighted routing policy. Split the traffic evenly between eu-central-1 and the on-premises data center.
Correct Answer: A
Geolocation routing allows you to route users to the closest endpoint based on their geographic location. This will provide the lowest
latency.
Routing us-west-1 traffic to the on-premises data center minimizes latency for those users since it is also located near there.
Routing eu-central-1 traffic to the eu-central-1 AWS region minimizes latency for users nearby.
This achieves routing users to the closest endpoint on a geographic basis to optimize for low latency.
upvoted 2 times
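A hedged boto3 sketch of the geolocation records described above: North American traffic goes to the on-premises data center near us-west-1, European traffic to eu-central-1, and a default record catches everyone else (the hosted zone ID and the documentation-range IP addresses are placeholders; alias records to the eu-central-1 load balancer would work the same way):

```python
import boto3

route53 = boto3.client("route53")
HOSTED_ZONE_ID = "Z0123456789ABC"   # hypothetical hosted zone

def geo_record(set_id, location, value):
    return {
        "Action": "UPSERT",
        "ResourceRecordSet": {
            "Name": "www.example.com",
            "Type": "A",
            "SetIdentifier": set_id,
            "GeoLocation": location,
            "TTL": 60,
            "ResourceRecords": [{"Value": value}],
        },
    }

route53.change_resource_record_sets(
    HostedZoneId=HOSTED_ZONE_ID,
    ChangeBatch={"Changes": [
        geo_record("north-america", {"ContinentCode": "NA"}, "203.0.113.10"),   # on-premises near us-west-1
        geo_record("europe", {"ContinentCode": "EU"}, "198.51.100.20"),         # eu-central-1 hosting
        geo_record("default", {"CountryCode": "*"}, "198.51.100.20"),           # everyone else
    ]},
)
```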
A company has 5 PB of archived data on physical tapes. The company needs to preserve the data on the tapes for another 10 years for
compliance purposes. The company wants to migrate to AWS in the next 6 months. The data center that stores the tapes has a 1 Gbps uplink
internet connectivity.
A. Read the data from the tapes on premises. Stage the data in a local NFS storage. Use AWS DataSync to migrate the data to Amazon S3
Glacier Flexible Retrieval.
B. Use an on-premises backup application to read the data from the tapes and to write directly to Amazon S3 Glacier Deep Archive.
C. Order multiple AWS Snowball devices that have Tape Gateway. Copy the physical tapes to virtual tapes in Snowball. Ship the Snowball
devices to AWS. Create a lifecycle policy to move the tapes to Amazon S3 Glacier Deep Archive.
D. Configure an on-premises Tape Gateway. Create virtual tapes in the AWS Cloud. Use backup software to copy the physical tape to the virtual
tape.
Correct Answer: C
If you are looking for a cost-effective, durable, long-term, offsite alternative for data archiving, deploy a Tape Gateway. With its virtual tape
library (VTL) interface, you can use your existing tape-based backup software infrastructure to store data on virtual tape cartridges that
you create -
https://docs.aws.amazon.com/storagegateway/latest/tgw/WhatIsStorageGateway.html
upvoted 1 times
This solution is the most cost-effective because it uses the least amount of bandwidth. AWS DataSync is a service that transfers data
between on-premises storage and Amazon S3. It uses a variety of techniques to optimize the transfer speed and reduce costs.
upvoted 1 times
A company is deploying an application that processes large quantities of data in parallel. The company plans to use Amazon EC2 instances for
the workload. The network architecture must be configurable to prevent groups of nodes from sharing the same underlying hardware.
Correct Answer: A
Configuring the EC2 instances with dedicated tenancy ensures that each instance will run on isolated, single-tenant hardware. This meets
the requirement to prevent groups of nodes from sharing underlying hardware.
A spread placement group does place each instance on distinct underlying hardware, but it is limited to seven running instances per Availability Zone, which may not fit large groups of nodes.
upvoted 2 times
A solutions architect is designing a disaster recovery (DR) strategy to provide Amazon EC2 capacity in a failover AWS Region. Business
requirements state that the DR strategy must meet capacity in the failover Region.
Correct Answer: C
https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ec2-capacity-reservations.html
upvoted 1 times
Question #586 Topic 1
A company has five organizational units (OUs) as part of its organization in AWS Organizations. Each OU correlates to the five businesses that the
company owns. The company's research and development (R&D) business is separating from the company and will need its own organization. A
solutions architect creates a separate new management account for this purpose.
What should the solutions architect do next in the new management account?
A. Have the R&D AWS account be part of both organizations during the transition.
B. Invite the R&D AWS account to be part of the new organization after the R&D AWS account has left the prior organization.
C. Create a new R&D AWS account in the new organization. Migrate resources from the prior R&D AWS account to the new R&D AWS account.
D. Have the R&D AWS account join the new organization. Make the new management account a member of the prior organization.
Correct Answer: C
https://repost.aws/knowledge-center/organizations-move-accounts
upvoted 1 times
Create a new AWS account dedicated for the business unit in the new organization
Migrate resources from the old account to the new account
Remove the old account from the original organization
This allows a clean break between the organizations and avoids any linking between them after separation.
upvoted 1 times
A company is designing a solution to capture customer activity in different web applications to process analytics and make predictions. Customer
activity in the web applications is unpredictable and can increase suddenly. The company requires a solution that integrates with other web
applications. The solution must include an authorization step for security purposes.
A. Configure a Gateway Load Balancer (GWLB) in front of an Amazon Elastic Container Service (Amazon ECS) container instance that stores
the information that the company receives in an Amazon Elastic File System (Amazon EFS) file system. Authorization is resolved at the GWLB.
B. Configure an Amazon API Gateway endpoint in front of an Amazon Kinesis data stream that stores the information that the company
receives in an Amazon S3 bucket. Use an AWS Lambda function to resolve authorization.
C. Configure an Amazon API Gateway endpoint in front of an Amazon Kinesis Data Firehose that stores the information that the company
receives in an Amazon S3 bucket. Use an API Gateway Lambda authorizer to resolve authorization.
D. Configure a Gateway Load Balancer (GWLB) in front of an Amazon Elastic Container Service (Amazon ECS) container instance that stores
the information that the company receives on an Amazon Elastic File System (Amazon EFS) file system. Use an AWS Lambda function to
resolve authorization.
Correct Answer: C
When a client calls the API method, API Gateway checks whether a Lambda authorizer is configured for the method. If it is, API Gateway calls the Lambda function. The Lambda function authenticates the caller by means such as calling out to an OAuth provider to get an OAuth access token.
upvoted 1 times
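To illustrate the authorization step in option C, here is a minimal TOKEN-type Lambda authorizer sketch. The token check is a placeholder; a real implementation would validate a JWT or call an identity provider:

    # Minimal API Gateway Lambda (TOKEN) authorizer: returns an IAM policy that
    # allows or denies execute-api:Invoke for the incoming method ARN.
    def lambda_handler(event, context):
        token = event.get("authorizationToken", "")
        effect = "Allow" if token == "expected-shared-secret" else "Deny"  # placeholder check
        return {
            "principalId": "analytics-client",
            "policyDocument": {
                "Version": "2012-10-17",
                "Statement": [
                    {
                        "Action": "execute-api:Invoke",
                        "Effect": effect,
                        "Resource": event["methodArn"],
                    }
                ],
            },
        }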
An ecommerce company wants a disaster recovery solution for its Amazon RDS DB instances that run Microsoft SQL Server Enterprise Edition.
The company's current recovery point objective (RPO) and recovery time objective (RTO) are 24 hours.
A. Create a cross-Region read replica and promote the read replica to the primary instance.
B. Use AWS Database Migration Service (AWS DMS) to create RDS cross-Region replication.
C. Use cross-Region replication every 24 hours to copy native backups to an Amazon S3 bucket.
Correct Answer: C
The other solutions are more expensive because they require additional AWS services. For example, AWS DMS is a more expensive service
than AWS RDS.
upvoted 1 times
A company runs a web application on Amazon EC2 instances in an Auto Scaling group behind an Application Load Balancer that has sticky
sessions enabled. The web server currently hosts the user session state. The company wants to ensure high availability and avoid user session
state loss in the event of a web server outage.
A. Use an Amazon ElastiCache for Memcached instance to store the session data. Update the application to use ElastiCache for Memcached
to store the session state.
B. Use Amazon ElastiCache for Redis to store the session state. Update the application to use ElastiCache for Redis to store the session
state.
C. Use an AWS Storage Gateway cached volume to store session data. Update the application to use AWS Storage Gateway cached volume to
store the session state.
D. Use Amazon RDS to store the session state. Update the application to use Amazon RDS to store the session state.
Correct Answer: B
ElastiCache Redis provides in-memory caching that can deliver microsecond latency for session data.
Redis supports replication and multi-AZ which can provide high availability for the cache.
The application can be updated to store session data in ElastiCache Redis rather than locally on the web servers.
If a web server fails, the user can be routed via the load balancer to another web server which can retrieve their session data from the
highly available ElastiCache Redis cluster.
upvoted 1 times
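A minimal boto3 sketch of a Multi-AZ Redis replication group for session state (the identifier, description, and node type are illustrative):

    import boto3

    elasticache = boto3.client("elasticache")

    # One shard with a primary and one replica, Multi-AZ with automatic failover,
    # so session data survives the loss of a node or an Availability Zone.
    elasticache.create_replication_group(
        ReplicationGroupId="session-store",
        ReplicationGroupDescription="Web session state",
        Engine="redis",
        CacheNodeType="cache.t4g.small",
        NumNodeGroups=1,
        ReplicasPerNodeGroup=1,
        AutomaticFailoverEnabled=True,
        MultiAZEnabled=True,
    )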
A company migrated a MySQL database from the company's on-premises data center to an Amazon RDS for MySQL DB instance. The company
sized the RDS DB instance to meet the company's average daily workload. Once a month, the database performs slowly when the company runs
queries for a report. The company wants to have the ability to run reports and maintain the performance of the daily workloads.
A. Create a read replica of the database. Direct the queries to the read replica.
B. Create a backup of the database. Restore the backup to another DB instance. Direct the queries to the new database.
C. Export the data to Amazon S3. Use Amazon Athena to query the S3 bucket.
Correct Answer: A
A company runs a container application by using Amazon Elastic Kubernetes Service (Amazon EKS). The application includes microservices that
manage customers and place orders. The company needs to route incoming requests to the appropriate microservices.
A. Use the AWS Load Balancer Controller to provision a Network Load Balancer.
B. Use the AWS Load Balancer Controller to provision an Application Load Balancer.
Correct Answer: B
A company uses AWS and sells access to copyrighted images. The company’s global customer base needs to be able to access these images
quickly. The company must deny access to users from specific countries. The company wants to minimize costs as much as possible.
A. Use Amazon S3 to store the images. Turn on multi-factor authentication (MFA) and public bucket access. Provide customers with a link to
the S3 bucket.
B. Use Amazon S3 to store the images. Create an IAM user for each customer. Add the users to a group that has permission to access the S3
bucket.
C. Use Amazon EC2 instances that are behind Application Load Balancers (ALBs) to store the images. Deploy the instances only in the
countries the company services. Provide customers with links to the ALBs for their specific country's instances.
D. Use Amazon S3 to store the images. Use Amazon CloudFront to distribute the images with geographic restrictions. Provide a signed URL
for each customer to access the data in CloudFront.
Correct Answer: D
A solutions architect is designing a highly available Amazon ElastiCache for Redis based solution. The solutions architect needs to ensure that
failures do not result in performance degradation or loss of data locally and within an AWS Region. The solution needs to provide high availability
at the node level and at the Region level.
A. Use Multi-AZ Redis replication groups with shards that contain multiple nodes.
B. Use Redis shards that contain multiple nodes with Redis append only files (AOF) turned on.
C. Use a Multi-AZ Redis cluster with more than one read replica in the replication group.
D. Use Redis shards that contain multiple nodes with Auto Scaling turned on.
Correct Answer: A
I would go with A, Using AOF can't protect you from all failure scenarios.
For example, if a node fails due to a hardware fault in an underlying physical server, ElastiCache will provision a new node on a different
server. In this case, the AOF is not available and can't be used to recover the data.
upvoted 1 times
A company plans to migrate to AWS and use Amazon EC2 On-Demand Instances for its application. During the migration testing phase, a technical
team observes that the application takes a long time to launch and load memory to become fully productive.
Which solution will reduce the launch time of the application during the next testing phase?
A. Launch two or more EC2 On-Demand Instances. Turn on auto scaling features and make the EC2 On-Demand Instances available during the
next testing phase.
B. Launch EC2 Spot Instances to support the application and to scale the application so it is available during the next testing phase.
C. Launch the EC2 On-Demand Instances with hibernation turned on. Configure EC2 Auto Scaling warm pools during the next testing phase.
D. Launch EC2 On-Demand Instances with Capacity Reservations. Start additional EC2 instances during the next testing phase.
Correct Answer: C
Hibernation saves the in-memory state of the EC2 instance to persistent storage and shuts the instance down. When the instance is
started again, the in-memory state is restored, which launches much faster than launching a new instance.
Warm pools pre-initialize EC2 instances and keep them ready to fulfill requests, reducing launch time. The hibernated instances can be
added to a warm pool.
When auto scaling scales out during the next testing phase, it will be able to launch instances from the warm pool rapidly since they are
already initialized
upvoted 2 times
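A minimal boto3 sketch of the warm pool in option C. The Auto Scaling group name is a placeholder, and hibernation requires a launch template with hibernation enabled and an encrypted root volume:

    import boto3

    autoscaling = boto3.client("autoscaling")

    # Keep pre-initialized, hibernated instances on standby so scale-out events
    # restore in-memory state instead of launching and warming up from scratch.
    autoscaling.put_warm_pool(
        AutoScalingGroupName="app-asg",  # placeholder ASG name
        PoolState="Hibernated",
        MinSize=2,
    )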
A company's applications run on Amazon EC2 instances in Auto Scaling groups. The company notices that its applications experience sudden
traffic increases on random days of the week. The company wants to maintain application performance during sudden traffic increases.
A. Use manual scaling to change the size of the Auto Scaling group.
B. Use predictive scaling to change the size of the Auto Scaling group.
C. Use dynamic scaling to change the size of the Auto Scaling group.
D. Use schedule scaling to change the size of the Auto Scaling group.
Correct Answer: C
https://www.developer.com/web-services/aws-auto-scaling-types-best-practices/#:~:text=Dynamic%20Scaling%20%E2%80%93%20This%20is%20yet,high%20volume%20of%20unpredictable%20traffic.
upvoted 2 times
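A minimal boto3 sketch of a dynamic (target tracking) scaling policy; the group name and target value are illustrative:

    import boto3

    autoscaling = boto3.client("autoscaling")

    # Target tracking keeps average CPU near the target, adding and removing
    # instances automatically when unpredictable traffic spikes arrive.
    autoscaling.put_scaling_policy(
        AutoScalingGroupName="app-asg",
        PolicyName="cpu-target-tracking",
        PolicyType="TargetTrackingScaling",
        TargetTrackingConfiguration={
            "PredefinedMetricSpecification": {"PredefinedMetricType": "ASGAverageCPUUtilization"},
            "TargetValue": 50.0,
        },
    )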
Question #596 Topic 1
An ecommerce application uses a PostgreSQL database that runs on an Amazon EC2 instance. During a monthly sales event, database usage
increases and causes database connection issues for the application. The traffic is unpredictable for subsequent monthly sales events, which
impacts the sales forecast. The company needs to maintain performance when there is an unpredictable increase in traffic.
B. Enable auto scaling for the PostgreSQL database on the EC2 instance to accommodate increased usage.
C. Migrate the PostgreSQL database to Amazon RDS for PostgreSQL with a larger instance type.
Correct Answer: C
A company hosts an internal serverless application on AWS by using Amazon API Gateway and AWS Lambda. The company’s employees report
issues with high latency when they begin using the application each day. The company wants to reduce latency.
B. Set up a scheduled scaling to increase Lambda provisioned concurrency before employees begin to use the application each day.
C. Create an Amazon CloudWatch alarm to initiate a Lambda function as a target for the alarm at the beginning of each day.
Correct Answer: B
A research company uses on-premises devices to generate data for analysis. The company wants to use the AWS Cloud to analyze the data. The
devices generate .csv files and support writing the data to an SMB file share. Company analysts must be able to use SQL commands to query the
data. The analysts will run queries periodically throughout the day.
Which combination of steps will meet these requirements MOST cost-effectively? (Choose three.)
B. Deploy an AWS Storage Gateway on premises in Amazon FSx File Gateway mode.
C. Set up an AWS Glue crawler to create a table based on the data that is in Amazon S3.
D. Set up an Amazon EMR cluster with EMR File System (EMRFS) to query the data that is in Amazon S3. Provide access to analysts.
E. Set up an Amazon Redshift cluster to query the data that is in Amazon S3. Provide access to analysts.
F. Set up Amazon Athena to query the data that is in Amazon S3. Provide access to analysts.
A company wants to use Amazon Elastic Container Service (Amazon ECS) clusters and Amazon RDS DB instances to build and run a payment
processing application. The company will run the application in its on-premises data center for compliance purposes.
A solutions architect wants to use AWS Outposts as part of the solution. The solutions architect is working with the company's operational team
to build the application.
Which activities are the responsibility of the company's operational team? (Choose three.)
B. Managing the virtualization hypervisor, storage systems, and the AWS services that run on Outposts
D. Availability of the Outposts infrastructure including the power supplies, servers, and networking equipment within the Outposts racks
F. Providing extra capacity for Amazon ECS clusters to mitigate server failures and maintenance events
https://aws.amazon.com/outposts/servers/faqs/
upvoted 1 times
Selected Answer: ACD
According to the AWS Shared Responsibility Model, AWS operates, manages, and controls the components from the host operating system and virtualization layer down to the physical
security of the facilities in which the service operates. However, the customer is responsible for the physical security and access controls of
the data center environment, providing resilient power and network connectivity to the Outposts racks, and ensuring the availability of the
Outposts infrastructure including the power supplies, servers, and networking equipment within the Outposts racks.
Therefore, the company's operational team is responsible for providing the necessary infrastructure and security measures to support the
Outposts racks and ensure the availability of the Outposts infrastructure.
upvoted 3 times
https://docs.aws.amazon.com/whitepapers/latest/aws-outposts-high-availability-design/aws-outposts-high-availability-design.html
upvoted 2 times
A company is planning to migrate a TCP-based application into the company's VPC. The application is publicly accessible on a nonstandard TCP
port through a hardware appliance in the company's data center. This public endpoint can process up to 3 million requests per second with low
latency. The company requires the same level of performance for the new public endpoint in AWS.
A. Deploy a Network Load Balancer (NLB). Configure the NLB to be publicly accessible over the TCP port that the application requires.
B. Deploy an Application Load Balancer (ALB). Configure the ALB to be publicly accessible over the TCP port that the application requires.
C. Deploy an Amazon CloudFront distribution that listens on the TCP port that the application requires. Use an Application Load Balancer as
the origin.
D. Deploy an Amazon API Gateway API that is configured with the TCP port that the application requires. Configure AWS Lambda functions
with provisioned concurrency to process the requests.
Correct Answer: A
A Network Load Balancer functions at the fourth layer of the Open Systems Interconnection (OSI) model. It can handle millions of requests
per second. After the load balancer receives a connection request, it selects a target from the target group for the default rule. It attempts
to open a TCP connection to the selected target on the port specified in the listener configuration.
Link;
https://docs.aws.amazon.com/elasticloadbalancing/latest/network/introduction.html
upvoted 1 times
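A minimal boto3 sketch of an internet-facing NLB with a TCP listener on the application's nonstandard port (the subnets, VPC ID, and port 7777 are placeholders):

    import boto3

    elbv2 = boto3.client("elbv2")

    lb = elbv2.create_load_balancer(
        Name="tcp-app-nlb",
        Type="network",
        Scheme="internet-facing",
        Subnets=["subnet-aaa111", "subnet-bbb222"],  # placeholder subnets
    )["LoadBalancers"][0]

    tg = elbv2.create_target_group(
        Name="tcp-app-targets",
        Protocol="TCP",
        Port=7777,  # the application's nonstandard TCP port
        VpcId="vpc-0123456789abcdef0",
        TargetType="instance",
    )["TargetGroups"][0]

    elbv2.create_listener(
        LoadBalancerArn=lb["LoadBalancerArn"],
        Protocol="TCP",
        Port=7777,
        DefaultActions=[{"Type": "forward", "TargetGroupArn": tg["TargetGroupArn"]}],
    )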
A company runs its critical database on an Amazon RDS for PostgreSQL DB instance. The company wants to migrate to Amazon Aurora
PostgreSQL with minimal downtime and data loss.
Which solution will meet these requirements with the LEAST operational overhead?
A. Create a DB snapshot of the RDS for PostgreSQL DB instance to populate a new Aurora PostgreSQL DB cluster.
B. Create an Aurora read replica of the RDS for PostgreSQL DB instance. Promote the Aurora read replica to a new Aurora PostgreSQL DB
cluster.
C. Use data import from Amazon S3 to migrate the database to an Aurora PostgreSQL DB cluster.
D. Use the pg_dump utility to back up the RDS for PostgreSQL database. Restore the backup to a new Aurora PostgreSQL DB cluster.
Correct Answer: B
There are five options for migrating data from your existing Amazon RDS for PostgreSQL database to an Amazon Aurora PostgreSQL-
Compatible DB cluster.
1-Using a snapshot
2-Using an Aurora read replica
3-Using a pg_dump utility
4-Using logical replication
5-Using a data import from Amazon S3
Aurora read replicas allow setting up replication from RDS PostgreSQL to Aurora PostgreSQL with minimal downtime.
Once replication is set up, the read replica can be promoted to a full standalone Aurora DB cluster with little to no downtime.
This approach leverages AWS's managed replication between the source RDS PostgreSQL instance and Aurora. It avoids having to
manually create backups and restore data.
Using DB snapshots or pg_dump backups requires manually restoring data which increases downtime and operational overhead.
Data import from S3 would require exporting, uploading and then importing data which adds overhead.
upvoted 2 times
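The console's "Create Aurora read replica" action corresponds roughly to the following boto3 sketch; the identifiers, source ARN, and instance class are placeholders:

    import boto3

    rds = boto3.client("rds")

    # 1. Create an Aurora PostgreSQL cluster that replicates from the RDS instance.
    rds.create_db_cluster(
        DBClusterIdentifier="aurora-pg-replica",
        Engine="aurora-postgresql",
        ReplicationSourceIdentifier="arn:aws:rds:us-east-1:111122223333:db:prod-postgres",
    )

    # 2. Add an instance to the cluster so it can serve traffic.
    rds.create_db_instance(
        DBInstanceIdentifier="aurora-pg-replica-1",
        DBClusterIdentifier="aurora-pg-replica",
        Engine="aurora-postgresql",
        DBInstanceClass="db.r6g.large",
    )

    # 3. Once replication lag is near zero, promote the cluster to standalone
    #    and repoint the application during a short cutover window.
    rds.promote_read_replica_db_cluster(DBClusterIdentifier="aurora-pg-replica")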
A company's infrastructure consists of hundreds of Amazon EC2 instances that use Amazon Elastic Block Store (Amazon EBS) storage. A
solutions architect must ensure that every EC2 instance can be recovered after a disaster.
What should the solutions architect do to meet this requirement with the LEAST amount of effort?
A. Take a snapshot of the EBS storage that is attached to each EC2 instance. Create an AWS CloudFormation template to launch new EC2
instances from the EBS storage.
B. Take a snapshot of the EBS storage that is attached to each EC2 instance. Use AWS Elastic Beanstalk to set the environment based on the
EC2 template and attach the EBS storage.
C. Use AWS Backup to set up a backup plan for the entire group of EC2 instances. Use the AWS Backup API or the AWS CLI to speed up the
restore process for multiple EC2 instances.
D. Create an AWS Lambda function to take a snapshot of the EBS storage that is attached to each EC2 instance and copy the Amazon Machine
Images (AMIs). Create another Lambda function to perform the restores with the copied AMIs and attach the EBS storage.
Correct Answer: C
AWS Backup automates backup of resources like EBS volumes. It allows defining backup policies for groups of resources. This removes the
need to manually create backups for each resource.
The AWS Backup API and CLI allow programmatic control of backup plans and restores. This enables restoring hundreds of EC2 instances
programmatically after a disaster instead of manually.
AWS Backup handles cleanup of old backups based on policies to minimize storage costs.
upvoted 1 times
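A minimal boto3 sketch of a backup plan that covers every EC2 instance carrying a given tag; the vault, IAM role, and tag values are placeholders:

    import boto3

    backup = boto3.client("backup")

    plan = backup.create_backup_plan(
        BackupPlan={
            "BackupPlanName": "ec2-dr",
            "Rules": [{
                "RuleName": "daily",
                "TargetBackupVaultName": "Default",
                "ScheduleExpression": "cron(0 5 ? * * *)",  # daily at 05:00 UTC
                "Lifecycle": {"DeleteAfterDays": 35},
            }],
        }
    )

    # Select resources by tag instead of enumerating hundreds of instances.
    backup.create_backup_selection(
        BackupPlanId=plan["BackupPlanId"],
        BackupSelection={
            "SelectionName": "tagged-ec2",
            "IamRoleArn": "arn:aws:iam::111122223333:role/aws-backup-role",  # placeholder role
            "ListOfTags": [{
                "ConditionType": "STRINGEQUALS",
                "ConditionKey": "backup",
                "ConditionValue": "daily",
            }],
        },
    )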
A company recently migrated to the AWS Cloud. The company wants a serverless solution for large-scale parallel on-demand processing of a
semistructured dataset. The data consists of logs, media files, sales transactions, and IoT sensor data that is stored in Amazon S3. The company
wants the solution to process thousands of items in the dataset in parallel.
Which solution will meet these requirements with the MOST operational efficiency?
A. Use the AWS Step Functions Map state in Inline mode to process the data in parallel.
B. Use the AWS Step Functions Map state in Distributed mode to process the data in parallel.
Correct Answer: B
https://docs.aws.amazon.com/step-functions/latest/dg/concepts-inline-vs-distributed-map.html
upvoted 1 times
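A minimal sketch of a Distributed Map state machine, expressed as a Python dict and registered with boto3; the bucket, Lambda function, and role ARN are placeholders:

    import json
    import boto3

    definition = {
        "StartAt": "ProcessDataset",
        "States": {
            "ProcessDataset": {
                "Type": "Map",
                # Distributed mode fans out child workflow executions at high concurrency.
                "ItemProcessor": {
                    "ProcessorConfig": {"Mode": "DISTRIBUTED", "ExecutionType": "EXPRESS"},
                    "StartAt": "ProcessItem",
                    "States": {
                        "ProcessItem": {
                            "Type": "Task",
                            "Resource": "arn:aws:states:::lambda:invoke",
                            "Parameters": {"FunctionName": "process-item", "Payload.$": "$"},
                            "End": True,
                        }
                    },
                },
                # Read the item list directly from the S3 bucket that holds the dataset.
                "ItemReader": {
                    "Resource": "arn:aws:states:::s3:listObjectsV2",
                    "Parameters": {"Bucket": "semistructured-dataset-bucket"},
                },
                "MaxConcurrency": 1000,
                "End": True,
            }
        },
    }

    boto3.client("stepfunctions").create_state_machine(
        name="parallel-dataset-processing",
        definition=json.dumps(definition),
        roleArn="arn:aws:iam::111122223333:role/stepfunctions-exec-role",  # placeholder role
    )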
Question #604 Topic 1
A company will migrate 10 PB of data to Amazon S3 in 6 weeks. The current data center has a 500 Mbps uplink to the internet. Other on-premises
applications share the uplink. The company can use 80% of the internet bandwidth for this one-time migration task.
A. Configure AWS DataSync to migrate the data to Amazon S3 and to automatically verify the data.
C. Use the AWS CLI and multiple copy processes to send the data directly to Amazon S3.
D. Order multiple AWS Snowball devices. Copy the data to the devices. Send the devices to AWS to copy the data to Amazon S3.
Correct Answer: D
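A quick back-of-the-envelope check, using only the numbers in the question, shows why an online transfer cannot finish 10 PB within 6 weeks and why offline Snowball devices are the workable path:

    # 10 PB over 80% of a 500 Mbps uplink:
    data_bits = 10 * 10**15 * 8          # 10 PB expressed in bits
    usable_bps = 0.8 * 500 * 10**6       # 400 Mbps of usable bandwidth
    days = data_bits / usable_bps / 86_400
    print(f"{days:,.0f} days")           # roughly 2,315 days, i.e. more than 6 years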
A company has several on-premises Internet Small Computer Systems Interface (ISCSI) network storage servers. The company wants to reduce
the number of these servers by moving to the AWS Cloud. A solutions architect must provide low-latency access to frequently used data and
reduce the dependency on on-premises servers with a minimal number of infrastructure changes.
B. Deploy Amazon Elastic Block Store (Amazon EBS) storage with backups to Amazon S3.
C. Deploy an AWS Storage Gateway volume gateway that is configured with stored volumes.
D. Deploy an AWS Storage Gateway volume gateway that is configured with cached volumes.
Correct Answer: D
The Storage Gateway volume gateway provides iSCSI block storage using cached volumes. This allows replacing the on-premises iSCSI
servers with minimal changes.
Cached volumes store frequently accessed data locally for low latency access, while storing less frequently accessed data in S3.
This reduces the number of on-premises servers while still providing low latency access to hot data.
EBS does not provide iSCSI support to replace the existing servers.
S3 File Gateway is for file storage, not block storage.
Stored volumes would store all data on-premises, not in S3.
upvoted 2 times
When you configure an AWS Storage Gateway volume gateway with cached volumes, the gateway stores a copy of frequently accessed
data locally. This allows you to provide low-latency access to your frequently accessed data while reducing your dependency on on-
premises servers.
upvoted 2 times
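A minimal boto3 sketch of creating a cached volume on an existing volume gateway; the gateway ARN, volume size, and network interface are placeholders:

    import boto3

    storagegateway = boto3.client("storagegateway")

    # Cached volumes keep the full dataset in Amazon S3 and a local cache of
    # frequently read data, exposed to servers over iSCSI.
    storagegateway.create_cached_iscsi_volume(
        GatewayARN="arn:aws:storagegateway:us-east-1:111122223333:gateway/sgw-12A3456B",
        VolumeSizeInBytes=500 * 1024**3,  # 500 GiB volume
        TargetName="app-volume",
        NetworkInterfaceId="10.0.1.20",   # gateway network interface that serves iSCSI
        ClientToken="app-volume-1",
    )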
Question #606 Topic 1
A solutions architect is designing an application that will allow business users to upload objects to Amazon S3. The solution needs to maximize
object durability. Objects also must be readily available at any time and for any length of time. Users will access objects frequently within the first
30 days after the objects are uploaded, but users are much less likely to access objects that are older than 30 days.
A. Store all the objects in S3 Standard with an S3 Lifecycle rule to transition the objects to S3 Glacier after 30 days.
B. Store all the objects in S3 Standard with an S3 Lifecycle rule to transition the objects to S3 Standard-Infrequent Access (S3 Standard-IA)
after 30 days.
C. Store all the objects in S3 Standard with an S3 Lifecycle rule to transition the objects to S3 One Zone-Infrequent Access (S3 One Zone-IA)
after 30 days.
D. Store all the objects in S3 Intelligent-Tiering with an S3 Lifecycle rule to transition the objects to S3 Standard-Infrequent Access (S3
Standard-IA) after 30 days.
Correct Answer: B
Before you transition objects to S3 Standard-IA or S3 One Zone-IA, you must store them for at least 30 days in Amazon S3. For example,
you cannot create a Lifecycle rule to transition objects to the S3 Standard-IA storage class one day after you create them. Amazon S3
doesn't support this transition within the first 30 days because newer objects are often accessed more frequently or deleted sooner than
is suitable for S3 Standard-IA or S3 One Zone-IA storage.
Similarly, if you are transitioning noncurrent objects (in versioned buckets), you can transition only objects that are at least 30 days
noncurrent to S3 Standard-IA or S3 One Zone-IA storage.
https://docs.aws.amazon.com/AmazonS3/latest/userguide/lifecycle-transition-general-considerations.html
upvoted 1 times
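A minimal boto3 sketch of the lifecycle rule in option B; the bucket name is a placeholder:

    import boto3

    s3 = boto3.client("s3")

    s3.put_bucket_lifecycle_configuration(
        Bucket="business-uploads",  # placeholder bucket
        LifecycleConfiguration={
            "Rules": [{
                "ID": "standard-to-standard-ia",
                "Status": "Enabled",
                "Filter": {"Prefix": ""},  # apply to all objects
                "Transitions": [{"Days": 30, "StorageClass": "STANDARD_IA"}],
            }]
        },
    )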
A company has migrated a two-tier application from its on-premises data center to the AWS Cloud. The data tier is a Multi-AZ deployment of
Amazon RDS for Oracle with 12 TB of General Purpose SSD Amazon Elastic Block Store (Amazon EBS) storage. The application is designed to
process and store documents in the database as binary large objects (blobs) with an average document size of 6 MB.
The database size has grown over time, reducing the performance and increasing the cost of storage. The company must improve the database
performance and needs a solution that is highly available and resilient.
A. Reduce the RDS DB instance size. Increase the storage capacity to 24 TiB. Change the storage type to Magnetic.
B. Increase the RDS DB instance size. Increase the storage capacity to 24 TiB. Change the storage type to Provisioned IOPS.
C. Create an Amazon S3 bucket. Update the application to store documents in the S3 bucket. Store the object metadata in the existing
database.
D. Create an Amazon DynamoDB table. Update the application to use DynamoDB. Use AWS Database Migration Service (AWS DMS) to migrate
data from the Oracle database to DynamoDB.
Correct Answer: C
A company has an application that serves clients that are deployed in more than 20,000 retail storefront locations around the world. The
application consists of backend web services that are exposed over HTTPS on port 443. The application is hosted on Amazon EC2 instances
behind an Application Load Balancer (ALB). The retail locations communicate with the web application over the public internet. The company
allows each retail location to register the IP address that the retail location has been allocated by its local ISP.
The company's security team recommends to increase the security of the application endpoint by restricting access to only the IP addresses
registered by the retail locations.
A. Associate an AWS WAF web ACL with the ALB. Use IP rule sets on the ALB to filter traffic. Update the IP addresses in the rule to include the
registered IP addresses.
B. Deploy AWS Firewall Manager to manage the ALB. Configure firewall rules to restrict traffic to the ALB. Modify the firewall rules to include the registered IP addresses.
C. Store the IP addresses in an Amazon DynamoDB table. Configure an AWS Lambda authorization function on the ALB to validate that
incoming requests are from the registered IP addresses.
D. Configure the network ACL on the subnet that contains the public interface of the ALB. Update the ingress rules on the network ACL with
entries for each of the registered IP addresses.
Correct Answer: A
A company is building a data analysis platform on AWS by using AWS Lake Formation. The platform will ingest data from different sources such
as Amazon S3 and Amazon RDS. The company needs a secure solution to prevent access to portions of the data that contain sensitive
information.
Which solution will meet these requirements with the LEAST operational overhead?
A. Create an IAM role that includes permissions to access Lake Formation tables.
C. Create an AWS Lambda function that removes sensitive information before Lake Formation ingests the data.
D. Create an AWS Lambda function that periodically queries and removes sensitive information from Lake Formation tables.
Correct Answer: C
Lake Formation data filters allow restricting access to rows or cells in data tables based on conditions. This allows preventing access to
sensitive data.
Data filters are implemented within Lake Formation and do not require additional coding or Lambda functions.
Lambda functions to pre-process data or purge tables would require ongoing development and maintenance.
IAM roles only provide user-level permissions, not row or cell level security.
Data filters give granular access control over Lake Formation data with minimal configuration, avoiding complex custom code.
upvoted 2 times
Data filters are a feature of Lake Formation that allow you to restrict access to data based on row and column values. This can be used to
implement row-level security and cell-level security.
To implement row-level security, you would create a data filter that only allows users to access rows where the values in certain columns
meet certain criteria. For example, you could create a data filter that only allows users to access rows where the value in the customer_id
column matches the user's own customer ID.
upvoted 1 times
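A minimal boto3 sketch of a Lake Formation data cells filter that hides sensitive columns and restricts rows; the database, table, and column names are illustrative:

    import boto3

    lakeformation = boto3.client("lakeformation")

    lakeformation.create_data_cells_filter(
        TableData={
            "TableCatalogId": "111122223333",  # the account that owns the Data Catalog
            "DatabaseName": "analytics",
            "TableName": "customers",
            "Name": "hide-sensitive-cells",
            # Row-level security: only non-restricted rows are visible.
            "RowFilter": {"FilterExpression": "is_restricted = false"},
            # Cell-level security: expose every column except the sensitive ones.
            "ColumnWildcard": {"ExcludedColumnNames": ["ssn", "credit_card_number"]},
        }
    )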
Question #610 Topic 1
A company deploys Amazon EC2 instances that run in a VPC. The EC2 instances load source data into Amazon S3 buckets so that the data can be
processed in the future. According to compliance laws, the data must not be transmitted over the public internet. Servers in the company's on-
premises data center will consume the output from an application that runs on the EC2 instances.
A. Deploy an interface VPC endpoint for Amazon EC2. Create an AWS Site-to-Site VPN connection between the company and the VPC.
B. Deploy a gateway VPC endpoint for Amazon S3. Set up an AWS Direct Connect connection between the on-premises network and the VPC.
C. Set up an AWS Transit Gateway connection from the VPC to the S3 buckets. Create an AWS Site-to-Site VPN connection between the
company and the VPC.
D. Set up proxy EC2 instances that have routes to NAT gateways. Configure the proxy EC2 instances to fetch S3 data and feed the application
instances.
Correct Answer: B
A company has an application with a REST-based interface that allows data to be received in near-real time from a third-party vendor. Once
received, the application processes and stores the data for further analysis. The application is running on Amazon EC2 instances.
The third-party vendor has received many 503 Service Unavailable Errors when sending data to the application. When the data volume spikes, the
compute capacity reaches its maximum limit and the application is unable to process all requests.
Which design should a solutions architect recommend to provide a more scalable solution?
A. Use Amazon Kinesis Data Streams to ingest the data. Process the data using AWS Lambda functions.
B. Use Amazon API Gateway on top of the existing application. Create a usage plan with a quota limit for the third-party vendor.
C. Use Amazon Simple Notification Service (Amazon SNS) to ingest the data. Put the EC2 instances in an Auto Scaling group behind an
Application Load Balancer.
D. Repackage the application as a container. Deploy the application using Amazon Elastic Container Service (Amazon ECS) using the EC2
launch type with an Auto Scaling group.
Correct Answer: A
Kinesis Data Streams provides an auto-scaling stream that can handle large amounts of streaming data ingestion and throughput. This
removes the bottlenecks around receiving the data.
AWS Lambda can process and store the data in a scalable serverless manner, avoiding EC2 capacity limits.
API Gateway adds API management capabilities but does not improve the underlying scalability of the EC2 application.
SNS is for event publishing/notifications, not large scale data ingestion. ECS still relies on EC2 capacity.
upvoted 1 times
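A minimal boto3 sketch of the ingestion path in option A; the stream and function names are placeholders, and in practice the REST interface would put records into the stream, for example through an API Gateway service integration:

    import boto3

    kinesis = boto3.client("kinesis")
    lambda_client = boto3.client("lambda")

    # On-demand mode scales shard capacity automatically for unpredictable spikes.
    kinesis.create_stream(
        StreamName="vendor-ingest",
        StreamModeDetails={"StreamMode": "ON_DEMAND"},
    )
    kinesis.get_waiter("stream_exists").wait(StreamName="vendor-ingest")
    stream_arn = kinesis.describe_stream(StreamName="vendor-ingest")["StreamDescription"]["StreamARN"]

    # Lambda polls the stream and processes batches, removing the EC2 capacity limit.
    lambda_client.create_event_source_mapping(
        EventSourceArn=stream_arn,
        FunctionName="process-vendor-records",  # placeholder function
        StartingPosition="LATEST",
        BatchSize=100,
    )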
A company has an application that runs on Amazon EC2 instances in a private subnet. The application needs to process sensitive information
from an Amazon S3 bucket. The application must not use the internet to connect to the S3 bucket.
A. Configure an internet gateway. Update the S3 bucket policy to allow access from the internet gateway. Update the application to use the
new internet gateway.
B. Configure a VPN connection. Update the S3 bucket policy to allow access from the VPN connection. Update the application to use the new
VPN connection.
C. Configure a NAT gateway. Update the S3 bucket policy to allow access from the NAT gateway. Update the application to use the new NAT
gateway.
D. Configure a VPC endpoint. Update the S3 bucket policy to allow access from the VPC endpoint. Update the application to use the new VPC
endpoint.
Correct Answer: D
VPC endpoints allow private connectivity from VPCs to AWS services like S3 without using an internet gateway.
The application can connect to S3 through the VPC endpoint while remaining in the private subnet, without internet access.
upvoted 1 times
Option A (internet gateway) would involve exposing the S3 bucket to the internet, which is not recommended for security reasons.
Option B (VPN connection) would require additional setup and would still involve traffic going over the internet.
Option C (NAT gateway) is used for outbound internet access from private subnets, not for accessing S3 without the internet.
upvoted 2 times
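A minimal boto3 sketch of option D: a gateway endpoint for S3 plus a bucket policy that rejects requests that do not arrive through that endpoint. The VPC, route table, bucket name, and endpoint ID are placeholders:

    import json
    import boto3

    ec2 = boto3.client("ec2")
    s3 = boto3.client("s3")

    endpoint = ec2.create_vpc_endpoint(
        VpcEndpointType="Gateway",
        VpcId="vpc-0123456789abcdef0",
        ServiceName="com.amazonaws.us-east-1.s3",
        RouteTableIds=["rtb-0a1b2c3d4e5f67890"],  # route table of the private subnet
    )["VpcEndpoint"]

    # Deny any access to the bucket that does not come through the VPC endpoint.
    policy = {
        "Version": "2012-10-17",
        "Statement": [{
            "Sid": "DenyOutsideVpce",
            "Effect": "Deny",
            "Principal": "*",
            "Action": "s3:*",
            "Resource": ["arn:aws:s3:::sensitive-data-bucket", "arn:aws:s3:::sensitive-data-bucket/*"],
            "Condition": {"StringNotEquals": {"aws:SourceVpce": endpoint["VpcEndpointId"]}},
        }],
    }
    s3.put_bucket_policy(Bucket="sensitive-data-bucket", Policy=json.dumps(policy))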
Question #613 Topic 1
A company uses Amazon Elastic Kubernetes Service (Amazon EKS) to run a container application. The EKS cluster stores sensitive information in
the Kubernetes secrets object. The company wants to ensure that the information is encrypted.
Which solution will meet these requirements with the LEAST operational overhead?
A. Use the container application to encrypt the information by using AWS Key Management Service (AWS KMS).
B. Enable secrets encryption in the EKS cluster by using AWS Key Management Service (AWS KMS).
C. Implement an AWS Lambda function to encrypt the information by using AWS Key Management Service (AWS KMS).
D. Use AWS Systems Manager Parameter Store to encrypt the information by using AWS Key Management Service (AWS KMS).
Correct Answer: B
When you enable secrets encryption in the EKS cluster, AWS KMS encrypts the secrets before they are stored in the EKS cluster. You do not
need to make any changes to your container application or implement any additional Lambda functions.
upvoted 1 times
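Enabling envelope encryption of Kubernetes secrets on an existing cluster is a single API call; a minimal boto3 sketch follows, with the cluster name and KMS key ARN as placeholders:

    import boto3

    eks = boto3.client("eks")

    # Kubernetes secrets in the cluster are envelope-encrypted with the KMS key;
    # no application changes or extra Lambda functions are needed.
    eks.associate_encryption_config(
        clusterName="prod-cluster",
        encryptionConfig=[{
            "resources": ["secrets"],
            "provider": {"keyArn": "arn:aws:kms:us-east-1:111122223333:key/1234abcd-12ab-34cd-56ef-1234567890ab"},
        }],
    )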
Question #614 Topic 1
A company is designing a new multi-tier web application that consists of the following components:
• Web and application servers that run on Amazon EC2 instances as part of Auto Scaling groups
• An Amazon RDS DB instance for data storage
A solutions architect needs to limit access to the application servers so that only the web servers can access them.
A. Deploy AWS PrivateLink in front of the application servers. Configure the network ACL to allow only the web servers to access the
application servers.
B. Deploy a VPC endpoint in front of the application servers. Configure the security group to allow only the web servers to access the
application servers.
C. Deploy a Network Load Balancer with a target group that contains the application servers' Auto Scaling group. Configure the network ACL to
allow only the web servers to access the application servers.
D. Deploy an Application Load Balancer with a target group that contains the application servers' Auto Scaling group. Configure the security
group to allow only the web servers to access the application servers.
Correct Answer: D
An Application Load Balancer (ALB) allows directing traffic to the application servers and provides access control via security groups.
Security groups act as a firewall at the instance level and can control access to the application servers from the web servers.
Network ACLs work at the subnet level and are less flexible than security groups for instance-level access control.
VPC endpoints are used to provide private access to AWS services, not for access between EC2 instances.
AWS PrivateLink provides private connectivity between VPCs, which is not required in this single VPC scenario.
upvoted 3 times
Option A: AWS PrivateLink is a service that allows you to connect your VPC to private services that are owned by AWS or by other AWS
customers. It is not designed to be used to limit access to resources within the same VPC.
Option C: A Network Load Balancer can be used to distribute traffic across multiple application servers, but it does not provide a way to
limit access to the application servers.
Option D: An Application Load Balancer can be used to distribute traffic across multiple application servers, but it does not provide a way
to limit access to the application servers.
upvoted 2 times
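A minimal boto3 sketch of the security group relationship in option D: the application tier accepts traffic only when its source is the web tier's security group. The VPC ID, group names, and port are placeholders:

    import boto3

    ec2 = boto3.client("ec2")

    web_sg = ec2.create_security_group(
        GroupName="web-tier", Description="Web servers", VpcId="vpc-0123456789abcdef0"
    )["GroupId"]
    app_sg = ec2.create_security_group(
        GroupName="app-tier", Description="Application servers", VpcId="vpc-0123456789abcdef0"
    )["GroupId"]

    # Reference the web tier's security group as the source instead of IP ranges,
    # so only the web servers can reach the application servers.
    ec2.authorize_security_group_ingress(
        GroupId=app_sg,
        IpPermissions=[{
            "IpProtocol": "tcp",
            "FromPort": 8080,
            "ToPort": 8080,
            "UserIdGroupPairs": [{"GroupId": web_sg}],
        }],
    )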
Question #615 Topic 1
A company runs a critical, customer-facing application on Amazon Elastic Kubernetes Service (Amazon EKS). The application has a microservices
architecture. The company needs to implement a solution that collects, aggregates, and summarizes metrics and logs from the application in a
centralized location.
A. Run the Amazon CloudWatch agent in the existing EKS cluster. View the metrics and logs in the CloudWatch console.
B. Run AWS App Mesh in the existing EKS cluster. View the metrics and logs in the App Mesh console.
C. Configure AWS CloudTrail to capture data events. Query CloudTrail by using Amazon OpenSearch Service.
D. Configure Amazon CloudWatch Container Insights in the existing EKS cluster. View the metrics and logs in the CloudWatch console.
Correct Answer: D
CloudWatch Container Insights automatically collects metrics and logs from containers running in EKS clusters. This provides visibility into
resource utilization, application performance, and microservice interactions.
The metrics and logs are stored in CloudWatch Logs and CloudWatch metrics for central access.
The CloudWatch console allows querying, filtering, and visualizing the metrics and logs in one centralized place.
upvoted 1 times
Amazon CloudWatch Application Insights facilitates observability for your applications and underlying AWS resources. It helps you set up
the best monitors for your application resources to continuously analyze data for signs of problems with your applications.
upvoted 2 times
A company has deployed its newest product on AWS. The product runs in an Auto Scaling group behind a Network Load Balancer. The company
stores the product’s objects in an Amazon S3 bucket.
The company recently experienced malicious attacks against its systems. The company needs a solution that continuously monitors for malicious
activity in the AWS account, workloads, and access patterns to the S3 bucket. The solution must also report suspicious activity and display the
information on a dashboard.
C. Configure Amazon GuardDuty to monitor and report findings to AWS Security Hub.
Correct Answer: C
Amazon GuardDuty is a threat detection service that continuously monitors for malicious activity and unauthorized behavior. It analyzes
AWS CloudTrail, VPC Flow Logs, and DNS logs.
GuardDuty can detect threats like instance or S3 bucket compromise, malicious IP addresses, or unusual API calls.
Findings can be sent to AWS Security Hub which provides a centralized security dashboard and alerts.
Amazon Macie and Amazon Inspector do not monitor the breadth of activity that GuardDuty does. They focus more on data security and
application vulnerabilities respectively.
AWS Config monitors for resource configuration changes, not malicious activity.
upvoted 2 times
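A minimal boto3 sketch of enabling both services in the account; the detector settings are illustrative, and S3 protection is turned on so access patterns to the bucket are analyzed. Once both services are enabled, GuardDuty findings flow into the Security Hub dashboard:

    import boto3

    guardduty = boto3.client("guardduty")
    securityhub = boto3.client("securityhub")

    # Turn on GuardDuty, including S3 protection, so the account, workloads,
    # and S3 access patterns are continuously monitored.
    guardduty.create_detector(
        Enable=True,
        FindingPublishingFrequency="FIFTEEN_MINUTES",
        DataSources={"S3Logs": {"Enable": True}},
    )

    # Security Hub aggregates the findings and provides the dashboard view.
    securityhub.enable_security_hub(EnableDefaultStandards=True)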
A company wants to migrate an on-premises data center to AWS. The data center hosts a storage server that stores data in an NFS-based file
system. The storage server holds 200 GB of data. The company needs to migrate the data without interruption to existing services. Multiple
resources in AWS must be able to access the data by using the NFS protocol.
Which combination of steps will meet these requirements MOST cost-effectively? (Choose two.)
D. Manually use an operating system copy command to push the data into the AWS destination.
E. Install an AWS DataSync agent in the on-premises data center. Use a DataSync task between the on-premises location and AWS.
Correct Answer: AB
A company wants to use Amazon FSx for Windows File Server for its Amazon EC2 instances that have an SMB file share mounted as a volume in
the us-east-1 Region. The company has a recovery point objective (RPO) of 5 minutes for planned system maintenance or unplanned service
disruptions. The company needs to replicate the file system to the us-west-2 Region. The replicated data must not be deleted by any user for 5
years.
A. Create an FSx for Windows File Server file system in us-east-1 that has a Single-AZ 2 deployment type. Use AWS Backup to create a daily
backup plan that includes a backup rule that copies the backup to us-west-2. Configure AWS Backup Vault Lock in compliance mode for a
target vault in us-west-2. Configure a minimum duration of 5 years.
B. Create an FSx for Windows File Server file system in us-east-1 that has a Multi-AZ deployment type. Use AWS Backup to create a daily
backup plan that includes a backup rule that copies the backup to us-west-2. Configure AWS Backup Vault Lock in governance mode for a
target vault in us-west-2. Configure a minimum duration of 5 years.
C. Create an FSx for Windows File Server file system in us-east-1 that has a Multi-AZ deployment type. Use AWS Backup to create a daily
backup plan that includes a backup rule that copies the backup to us-west-2. Configure AWS Backup Vault Lock in compliance mode for a
target vault in us-west-2. Configure a minimum duration of 5 years.
D. Create an FSx for Windows File Server file system in us-east-1 that has a Single-AZ 2 deployment type. Use AWS Backup to create a daily
backup plan that includes a backup rule that copies the backup to us-west-2. Configure AWS Backup Vault Lock in governance mode for a
target vault in us-west-2. Configure a minimum duration of 5 years.
Correct Answer: C
A solutions architect is designing a security solution for a company that wants to provide developers with individual AWS accounts through AWS
Organizations, while also maintaining standard security controls. Because the individual developers will have AWS account root user-level access
to their own accounts, the solutions architect wants to ensure that the mandatory AWS CloudTrail configuration that is applied to new developer
accounts is not modified.
A. Create an IAM policy that prohibits changes to CloudTrail. and attach it to the root user.
B. Create a new trail in CloudTrail from within the developer accounts with the organization trails option enabled.
C. Create a service control policy (SCP) that prohibits changes to CloudTrail, and attach it the developer accounts.
D. Create a service-linked role for CloudTrail with a policy condition that allows changes only from an Amazon Resource Name (ARN) in the
management account.
Correct Answer: C
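A minimal boto3 sketch of the SCP in option C, created in the management account and attached to the OU that holds the developer accounts (the OU ID is a placeholder). SCPs apply even to a member account's root user, which is why an IAM policy cannot enforce this:

    import json
    import boto3

    organizations = boto3.client("organizations")

    scp = {
        "Version": "2012-10-17",
        "Statement": [{
            "Sid": "DenyCloudTrailChanges",
            "Effect": "Deny",
            "Action": [
                "cloudtrail:StopLogging",
                "cloudtrail:DeleteTrail",
                "cloudtrail:UpdateTrail",
                "cloudtrail:PutEventSelectors",
            ],
            "Resource": "*",
        }],
    }

    policy = organizations.create_policy(
        Name="deny-cloudtrail-changes",
        Description="Prevent developers from altering the mandatory CloudTrail configuration",
        Type="SERVICE_CONTROL_POLICY",
        Content=json.dumps(scp),
    )["Policy"]["PolicySummary"]

    organizations.attach_policy(
        PolicyId=policy["Id"],
        TargetId="ou-abcd-11111111",  # placeholder OU containing the developer accounts
    )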
A company is planning to deploy a business-critical application in the AWS Cloud. The application requires durable storage with consistent, low-
latency performance.
Which type of storage should a solutions architect recommend to meet these requirements?
C. Provisioned IOPS SSD Amazon Elastic Block Store (Amazon EBS) volume
D. Throughput Optimized HDD Amazon Elastic Block Store (Amazon EBS) volume
Correct Answer: C
An online photo-sharing company stores its photos in an Amazon S3 bucket that exists in the us-west-1 Region. The company needs to store a
copy of all new photos in the us-east-1 Region.
Which solution will meet this requirement with the LEAST operational effort?
A. Create a second S3 bucket in us-east-1. Use S3 Cross-Region Replication to copy photos from the existing S3 bucket to the second S3
bucket.
B. Create a cross-origin resource sharing (CORS) configuration of the existing S3 bucket. Specify us-east-1 in the CORS rule's AllowedOrigin
element.
C. Create a second S3 bucket in us-east-1 across multiple Availability Zones. Create an S3 Lifecycle rule to save photos into the second S3
bucket.
D. Create a second S3 bucket in us-east-1. Configure S3 event notifications on object creation and update events to invoke an AWS Lambda
function to copy photos from the existing S3 bucket to the second S3 bucket.
Correct Answer: A
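A minimal boto3 sketch of the replication configuration in option A; both buckets must already have versioning enabled, and the bucket names and IAM role are placeholders:

    import boto3

    s3 = boto3.client("s3")

    s3.put_bucket_replication(
        Bucket="photos-us-west-1",
        ReplicationConfiguration={
            "Role": "arn:aws:iam::111122223333:role/s3-replication-role",  # placeholder role
            "Rules": [{
                "ID": "copy-new-photos-to-us-east-1",
                "Status": "Enabled",
                "Priority": 1,
                "Filter": {"Prefix": ""},  # replicate all new objects
                "DeleteMarkerReplication": {"Status": "Disabled"},
                "Destination": {"Bucket": "arn:aws:s3:::photos-us-east-1"},
            }],
        },
    )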
A company is creating a new web application for its subscribers. The application will consist of a static single page and a persistent database
layer. The application will have millions of users for 4 hours in the morning, but the application will have only a few thousand users during the rest
of the day. The company's data architects have requested the ability to rapidly evolve their schema.
Which solutions will meet these requirements and provide the MOST scalability? (Choose two.)
B. Deploy Amazon Aurora as the database solution. Choose the serverless DB engine mode.
C. Deploy Amazon DynamoDB as the database solution. Ensure that DynamoDB auto scaling is enabled.
D. Deploy the static content into an Amazon S3 bucket. Provision an Amazon CloudFront distribution with the S3 bucket as the origin.
E. Deploy the web servers for static content across a fleet of Amazon EC2 instances in Auto Scaling groups. Configure the instances to
periodically refresh the content from an Amazon Elastic File System (Amazon EFS) volume.
Correct Answer: CD
DynamoDB auto scaling allows the database to scale up and down dynamically based on traffic patterns. This handles the large spike in
traffic in the mornings and lower traffic later in the day.
S3 combined with CloudFront provides a highly scalable infrastructure for the static content. CloudFront caching improves performance.
Aurora serverless could be an option but may not scale as seamlessly as DynamoDB to the very high spike in users.
EC2 Auto Scaling groups add complexity compared to S3/CloudFront for static content hosting.
upvoted 1 times
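A minimal boto3 sketch of registering a DynamoDB table with Application Auto Scaling and attaching a target tracking policy for reads; the table name and capacity bounds are illustrative, and write capacity would be configured the same way:

    import boto3

    autoscaling = boto3.client("application-autoscaling")

    autoscaling.register_scalable_target(
        ServiceNamespace="dynamodb",
        ResourceId="table/Subscribers",
        ScalableDimension="dynamodb:table:ReadCapacityUnits",
        MinCapacity=5,
        MaxCapacity=40000,
    )

    # Track 70% consumed read capacity: scale up for the morning surge and
    # back down for the quiet remainder of the day.
    autoscaling.put_scaling_policy(
        PolicyName="subscribers-read-target-tracking",
        ServiceNamespace="dynamodb",
        ResourceId="table/Subscribers",
        ScalableDimension="dynamodb:table:ReadCapacityUnits",
        PolicyType="TargetTrackingScaling",
        TargetTrackingScalingPolicyConfiguration={
            "TargetValue": 70.0,
            "PredefinedMetricSpecification": {"PredefinedMetricType": "DynamoDBReadCapacityUtilization"},
        },
    )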
A company uses Amazon API Gateway to manage its REST APIs that third-party service providers access. The company must protect the REST
APIs from SQL injection and cross-site scripting attacks.
What is the MOST operationally efficient solution that meets these requirements?
C. Set up API Gateway with an Amazon CloudFront distribution. Configure AWS Shield in CloudFront.
D. Set up API Gateway with an Amazon CloudFront distribution. Configure AWS WAF in CloudFront.
Correct Answer: A
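Whichever resource the web ACL is attached to, the API stage directly or a CloudFront distribution in front of it, the SQL injection and cross-site scripting protection itself comes from AWS WAF managed rule groups. A minimal boto3 sketch, with the API ID, stage name, and metric names as placeholders:

    import boto3

    wafv2 = boto3.client("wafv2")

    def managed_rule(name, priority):
        # Reference an AWS managed rule group; the Common rule set covers XSS
        # and the SQLi rule set covers SQL injection.
        return {
            "Name": name,
            "Priority": priority,
            "Statement": {"ManagedRuleGroupStatement": {"VendorName": "AWS", "Name": name}},
            "OverrideAction": {"None": {}},
            "VisibilityConfig": {
                "SampledRequestsEnabled": True,
                "CloudWatchMetricsEnabled": True,
                "MetricName": name,
            },
        }

    web_acl = wafv2.create_web_acl(
        Name="rest-api-protection",
        Scope="REGIONAL",  # REGIONAL for an API Gateway stage; CLOUDFRONT for a distribution
        DefaultAction={"Allow": {}},
        VisibilityConfig={
            "SampledRequestsEnabled": True,
            "CloudWatchMetricsEnabled": True,
            "MetricName": "rest-api-protection",
        },
        Rules=[
            managed_rule("AWSManagedRulesCommonRuleSet", 0),
            managed_rule("AWSManagedRulesSQLiRuleSet", 1),
        ],
    )["Summary"]

    # Associate the web ACL with the API Gateway stage (placeholder API ID and stage).
    wafv2.associate_web_acl(
        WebACLArn=web_acl["ARN"],
        ResourceArn="arn:aws:apigateway:us-east-1::/restapis/a1b2c3d4e5/stages/prod",
    )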