AWS Academy Cloud Foundations Extended Notes, Modules 1-10


AWS Academy Cloud Foundations (Canvas)

TABLE OF CONTENTS (CTRL + Click to jump to section)

• Module 1: Cloud Concepts Overview


• Section 1: Introduction to cloud computing
• Section 2: Advantages of cloud computing
• Section 3: Introduction to AWS
• Section 4: AWS Cloud Adoption Framework (AWS CAF)
• Module 2: Cloud Economics and Billing
• Section 1: Fundamentals of pricing
• Section 2: Total cost of ownership
• Section 3: Billing
• Section 4: Technical support
• Module 3: AWS Global Infrastructure Overview
• Section 1: AWS global infrastructure
• Section 2: AWS services and service category overview
• Module 4: AWS Cloud Security
• Section 1: AWS shared responsibility model
• Section 2: AWS Identity and Access Management (IAM)
• Section 3: Securing a new AWS account
• Section 4: Securing accounts
• Section 5: Securing data on AWS
• Section 6: Working to ensure compliance
• Module 5: Networking and Content Delivery
• Section 1: Networking Basics
• Section 2: Amazon VPC
• Section 3: VPC networking
• Section 4: VPC security
• Section 5: Amazon Route 53
• Section 6: Amazon CloudFront
• Module 6: Compute
• Section 1: Compute services overview
• Section 2: Amazon EC2
• Section 3: Amazon EC2 cost optimization
• Section 4: Container services
• Section 5: Introduction to AWS Lambda
• Section 6: Introduction to AWS Elastic Beanstalk
• Module 7: Storage
• Section 1: Amazon Elastic Block Store (Amazon EBS)
• Section 2: Amazon Simple Storage Service (Amazon S3)
• Section 3: Amazon Elastic File System (Amazon EFS)
• Section 4: Amazon S3 Glacier
• Module 8: Databases
• Section 1: Amazon Relational Database Service
• Section 2: Amazon DynamoDB
• Section 3: Amazon Redshift
• Section 4: Amazon Aurora
• Module 9: Cloud Architecture
• Section 1: AWS Well-Architected Framework
• Section 2: Reliability and availability
• Section 3: AWS Trusted Advisor
• Module 10: Auto Scaling and Monitoring
• Section 1: Elastic Load Balancing
• Section 2: Amazon CloudWatch
• Section 3: Amazon EC2 Auto Scaling

Module 1: Cloud Concepts Overview


Section 1: Introduction to cloud computing

• Cloud computing is the on-demand delivery of compute power, database, storage, applications, and
other IT resources via the internet with pay-as-you-go pricing.
• Cloud computing enables you to stop thinking of your infrastructure as hardware, and instead
think of (and use) it as software.

Traditional computing model:


• Infrastructure as hardware
• Hardware solutions:
• Require space, staff, physical security, planning, capital expenditure
• Have a long hardware procurement cycle
• Require you to provision capacity by guessing theoretical maximum peaks

Cloud computing model:


• Infrastructure as software
• Software solutions:
• Are flexible
• Can change more quickly, easily, and cost-effectively than hardware solutions
• Eliminate the undifferentiated heavy-lifting tasks

Cloud service models:


• IaaS – Infrastructure as a service (highest control)
• PaaS – Platform as a service (less control)
• SaaS – Software as a service (even less control)
Cloud computing deployment models:
• Cloud
• Hybrid
• On-premises (private cloud)

Security:
• Security groups
• Network ACLs
• IAM (auth)
Networking:
• Elastic Load Balancing
• Amazon VPC
Compute:
• AMI (Amazon Machine Images)
• Amazon EC2 instances
Storage and database:
• Amazon EBS
• Amazon EFS
• Amazon S3
• Amazon RDS

Section 2: Advantages of cloud computing

Advantage 1: Trade capital expense for variable expense


• Data center investment based on forecast
• Pay only for the amount you consume

Advantage 2: Massive economies of scale


• Because of aggregate usage from all customers, AWS can achieve higher economies of scale and
pass savings on to customers.

Advantage 3: Stop guessing capacity


• Overestimated server capacity vs Underestimated server capacity
• Scaling on demand

Advantage 4: Increase speed and agility


• Weeks between wanting resources and having resources
• vs
• Minutes between wanting resources and having resources

Advantage 5: Stop spending money on running and maintaining data centers

Advantage 6: Go global in minutes


• Deploy in multiple regions instantly

Section 3: Introduction to AWS

What are web services?


• A web service is any piece of software that makes itself available over the internet and uses a
standardized format – such as Extensible Markup Language (XML) or JavaScript Object Notation
(JSON) – for the request and the response of an application programming interface (API)
interaction.

What is AWS?
• AWS is a secure cloud platform that offers a broad set of global cloud-based products.
• AWS provides you with on-demand access to compute, storage, network, database, and other IT
resources and management tools.
• AWS offers flexibility.
• You pay only for the individual services you need, for as long as you use them.
• AWS services work together like building blocks.
• The service you select depends on your business goals and technology requirements.

Three ways to interact with AWS:


• AWS Management Console
• Command Line Interface (AWS CLI)
• Software Development Kits (SDKs)

Section 4: AWS Cloud Adoption Framework (AWS CAF)

• AWS CAF provides guidance and best practices to help organizations build a comprehensive
approach to cloud computing across the organization and throughout the IT lifecycle to accelerate
successful cloud adoption.
• AWS CAF is organized into six perspectives.
• Perspectives consist of sets of capabilities.

Business capabilities:
• Business
• We must ensure that IT is aligned with business needs, and that IT investments can be traced to
demonstrable business results.
• IT finance
• IT strategy
• Benefits realization
• Business risk management
• Business managers, finance managers, budget owners, and strategy stakeholders
• People
• We must prioritize training, staffing, and organizational changes to build an agile organization.
• Resource management
• Incentive management
• Career management
• Training management
• Organizational change management
• Human resources, staffing, and people managers.
• Governance
• We must ensure that skills and processes align IT strategy and goals with business strategy and
goals so the organization can maximize the business value of its IT investment and minimize
business risks.
• Portfolio management
• Program and project management
• Business performance measurement
• License management
• CIO, program managers, enterprise architects, business analysts, and portfolio managers

Technical capabilities:
• Platform
• We must understand and communicate the nature of IT systems and their relationships. We
must be able to describe the architecture of the target state environment in detail.
• Compute provisioning
• Network provisioning
• Storage provisioning
• Database provisioning
• Systems and solution architecture
• Application development
• CIO, IT managers, and solutions architects.
• Security
• We must ensure that the organization meets its security objectives.
• Identity and access management
• Detective control
• Infrastructure security
• Data protection
• Incident response
• CISO, IT security managers, and IT security analysts
• Operations
• We align with and support the operations of the business, and define how day-to-day, quarter-
to-quarter, and year-to-year business will be conducted.
• Service monitoring
• Application performance monitoring
• Resource inventory management
• Release management/change management
• Reporting and analytics
• Business continuity/Disaster recovery
• IT service catalog
• IT operations managers and IT support managers

Module 2: Cloud Economics and Billing

Section 1: Fundamentals of pricing

Three fundamental drivers of cost with AWS:


• Compute
• Charged per hour or per second (per-second billing applies to Linux instances only)
• Varies by instance type
• Storage
• Charged typically per GB
• Data transfer
• Outbound is aggregated and charged
• Inbound has no charge (with some exceptions)
• Charged typically by GB

How do you pay for AWS?


• Pay for what you use
• Pay only for the services that you consume, with no large upfront expenses.
• Pay less when you reserve
• Invest in Reserved Instances (RIs):
• Save up to 75%
• Options
• All Upfront Reserved Instance (AURI) – largest discount
• Partial Upfront Reserved Instance (PURI) – lower discount
• No Upfront Payments Reserved Instance (NURI) – smallest discount
• Pay less when you use more and as AWS grows
• Realize volume-based discounts:
• Savings as usage increases
• Tiered pricing for services like Amazon Simple Storage Service (Amazon S3), Amazon Elastic
Block Store (Amazon EBS), or Amazon Elastic File System (Amazon EFS) – the more you use,
the less you pay per GB.
• Multiple storage services deliver lower storage costs based on needs.
• Pay even less as AWS grows:
• AWS focuses on lowering cost of doing business.
• This practice results in AWS passing savings from economies of scale to you.
• Since 2006, AWS has lowered pricing 75 times (as of September 2019).
• Future higher-performing resources replace current resources for no extra charge.
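The tiered-pricing idea above (the more you use, the less you pay per GB) can be sketched as a small calculator. The tier boundaries and per-GB rates below are hypothetical illustration values, not actual AWS prices:

```python
def tiered_cost(gb, tiers):
    """Compute monthly storage cost under volume-tiered pricing.

    tiers: list of (tier_size_gb, price_per_gb); a tier size of None
    means "all remaining usage".
    """
    cost, remaining = 0.0, gb
    for size, price in tiers:
        used = remaining if size is None else min(remaining, size)
        cost += used * price
        remaining -= used
        if remaining <= 0:
            break
    return cost

# Hypothetical tiers (NOT real AWS prices): first 50 TB, next 450 TB, the rest.
tiers = [(51_200, 0.023), (460_800, 0.022), (None, 0.021)]

print(tiered_cost(100, tiers))        # small usage pays only the top rate
print(tiered_cost(1_000_000, tiers))  # large usage blends in the cheaper tiers
```

The effective per-GB price falls as usage grows, which is exactly the "pay less when you use more" behavior the notes describe.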
Custom pricing:
• Meet varying needs through custom pricing
• Available for high-volume projects with unique requirements

AWS Free Tier:


• Enables you to gain free hands-on experience with the AWS platform, products, and services. Free
for 1 year for new customers.
1. Sign up for an AWS account
2. Learn with 10-minute tutorials
3. Start building with AWS

Services with no charge:


• Amazon VPC
• Enables you to provision a logically isolated section of the AWS Cloud where you launch AWS
resources in a virtual network that you define.
• Elastic Beanstalk
• An even easier way for you to quickly deploy and manage applications in the AWS Cloud.
• Auto Scaling
• Automatically adds or removes resources according to conditions you define. The resources you
are using increase seamlessly during demand spikes to maintain performance and decrease
automatically during demand lulls to minimize costs.
• AWS CloudFormation
• Gives developers and system administrators an easy way to create a collection of related AWS
resources and provision them in an orderly and predictable fashion.
• AWS Identity and Access Management (IAM)
• Controls your users’ access to AWS services and resources.
• AWS OpsWorks
• Application management service that makes it easy to deploy and operate applications of all
shapes and sizes.
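The Auto Scaling behavior described in the list above (add resources during demand spikes, remove them during lulls) can be sketched as a simple threshold rule. The CPU thresholds and instance limits here are arbitrary illustrative values, not AWS defaults:

```python
def scaling_decision(current_instances, cpu_utilization,
                     scale_out_at=70.0, scale_in_at=30.0,
                     minimum=1, maximum=10):
    """Toy threshold rule: grow the fleet when CPU is high, shrink it
    when CPU is low, and never leave the [minimum, maximum] bounds."""
    if cpu_utilization > scale_out_at and current_instances < maximum:
        return current_instances + 1   # demand spike: add capacity
    if cpu_utilization < scale_in_at and current_instances > minimum:
        return current_instances - 1   # demand lull: remove capacity
    return current_instances           # within bounds: no change

print(scaling_decision(2, 85.0))  # scale out
print(scaling_decision(2, 10.0))  # scale in
print(scaling_decision(2, 50.0))  # hold steady
```

Real Amazon EC2 Auto Scaling policies (covered in Module 10) evaluate CloudWatch alarms rather than raw readings, but the decision shape is the same.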

Key takeaways:
• There is no charge for:
• Inbound data transfer
• Data transfer between services within the same AWS region.
• Pay for what you use.
• Start and stop anytime.
• No long-term contracts are required.
• Some services are free, but the other AWS services that they provision might not be free.

Section 2: Total Cost of Ownership

On-premises versus cloud


• Traditional Infrastructure
• Equipment
• Resources and administration
• Contracts
• Cost
• AWS Cloud
• No upfront expense – pay for what you use
• Improve time to market and agility
• Scale up and down
• Self-service infrastructure

What is Total Cost of Ownership (TCO)?


• Total Cost of Ownership (TCO) is a financial estimate that helps identify the direct and indirect costs
of a system.
• Why use TCO?
• To compare the costs of running an entire infrastructure environment or specific workload on-
premises versus on AWS.
• To budget and build the business case for moving to the cloud.

TCO considerations:
• Server Costs
• Hardware: Servers, rack chassis, power distribution units (PDUs), top-of-rack (TOR) switches (and
maintenance)
• Software: Operating system, virtualization licenses (and maintenance)
• Facilities cost: Space, power, cooling
• Storage Costs
• Hardware: Storage disks, storage area network (SAN) or Fibre Channel (FC) switches
• Storage administration costs
• Facilities cost: space, power, cooling
• Network Costs
• Network hardware: LAN switches, load balancer bandwidth costs
• Network administration costs
• Facilities cost: space, power, cooling
• IT Labor Costs
• Server administration costs
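A TCO comparison is ultimately just summing these cost categories for each environment. The annual figures below are hypothetical (the notes list categories, not numbers), chosen only to show the shape of the calculation:

```python
# Hypothetical annual costs for the on-premises categories listed above.
on_premises = {
    "server hardware/software": 120_000,
    "storage": 40_000,
    "network": 25_000,
    "facilities (space, power, cooling)": 30_000,
    "IT labor (server administration)": 85_000,
}

# Hypothetical annual costs for an equivalent AWS deployment.
aws = {
    "compute": 90_000,
    "storage": 18_000,
    "outbound data transfer": 7_000,
}

tco_on_prem = sum(on_premises.values())
tco_aws = sum(aws.values())
savings = 100 * (tco_on_prem - tco_aws) / tco_on_prem
print(f"On-premises TCO: ${tco_on_prem:,}  AWS: ${tco_aws:,}  savings: {savings:.0f}%")
```

The AWS TCO Calculator mentioned below automates this kind of comparison with far more detailed cost models.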

On-premises versus all-in-cloud


• You could save up to 96 percent a year by moving your infrastructure to AWS.

AWS Simple Monthly Calculator


• Use the Simple Monthly Calculator to:
• Estimate monthly costs
• Identify opportunities to reduce monthly costs
• Use templates to compare services and deployment models
• http://calculator.s3.amazonaws.com/index.html

AWS TCO calculator


• Use the TCO Calculator to:
• Estimate cost savings
• Use detailed reports
• Modify assumptions

Hard benefits:
• Reduced spending on compute, storage, networking, security
• Reductions in hardware and software purchases (capex)
• Reductions in operational costs, backup, and disaster recovery
• Reduction in operations personnel

Soft Benefits:
• Reuse of services and applications that enable you to define (and redefine) solutions by using the
same cloud services.
• Increased developer productivity
• Improved customer satisfaction
• Agile business processes that can quickly respond to new and emerging opportunities
• Increase in global reach

Case study: Total Cost of Ownership


• Background
• Growing global company with over 200 locations
• 500 million customers, $3 billion annual revenue
• Challenge
• Meet demand to rapidly deploy new solutions
• Constantly upgrade aging equipment
• Criteria
• Broad solution to handle all workloads
• Ability to modify processes to improve efficiency and lower costs
• Eliminate busy work (such as patching software)
• Achieve a positive return on investment (ROI)
• Solution
• Moved their on-premises data center to AWS
• Eliminated 205 servers (90 percent)
• Moved nearly all applications to AWS
• Used 3-year Amazon EC2 Reserved Instances
• Results
• Resource Optimization
• Robust security compliance
• Enhanced disaster recovery
• Increased computing capacity
• Speed to market
• One day to provision new businesses
• Just minutes to push out a service
• Operational efficiency
• Continuous cost optimization and reduction
• Fulfills business goals:
• Growth
• Enhanced 24/7 business
• Operational efficiency

Section 3: Billing

• AWS Organizations is a free account management service that enables you to consolidate multiple
AWS accounts into an organization that you create and centrally manage. AWS Organizations
includes consolidated billing and account management capabilities that help you better meet the
budgetary, security, and compliance needs of your business.

Key features and benefits:


• Policy-based account management
• Group-based account management
• Application programming interfaces (APIs) that automate account management
• Consolidated billing

Security with AWS Organizations:


• Control access with AWS Identity and Access Management (IAM)
• IAM policies enable you to allow or deny access to AWS services for users, groups, and roles.
• Service control policies (SCPs) enable you to allow or deny access to AWS services for individual
accounts or groups of accounts in an organizational unit (OU).

Organizations Setup:
1. Create Organization
2. Create organizational units
3. Create service control policies (SCP)
4. Test restrictions

Limits of AWS Organizations:


• Limits on names
• Names must be composed of Unicode characters
• Names must not exceed 250 characters in length
• Maximum and Minimum Values
• Number of AWS accounts: Varies.
• Number of roots: 1
• Number of OUs: 1,000
• Number of policies: 1,000
• Max size of a service control policy document: 5,120 bytes
• Max nesting of OUs in a root: 5 levels of OUs under a root
• Invitations sent per day: 20
• Number of member accounts you can create concurrently: Only five can be in progress at one
time
• Number of entities to which you can attach a policy: Unlimited
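The limits above can be checked programmatically when generating SCPs. Below is a sketch of a common region-restriction SCP (an illustrative pattern, not one from the course) validated against the 5,120-byte document limit listed above; the approved Regions and the exempted global services are assumptions:

```python
import json

# Illustrative SCP: deny requests outside two approved Regions, exempting
# some global services (assumed list; adjust for your organization).
scp = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "DenyOutsideApprovedRegions",
            "Effect": "Deny",
            "NotAction": ["iam:*", "organizations:*", "route53:*", "support:*"],
            "Resource": "*",
            "Condition": {
                "StringNotEquals": {
                    "aws:RequestedRegion": ["us-east-1", "eu-west-1"]
                }
            },
        }
    ],
}

document = json.dumps(scp)
size = len(document.encode("utf-8"))
assert size <= 5120, "SCP exceeds the maximum policy document size"
print(f"SCP document is {size} bytes (limit: 5,120)")
```

Because SCPs apply to every account under the OU they are attached to, validating size and syntax before attachment (step 3 of the setup above) avoids failed deployments.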

Accessing AWS Organizations


• AWS Management Console
• AWS Command Line Interface (AWS CLI) tools
• Software development kits (SDKs)
• HTTPS Query API

Introducing AWS Billing and Cost Management


• AWS Billing and Cost Management is the service that you use to pay your AWS bill, monitor your
usage, and budget your costs. It enables you to forecast and obtain a better idea of what your costs
and usage might be in the future so that you can plan ahead.

Tools
• AWS Budgets
• AWS Cost and Usage Report
• AWS Cost Explorer

Section 4: Technical support

• Provides a unique combination of tools and expertise:


• AWS Support
• AWS Support Plans
• Support is provided for:
• Experimenting with AWS
• Production use of AWS
• Business-critical use of AWS

• Proactive guidance
• Technical Account Manager (TAM)
• Best practices
• AWS Trusted Advisor
• Account assistance
• AWS Support Concierge

AWS Support offers four support plans:


• Basic Support – Resource Center access, Service Health Dashboard, product FAQs, discussion
forums, and support for health checks
• Developer Support – Support for early development on AWS
• Business Support – Customers that run production workloads
• Enterprise Support – Customers that run business and mission-critical workloads

Module 3: AWS Global Infrastructure Overview

Section 1: AWS Global Infrastructure

• The AWS Global Infrastructure is designed and built to deliver a flexible, reliable, scalable, and
secure cloud computing environment with high-quality global network performance.
• This map from https://infrastructure.aws shows the current AWS Regions and more that are coming
soon.

AWS Regions:
• An AWS Region is a geographical area.
• Data replication across Regions is controlled by you.
• Communication between Regions uses AWS backbone network infrastructure.
• Each Region provides full redundancy and connectivity to the network.
• A Region typically consists of two or more Availability Zones.

Selecting a Region:
• Determine the right Region for your services, applications, and data based on these factors.
• Data governance, legal requirements
• Proximity to customers (latency)
• Services available within the Region
• Costs (vary by Region)

Availability Zones
• Each Region has multiple Availability Zones
• Each Availability Zone is a fully isolated partition of the AWS infrastructure
• There are currently 69 Availability Zones worldwide
• Availability Zones consist of discrete data centers
• They are designed for fault isolation
• They are interconnected with other Availability Zones by using high-speed private networking
• You choose your Availability Zones
• AWS recommends replicating data and resources across Availability Zones for resiliency.
AWS data centers
• AWS data centers are designed for security.
• Data centers are where the data resides and data processing occurs.
• Each data center has redundant power, networking, and connectivity, and is housed in a separate
facility.
• A data center typically has 50,000 to 80,000 physical servers.

Points of Presence
• AWS provides a global network of 187 Points of Presence.
• These consist of 176 edge locations and 11 Regional edge caches.
• Used with Amazon CloudFront
• A global Content Delivery Network (CDN) that delivers content to end users with reduced
latency.
• Regional edge caches used for content with infrequent access.

AWS infrastructure features


• Elasticity and scalability
• Elastic infrastructure; dynamic adaptation of capacity.
• Scalable infrastructure; adapts to accommodate growth.
• Fault-tolerance
• Continues operating properly in the presence of a failure.
• Built-in redundancy of components.
• High availability
• High level of operational performance.
• Minimized downtime.
• No human intervention.

Key takeaways
• The AWS Global Infrastructure consists of Regions and Availability Zones.
• Your choice of a Region is typically based on compliance requirements or to reduce latency.
• Each Availability Zone is physically separate from other Availability Zones and has redundant power,
networking, and connectivity.
• Edge locations and Regional edge caches improve performance by caching content closer to users.

Section 2: AWS services and service category overview


Storage service category
• Amazon Simple Storage Service (Amazon S3) is an object storage service that offers scalability, data
availability, security, and performance. Use it to store and protect any amount of data for websites,
mobile apps, backup and restore, archive, enterprise applications, Internet of Things (IoT) devices,
and big data analytics.
• Amazon Elastic Block Store (Amazon EBS) is a high-performance block storage service designed for
use with Amazon EC2 for both throughput-intensive and transaction-intensive workloads. It is used for a broad
range of workloads, such as relational and non-relational databases, enterprise applications,
containerized applications, big data analytics engines, file systems, and media workflows.
• Amazon Elastic File System (Amazon EFS) provides a scalable, fully managed elastic Network File
System (NFS) file system for use with AWS Cloud services and on-premises resources. It is built to
scale on demand to petabytes, growing and shrinking automatically as you add and remove files. It
reduces the need to provision and manage capacity to accommodate growth.
• Amazon Simple Storage Service Glacier is a secure, durable, and extremely low-cost Amazon S3
cloud storage class for data archiving and long-term backup. It is designed to deliver 11 9s (99.999999999 percent) of
durability, and to provide comprehensive security and compliance capabilities to meet stringent
regulatory requirements.

Compute service category


• Amazon Elastic Compute Cloud (Amazon EC2) provides resizable compute capacity as virtual
machines in the cloud.
• Amazon EC2 Auto Scaling enables you to automatically add or remove EC2 instances according to
conditions that you define.
• Amazon Elastic Container Service (ECS) is a highly scalable, high-performance container
orchestration service that supports Docker containers.
• Amazon Elastic Container Registry (ECR) is a fully-managed Docker container registry that makes it
easy for developers to store, manage, and deploy Docker container images.
• AWS Elastic Beanstalk is a service for deploying and scaling web applications and services on
familiar servers such as Apache and Microsoft Internet Information Services (IIS).
• AWS Lambda enables you to run code without provisioning or managing servers. You pay only for the
compute time that you consume. There is no charge when your code is not running.
• Amazon Elastic Kubernetes Service (EKS) makes it easy to deploy, manage, and scale containerized
applications that use Kubernetes on AWS.
• AWS Fargate is a compute engine for Amazon ECS that allows you to run containers without having
to manage servers or clusters.

Database service category


• Amazon Relational Database Service (RDS) makes it easy to set up, operate, and scale a relational
database in the cloud. It provides resizable capacity while automating time-consuming
administration tasks such as hardware provisioning, database setup, patching, and backups.
• Amazon Aurora is a MySQL and PostgreSQL-compatible relational database. It is up to five times
faster than standard MySQL databases and three times faster than standard PostgreSQL databases.
• Amazon Redshift enables you to run analytic queries against petabytes of data that is stored locally
in Amazon Redshift, and directly against exabytes of data that are stored in Amazon S3. It delivers
fast performance at any scale.
• Amazon DynamoDB is a key-value and document database that delivers single-digit millisecond
performance at any scale, with built-in security, backup and restore, and in-memory caching.

Networking and content delivery service category


• Amazon Virtual Private Cloud (Amazon VPC) enables you to provision logically isolated sections of
the AWS Cloud.
• Elastic Load Balancing automatically distributes incoming application traffic across multiple targets,
such as Amazon EC2 instances, containers, IP addresses, and Lambda functions.
• Amazon CloudFront is a fast content delivery network (CDN) service that securely delivers data,
videos, applications, and application programming interfaces (APIs) to customers globally, with low
latency and high transfer speeds.
• AWS Transit Gateway is a service that enables customers to connect their Amazon Virtual Private
Clouds (VPCs) and their on-premises networks to a single gateway.
• Amazon Route 53 is a scalable cloud Domain Name System (DNS) web service designed to give you a
reliable way to route end users to internet applications. It translates names (like www.example.com)
into the numeric IP addresses (like 192.0.2.1) that computers use to connect to each other.
• AWS Direct Connect provides a way to establish a dedicated private network connection from your
data center or office to AWS, which can reduce network costs and increase bandwidth throughput.
• AWS VPN provides a secure private tunnel from your network or device to the AWS global network.

Security, identity, and compliance service category


• AWS Identity and Access Management (IAM) enables you to manage access to AWS services and
resources securely. By using IAM, you can create and manage AWS users and groups. You can use
IAM permissions to allow and deny user and group access to AWS resources.
• AWS Organizations allows you to restrict what services and actions are allowed in your accounts.
• Amazon Cognito lets you add user sign-up, sign-in, and access control to your web and mobile apps.
• AWS Artifact provides on-demand access to AWS security and compliance reports and select online
agreements.
• AWS Key Management Service (AWS KMS) enables you to create and manage keys. You can use
AWS KMS to control the use of encryption across a wide range of AWS services and in your
applications.
• AWS Shield is a managed Distributed Denial of Service (DDoS) protection service that safeguards
applications running on AWS.

AWS cost management service category


• The AWS Cost and Usage Report contains the most comprehensive set of AWS cost and usage data
available, including additional metadata about AWS services, pricing, and reservations.
• AWS Budgets enables you to set custom budgets that alert you when your costs or usage exceed (or
are forecasted to exceed) your budgeted amount.
• AWS Cost Explorer has an easy-to-use interface that enables you to visualize, understand, and
manage your AWS costs and usage over time.
Management and governance service category
• The AWS Management Console provides a web-based user interface for accessing your AWS
account.
• AWS Config provides a service that helps you track resource inventory and changes.
• Amazon CloudWatch allows you to monitor resources and applications.
• AWS Auto Scaling provides features that allow you to scale multiple resources to meet demand.
• AWS Command Line Interface provides a unified tool to manage AWS services.
• AWS Trusted Advisor helps you optimize performance and security.
• AWS Well-Architected Tool provides help in reviewing and improving your workloads.
• AWS CloudTrail tracks user activity and API usage.

Module 4: AWS Cloud Security

Section 1: AWS shared responsibility model

Customer is responsible for security IN the cloud:


• Customer data
• Platform, applications, identity & access management
• OS, network and firewall configuration
• Encryption
• Client-side data encryption and data integrity authentication
• Server-side encryption (file system and/or data)
• Networking traffic protection (encryption, integrity, identity)

AWS is responsible for security OF the cloud:


• Software
• Compute, storage, database, networking
• Hardware/AWS global infrastructure
• Regions, Availability Zones, Edge Locations
AWS responsibilities:
• Physical security of data centers
• Controlled, need-based access
• 24/7 security guards; two-factor authentication; access logging and review; video surveillance; and
disk degaussing and destruction
• Hardware and software infrastructure
• Servers, storage devices, etc
• Storage decommissioning, host operating system (OS) access logging, and auditing
• Network infrastructure
• Routers, switches, load balancers, firewalls, and cabling.
• Intrusion detection
• Virtualization infrastructure
• Instance isolation
Customer responsibilities:
• Amazon Elastic Compute Cloud (EC2) instance operating system
• Including patching, maintenance
• Applications
• Passwords, role-based access, etc.
• Security group configuration
• OS or host-based firewalls
• Including intrusion detection or prevention systems
• Network configurations
• Account management
• Login and permission settings for each user

>> Service characteristics and security responsibility

Infrastructure as a service (IaaS)


• Customer has more flexibility over configuring networking and storage settings
• Customer is responsible for managing more aspects of the security
• Customer configures the access controls

Platform as a service (PaaS)


• Customer does not need to manage the underlying infrastructure
• AWS handles the operating system, database patching, firewall configuration, and disaster recovery
• Customer can focus on managing code or data

Software as a Service (SaaS)


• Software is centrally hosted
• Licensed on a subscription model or pay-as-you-go basis
• Services are typically accessed via web browser, mobile app, or API
• Customers do not need to manage the infrastructure that supports the service
• SaaS examples: AWS Trusted Advisor, AWS Shield, Amazon Chime
Section 2: AWS Identity and Access Management (IAM)

• Use IAM to manage access to AWS resources


• A resource is an entity in an AWS account that you can work with
• Example resources: An Amazon EC2 instance or an Amazon S3 bucket.
• Example – Control who can terminate Amazon EC2 instances
• Define fine-grained access rights
• Who can access the resource
• Which resources can be accessed and what can the user do to the resource
• How resources can be accessed
• IAM is a no-cost AWS account feature

IAM: Essential components


• IAM user – A person or application that can authenticate with an AWS account
• IAM group – A collection of IAM users that are granted identical authorization
• IAM policy – The document that defines which resources can be accessed and the level of access to
each resource
• IAM role – Useful mechanism to grant a set of permissions for making AWS service requests

When you define an IAM user, you select what types of access the user is permitted to use.

Programmatic access
• Authenticate using:
• Access key ID
• Secret access key
• Provides AWS CLI and AWS SDK access

AWS Management Console access


• Authenticate using:
• 12-digit Account ID or alias
• IAM user name
• IAM password
• If enabled, multi-factor authentication (MFA) prompts for an authentication code

IAM MFA
• MFA provides increased security
• In addition to user name and password, MFA requires a unique authentication code to access AWS
services
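MFA authentication codes of the kind described above are commonly generated with the TOTP algorithm (RFC 6238), which virtual MFA devices implement. A minimal sketch of the SHA-1 variant, not AWS's own implementation:

```python
import hashlib
import hmac
import struct

def totp(secret: bytes, unix_time: int, step: int = 30, digits: int = 6) -> str:
    """Time-based one-time password (RFC 6238, SHA-1 variant)."""
    counter = unix_time // step                      # 30-second time window
    mac = hmac.new(secret, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = mac[-1] & 0x0F                          # dynamic truncation
    code = struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

# RFC 6238 test vector: ASCII secret "12345678901234567890" at time 59.
print(totp(b"12345678901234567890", 59))  # -> "287082"
```

Because the code depends on the current time window, it changes every 30 seconds, which is why a stolen password alone is not enough to sign in.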

IAM: Authorization
• Assign permissions by creating an IAM policy
• Permissions determine which resources and operations are allowed:
• All permissions are implicitly denied by default
• If something is explicitly denied, it is never allowed
• Best practice: Follow the principle of least privilege.
• Note: The scope of IAM service configurations is global. Settings apply across all AWS Regions.
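The evaluation rules above (everything implicitly denied by default, explicit deny never overridden) can be sketched as a toy evaluator. This is a simplification: real IAM also weighs conditions, principals, and multiple policy types.

```python
from fnmatch import fnmatch

def evaluate(statements, action, resource):
    """Simplified IAM evaluation: an explicit Deny beats any Allow;
    with no matching statement, the request is implicitly denied."""
    decision = "implicit-deny"
    for s in statements:
        actions = s["Action"] if isinstance(s["Action"], list) else [s["Action"]]
        if any(fnmatch(action, a) for a in actions) and fnmatch(resource, s["Resource"]):
            if s["Effect"] == "Deny":
                return "explicit-deny"   # never overridden by an Allow
            decision = "allow"
    return decision

# Hypothetical statements for illustration.
policies = [
    {"Effect": "Allow", "Action": "s3:*", "Resource": "arn:aws:s3:::demo-bucket/*"},
    {"Effect": "Deny", "Action": "s3:DeleteObject", "Resource": "*"},
]
print(evaluate(policies, "s3:GetObject", "arn:aws:s3:::demo-bucket/file.txt"))    # allow
print(evaluate(policies, "s3:DeleteObject", "arn:aws:s3:::demo-bucket/file.txt")) # explicit-deny
print(evaluate(policies, "ec2:StartInstances", "arn:aws:ec2:::i-123"))            # implicit-deny
```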

IAM policies
• An IAM policy is a document that defines permissions
• Enables fine-grained access control
• Two types of policies – identity-based and resource-based
• Identity-based policies
• Attach a policy to any IAM entity
• An IAM user, an IAM group, or an IAM role
• Policies specify:
• Actions that may be performed by the entity
• Actions that may not be performed by the entity
• A single policy can be attached to multiple entities
• A single entity can have multiple policies attached to it
• Resource-based policies
• Attached to a resource (such as an S3 bucket)
• Characteristics of resource-based policies:
• Specifies who has access to the resource and what actions they can perform on it
• The policies are inline only, not managed
• Resource-based policies are supported only by some AWS services
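An identity-based policy document is plain JSON. A minimal sketch of one that grants read-only access to a single S3 bucket (the bucket name is a placeholder, not a real resource):

```python
import json

# Hypothetical read-only policy for one S3 bucket ("example-bucket" is a
# placeholder). Effect, Action, and Resource are the core policy elements.
policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": ["s3:GetObject", "s3:ListBucket"],
            "Resource": [
                "arn:aws:s3:::example-bucket",
                "arn:aws:s3:::example-bucket/*",
            ],
        }
    ],
}

print(json.dumps(policy, indent=2))
```

Attaching this document to a user, group, or role grants those entities exactly the listed actions on the listed resources; everything else stays implicitly denied.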

IAM Policy Simulator
• A tool for testing and troubleshooting IAM policies: select policies and actions, and the simulator reports whether each action is allowed or denied

IAM groups:
• An IAM group is a collection of IAM users
• A group is used to grant the same permissions to multiple users
• Permissions granted by attaching IAM policy or policies to the group
• A user can belong to multiple groups
• There is no default group
• Groups cannot be nested

IAM roles:
• An IAM role is an IAM identity with specific permissions
• Similar to an IAM user
• Attach permissions policies to it
• Different from an IAM user
• Not uniquely associated with one person
• Intended to be assumable by a person, application, or service
• Role provides temporary security credentials
• Examples of how IAM roles are used to delegate access –
• Used by an IAM user in the same AWS account as the role
• Used by an AWS service – such as Amazon EC2 – in the same account as the role
• Used by an IAM user in a different AWS account than the role

>> Example use of an IAM role


Scenario:
• An application that runs on an EC2 instance needs access to an S3 bucket

Solution:
• Define an IAM policy that grants access to the S3 bucket.
• Attach the policy to a role
• Allow the EC2 instance to assume the role.
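The role in this scenario also needs a trust policy that lets the EC2 service assume it. A hedged sketch of what that JSON document looks like (the permissions policy granting the S3 access is a separate document, as in the earlier policy examples):

```python
import json

# Trust policy: states WHO may assume the role. Here the trusted principal
# is the EC2 service itself, so instances with this role attached can
# obtain temporary credentials.
trust_policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Principal": {"Service": "ec2.amazonaws.com"},
        "Action": "sts:AssumeRole",
    }],
}

print(json.dumps(trust_policy, indent=2))
```

Because the instance assumes the role, the application receives temporary credentials automatically; no access keys need to be stored on the instance.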

Section 2 key takeaways:


• IAM policies are constructed with JavaScript Object Notation (JSON) and define permissions.
• IAM policies can be attached to any IAM entity.
• Entities are IAM users, IAM groups, and IAM roles
• An IAM user provides a way for a person, application, or service to authenticate to AWS
• An IAM group is a simple way to attach the same policies to multiple users
• An IAM role can have permissions policies attached to it, and can be used to delegate temporary
access to users or applications

Section 3: Securing a new AWS account

AWS account root user access versus IAM access:


• Best practice: Do not use the AWS account root user except when necessary.
• Access to the account root user requires logging in with the email address (and password) that
you used to create the account.
• Example actions that can only be done with the account root user:
• Update the account root user password.
• Change the AWS support plan.
• Restore an IAM user’s permissions.
• Change account settings (for example, contact information, allowed Regions).
• Full list of tasks that require AWS account root user credentials

Securing a new AWS account:


• Step 1: Stop using the account root user as soon as possible
• The account root user has unrestricted access to all your resources
• To stop using the account root user:
1. While you are logged in as the account root user, create an IAM user for yourself. Save the
access keys if needed
2. Create an IAM group, give it full administrator permissions, and add the IAM user to the group
3. Disable and remove your account root user access keys if they exist
4. Enable a password policy for users
5. Sign in with your new IAM user credentials
6. Store your account root user credentials in a secure place

• Step 2: Enable multi-factor authentication (MFA).


• Require MFA for your account root user and for all IAM users.
• You can also use MFA to control access to AWS service APIs.
• Options for retrieving the MFA token –
• Virtual MFA-compliant applications:
• Google Authenticator
• Authy Authenticator (Windows phone app)
• U2F security key devices:
• For example, YubiKey
• Hardware MFA options:
• Key fob or display card offered by Gemalto

• Step 3: Use AWS CloudTrail


• CloudTrail tracks user activity on your account
• Logs all API requests to resources in all supported services on your account
• Basic AWS CloudTrail event history is enabled by default and is free
• It contains management event data for the latest 90 days of account activity
• To access CloudTrail –
1. Log in to the AWS Management Console and choose the CloudTrail service
2. Click Event history to view, filter, and search the last 90 days of events.
• To retain logs beyond 90 days and to enable alerting on specified events, create a trail
1. From the CloudTrail Console trails page, click Create trail
2. Give it a name, apply it to all Regions, and create a new Amazon S3 bucket for log storage
3. Configure access restrictions on the S3 bucket (for example, only admin users should have
access)

• Step 4: Enable a billing report, such as the AWS Cost and Usage Report
• Billing reports provide information about your use of AWS resources and estimated costs for that
use
• AWS delivers the reports to an Amazon S3 bucket that you specify
• Report is updated at least once per day
• The AWS Cost and Usage Report tracks your AWS usage and provides estimated charges associated
with your AWS account, either by the hour or by the day

Section 3 key takeaways:


• Secure logins with multi-factor auth (MFA)
• Delete account root user access keys
• Create individual IAM users and grant permissions according to the principle of least privilege
• Use groups to assign permissions to IAM users
• Configure a strong password policy
• Delegate using roles instead of sharing credentials
• Monitor account activity by using AWS CloudTrail

Section 4: Securing accounts

AWS Organizations:
• AWS Organizations enables you to consolidate multiple AWS accounts so that you centrally manage
them
• Security features of AWS Organizations:
• Group AWS accounts into organizational units (OUs) and attach different access policies to
each OU
• Integration and support for IAM
• Permissions to a user are the intersection of what is allowed by AWS Organizations and
what is granted by IAM in that account
• Use service control policies to establish control over the AWS services and API actions that each
AWS account can access

Service control policies:


• Service control policies (SCPs) offer centralized control over accounts.
• Limit permissions that are available in an account that is part of an organization
• Ensure that accounts comply with access control guidelines
• SCPs are similar to IAM permissions policies –
• They use similar syntax
• However, an SCP never grants permissions
• Instead, SCPs specify the maximum permissions for an organization
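Because an SCP never grants permissions, a user's effective permissions in a member account are the intersection of what the SCP allows and what IAM grants. A toy sketch of that intersection using action names as plain strings (real evaluation also considers resources, conditions, and explicit denies):

```python
# Hypothetical action sets for illustration only.
scp_allows = {"s3:GetObject", "s3:PutObject", "ec2:DescribeInstances"}
iam_grants = {"s3:GetObject", "ec2:StartInstances"}

# An action is effective only if BOTH the SCP ceiling and an IAM
# policy permit it.
effective = scp_allows & iam_grants
print(sorted(effective))  # → ['s3:GetObject']
```

Note that `ec2:StartInstances` is granted by IAM but blocked by the SCP ceiling, so it is not effective; this is how an organization caps what member accounts can do.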

AWS Key Management Service (AWS KMS) features:


• Enables you to create and manage encryption keys
• Enables you to control the use of encryption across AWS services and in your applications
• Integrates with AWS CloudTrail to log all key usage
• Uses hardware security modules (HSMs) that are validated by Federal Information Processing
Standards (FIPS) 140-2 to protect keys

Amazon Cognito features:


• Adds user sign-up, sign-in, and access control to your web and mobile applications
• Scales to millions of users
• Supports sign-in with social identity providers, such as Facebook, Google, and Amazon; and
enterprise identity providers, such as Microsoft Active Directory via Security Assertion Markup
Language (SAML) 2.0

AWS Shield features:


• Is a managed distributed denial of service (DDoS) protection service
• Safeguards applications running on AWS
• Provides always-on detection and automatic inline mitigations
• AWS Shield Standard is enabled at no additional cost. AWS Shield Advanced is an optional paid
service
• Use it to minimize application downtime and latency

Section 5: Securing data on AWS

Encryption of data at rest:


• Encryption encodes data with a secret key, which makes it unreadable
• Only those who have the secret key can decode the data
• AWS KMS can manage your secret keys
• AWS supports encryption of data at rest
• Data at rest = Data stored physically (on disk or on tape)
• You can encrypt data stored in any service that is supported by AWS KMS, including:
• Amazon S3, EBS, Elastic File System (EFS), RDS managed databases

Encryption of data in transit:


• Encryption of data in transit (data moving across a network)
• Transport Layer Security (TLS) – formerly SSL – is an open standard protocol
• AWS Certificate Manager provides a way to manage, deploy, and renew TLS or SSL certificates
• Secure HTTP (HTTPS) creates a secure tunnel
• Uses TLS or SSL for the bidirectional exchange of data
• AWS services support data in transit encryption

Securing Amazon S3 buckets and objects:


• Newly created S3 buckets and objects are private and protected by default
• When use cases require sharing data objects on Amazon S3 –
• It is essential to manage and control the data access
• Apply permissions that follow the principle of least privilege, and consider using Amazon
S3 encryption
• Tools and options for controlling access to S3 data include –
• Amazon S3 Block Public Access feature: simple to use
• Bucket policies
• Access control lists (ACLs): A legacy access control mechanism
• AWS Trusted Advisor bucket permission check: A free feature
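A bucket policy is a resource-based policy, so unlike the identity-based policies above it names a Principal (who gets access). A hedged sketch with placeholder account ID, user, and bucket names:

```python
import json

# Hypothetical bucket policy. The Principal element is what distinguishes
# a resource-based policy: it says WHO may act on this resource.
bucket_policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Principal": {"AWS": "arn:aws:iam::111122223333:user/analyst"},
        "Action": "s3:GetObject",
        "Resource": "arn:aws:s3:::example-reports-bucket/*",
    }],
}

print(json.dumps(bucket_policy, indent=2))
```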

Section 6: Working to ensure compliance

AWS compliance programs:


• Customers are subject to many different security and compliance regulations and requirements
• AWS engages with certifying bodies and independent auditors to provide customers with detailed
information about the policies, processes, and controls that are established and operated by AWS
• Compliance programs can be broadly categorized –
• Certifications and attestations
• Assessed by a third-party, independent, auditor
• Examples: ISO 27001, 27017, 27018, and ISO 9001
• Laws, regulations, and privacy
• AWS provides security features and legal agreements to support compliance
• Examples: EU General Data Protection Regulation (GDPR), HIPAA
• Alignments and frameworks
• Industry- or function-specific security or compliance requirements
• Examples: Center for Internet Security (CIS), EU-US Privacy Shield certified

AWS Config:
• Assess, audit, and evaluate the configurations of AWS resources
• Use for continuous monitoring of configurations
• Automatically evaluate recorded configurations versus desired configurations
• Review configuration changes
• View detailed configuration histories
• Simplify compliance auditing and security analysis

AWS Artifact:
• Is a resource for compliance-related information
• Provides access to security and compliance reports, and select online agreements
• Example downloads:
• AWS ISO certifications
• Payment Card Industry (PCI) and Service Organization Control (SOC) reports
• Access AWS Artifact directly from the AWS Management Console
• Under Security, Identity & Compliance, click Artifact

Section 6 key takeaways


• AWS security compliance programs provide information about the policies, processes, and controls
that are established and operated by AWS
• AWS Config is used to assess, audit, and evaluate the configurations of AWS resources
• AWS Artifact provides access to security and compliance reports

Module 5 – Networking and Content Delivery

Section 1: Networking Basics


• Networks – A computer network is two or more client machines that are connected together to
share resources. A network can be logically partitioned into subnets. Networking requires a
networking device (such as a router or switch) to connect all the clients together and enable
communication between them.
• IP addresses – Each client machine in a network has a unique IP address that identifies it.
• IPv4 address is 32-bit
• IPv6 address is 128-bit
• A common method to describe networks is Classless Inter-Domain Routing (CIDR). The CIDR address
is expressed as follows:
• An IP address
• Next, a slash character (/)
• Finally, a number that tells you how many bits of the routing prefix must be fixed or allocated
for the network identifier
• Open Systems Interconnection (OSI) model is a conceptual model that is used to explain how data
travels over a network.
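CIDR notation can be explored with Python's standard `ipaddress` module: the number after the slash is how many routing-prefix bits are fixed, and the remaining bits determine how many addresses the network contains.

```python
import ipaddress

net = ipaddress.ip_network("192.168.1.0/24")
print(net.network_address)  # → 192.168.1.0
print(net.prefixlen)        # → 24 (bits fixed for the network identifier)
print(net.num_addresses)    # → 256, i.e. 2 ** (32 - 24)
```

A /16 would leave 16 host bits (65,536 addresses), while a /28 leaves only 4 (16 addresses), which is why a smaller prefix number means a larger network.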

Section 2: Amazon VPC

Amazon VPC
• Enables you to provision a logically isolated section of the AWS Cloud where you can launch AWS
resources in a virtual network that you define
• Gives you control over your virtual networking resources, including:
• Selection of IP address range
• Creation of subnets
• Configuration of route tables and network gateways
• Enables you to customize the network configuration for your VPC
• Enables you to use multiple layers of security

VPCs
• Logically isolated from other VPCs
• Dedicated to your AWS account
• Belong to a single AWS Region and can span multiple Availability Zones
Subnets
• Range of IP addresses that divide a VPC
• Belong to a single Availability Zone
• Classified as public or private
IP addressing
• When you create a VPC, you assign it to an IPv4 CIDR block (range of private IPv4 addresses)
• You cannot change the address range after you create the VPC.
• The largest IPv4 CIDR block size is /16
• The smallest IPv4 CIDR block size is /28
• IPv6 is also supported (with a different block size limit)
• CIDR blocks of subnets cannot overlap

Reserved IP addresses
• In each subnet, AWS reserves five IP addresses: the network address, the VPC local router address, the DNS server address, one address reserved for future use, and the network broadcast address
• Example: A VPC with an IPv4 CIDR block of 10.0.0.0/16 has 65,536 total IP addresses. The VPC has
four equal-sized subnets, each using a /24 CIDR block. Only 251 IP addresses are available for use
in each subnet.
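The arithmetic behind the example, assuming five AWS-reserved addresses per subnet, can be checked with the standard `ipaddress` module:

```python
import ipaddress

vpc = ipaddress.ip_network("10.0.0.0/16")
print(vpc.num_addresses)                    # → 65536 addresses in the VPC

subnets = list(vpc.subnets(new_prefix=24))  # carve the VPC into /24 subnets
first = subnets[0]
aws_reserved = 5                            # reserved in every AWS subnet
print(first.num_addresses)                  # → 256 addresses per /24
print(first.num_addresses - aws_reserved)   # → 251 usable per /24 subnet
```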

Public IP address types


• Public IPv4 address
• Manually assigned through an Elastic IP address
• Automatically assigned through the auto-assign public IP address settings at the subnet level
• Elastic IP address
• Associate with an AWS account
• Can be allocated and remapped anytime
• Additional costs might apply

Elastic network interface


• An elastic network interface is a virtual network interface that you can:
• Attach to an instance
• Detach from the instance, and attach to another instance to redirect network traffic
• Its attributes follow when it is reattached to a new instance
• Each instance in your VPC has a default network interface that is assigned a private IPv4 address
from the IPv4 address range of your VPC

Routing tables and routes


• A route table contains a set of rules (or routes) that you can configure to direct network traffic from
your subnet.
• Each route specifies a destination and a target.
• By default, every route table contains a local route for communication within the VPC.
• Each subnet must be associated with a route table (a subnet can be associated with only one route table at a time)
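Route selection can be sketched as longest-prefix matching: the most specific route whose destination contains the address wins. A toy route table with the built-in local route and a hypothetical internet-gateway route:

```python
import ipaddress

# Hypothetical route table: destination CIDR → target.
routes = {
    "10.0.0.0/16": "local",    # built-in local route for the VPC
    "0.0.0.0/0": "igw-1234",   # non-local traffic to an internet gateway
}

def lookup(ip: str) -> str:
    addr = ipaddress.ip_address(ip)
    matches = [(ipaddress.ip_network(dst), tgt) for dst, tgt in routes.items()
               if addr in ipaddress.ip_network(dst)]
    # The most specific (longest prefix) matching route wins.
    return max(matches, key=lambda m: m[0].prefixlen)[1]

print(lookup("10.0.1.5"))   # → local (stays inside the VPC)
print(lookup("8.8.8.8"))    # → igw-1234 (sent toward the internet)
```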
Section 2 key takeaways:
• A VPC is a logically isolated section of the AWS Cloud
• A VPC belongs to one Region and requires a CIDR block
• A VPC is subdivided into subnets
• A subnet belongs to one Availability Zone and requires a CIDR block
• Route tables control traffic for a subnet
• Route tables have a built-in local route
• You add additional routes to the table
• The local route cannot be deleted

Section 3: VPC Networking


• An internet gateway is a scalable, redundant, and highly available VPC component that allows
communication between instances in your VPC and the internet.
• Provides a target in your VPC route tables for internet-routable traffic
• Performs network address translation for instances that were assigned public IPv4 addresses
• To make a subnet public, you attach an internet gateway to your VPC and add a route to the route
table to send non-local traffic through the internet gateway to the internet (0.0.0.0/0)
• A network address translation (NAT) gateway enables instances in a private subnet to connect to
the internet or other AWS services, but prevents the internet from initiating a connection with those
instances.

VPC Sharing
• VPC sharing enables customers to share subnets with other AWS accounts in the same organization
in AWS Organizations. VPC sharing enables multiple AWS accounts to create their application
resources – such as EC2, RDS, Redshift clusters, and Lambda functions – into shared, centrally
managed VPCs.
VPC sharing offers several benefits:
• Separation of duties – centrally controlled VPC structure, routing, IP address allocation
• Ownership – Application owners continue to own resources, accounts, and security groups
• Security groups – VPC sharing participants can reference the security group IDs of each other
• Efficiencies – Higher density in subnets, efficient use of VPNs and AWS Direct Connect
• No hard limits – Hard limits can be avoided – for example, 50 virtual interfaces per AWS Direct
Connect connection through simplified network architecture
• Optimized costs – Costs can be optimized through the reuse of NAT gateways, VPC interface
endpoints, and intra-Availability Zone traffic

VPC peering
• You can connect VPCs in your own AWS account, between AWS accounts, or between AWS Regions
• Restrictions:
• IP spaces cannot overlap
• Transitive peering is not supported
• You can have only one peering connection between the same two VPCs
• AWS Site-to-Site VPN – By default, instances that you launch into a VPC cannot communicate with a
remote network. To connect your VPC to your remote network:
• Create a new virtual gateway device (called a virtual private network (VPN) gateway) and attach
it to your VPC
• Define the configuration of the VPN device or the customer gateway.
• Create a custom route table to point corporate data center-bound traffic to the VPN gateway.
You also must update security group rules.
• Establish an AWS Site-to-Site VPN connection to link the two systems together
• Configure routing to pass traffic through the connection

AWS Direct Connect


• Performance can be negatively affected if your data center is located far away from your AWS
Region. AWS Direct Connect enables you to establish a dedicated, private network connection
between your network and one of the DX (AWS Direct Connect) locations. This private connection
can reduce your network costs, increase bandwidth throughput, and provide a more consistent
network experience than internet-based connections. DX uses open standard 802.1q virtual local
area networks (VLANs)

VPC endpoints
• A VPC endpoint is a virtual device that enables you to privately connect your VPC to supported AWS
services and VPC endpoint services that are powered by AWS PrivateLink.
• Does not require an internet gateway, NAT device, VPN connection, or DX connection.
• Interface endpoints (powered by AWS PrivateLink)
• Gateway endpoints (Amazon S3 and Amazon DynamoDB)

AWS Transit Gateway


• Simplifies your networking model
• You only need to create and manage a single connection from the central gateway into each VPC,
on-premises data center, or remote office across your network.
• Acts as a hub that controls how traffic is routed among all the connected networks, which act like
spokes.
• This hub-and-spoke model significantly simplifies management and reduces operational costs
because each network only needs to connect to the transit gateway and not to every other network.

Section 3 takeaways
• There are several VPC networking options, which include:
• Internet gateway
• NAT gateway
• VPC endpoint
• VPC peering
• VPC sharing
• AWS Site-to-Site VPN
• AWS Direct Connect
• AWS Transit Gateway
• You can use the VPC Wizard to implement your design

Section 4: VPC Security

Security groups
• Act at the instance level, not the subnet level
• Acts as a virtual firewall for your instance, and it controls and filters inbound and outbound traffic
• Security groups have rules that control inbound and outbound instance traffic
• Default security groups deny all inbound traffic and allow all outbound traffic
• Security groups are stateful

Custom security groups


• You can specify allow rules, but not deny rules
• All rules are evaluated before the decision to allow traffic
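Security group evaluation can be sketched as: traffic is allowed if any rule matches, rule order does not matter, and there are no deny rules. A toy model with hypothetical inbound rules:

```python
import ipaddress

# Hypothetical inbound rules: (protocol, port, source CIDR).
rules = [
    ("tcp", 443, "0.0.0.0/0"),      # HTTPS from anywhere
    ("tcp", 22, "203.0.113.0/24"),  # SSH only from one admin network
]

def allowed(proto: str, port: int, src_ip: str) -> bool:
    # All rules are evaluated; any single match allows the traffic.
    return any(
        proto == r_proto and port == r_port
        and ipaddress.ip_address(src_ip) in ipaddress.ip_network(cidr)
        for r_proto, r_port, cidr in rules
    )

print(allowed("tcp", 443, "198.51.100.7"))  # → True (HTTPS open to all)
print(allowed("tcp", 22, "198.51.100.7"))   # → False (not in admin CIDR)
```

Statefulness means return traffic for an allowed connection is permitted automatically, which this simplified sketch does not model.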

Network access control lists (network ACLs)


• Act at the subnet level
• An optional layer of security for your VPC. It acts as a firewall for controlling traffic in and out of one
or more subnets.
• To add another layer of security to your VPC, you can set up network ACLs with rules that are similar
to your security groups.
• A network ACL has separate inbound and outbound rules, and each rule can either allow or deny
traffic
• Default network ACLs allow all inbound and outbound IPv4 traffic
• Network ACLs are stateless

Custom network ACLs


• Custom network ACLs deny all inbound and outbound traffic until you add rules
• You can specify both allow and deny rules
• Rules are evaluated in number order, starting with the lowest number
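Network ACL evaluation differs from security groups: rules carry numbers, are evaluated from the lowest number up, the first match (allow or deny) decides, and no match means deny. A toy sketch:

```python
import ipaddress

# Hypothetical inbound network ACL: (rule number, action, source CIDR).
acl = [
    (100, "allow", "0.0.0.0/0"),
    (50, "deny", "198.51.100.0/24"),  # lower number: evaluated first
]

def evaluate(src_ip: str) -> str:
    addr = ipaddress.ip_address(src_ip)
    for _, action, cidr in sorted(acl):  # lowest rule number first
        if addr in ipaddress.ip_network(cidr):
            return action                # first matching rule wins
    return "deny"                        # implicit deny if nothing matches

print(evaluate("198.51.100.9"))  # → deny (rule 50 matches before rule 100)
print(evaluate("203.0.113.8"))   # → allow (falls through to rule 100)
```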

Security Groups vs Network ACLs
• Security groups operate at the instance level, are stateful, support allow rules only, and evaluate all rules before deciding whether to allow traffic
• Network ACLs operate at the subnet level, are stateless, support both allow and deny rules, and evaluate rules in number order

Section 4 Key Takeaways
• Build security into your VPC architecture:
• Isolate subnets if possible
• Choose the appropriate gateway device or VPN connection for your needs
• Use firewalls
• Security groups and network ACLs are firewall options that you can use to secure your VPC

Section 5: Amazon Route 53

Amazon Route 53
• Is a highly available and scalable Domain Name System (DNS) web service
• Is used to route end users to internet applications by translating names (like www.example.com)
into numeric IP addresses (like 192.0.2.1) that computers use to connect to each other
• Is fully compliant with IPv4 and IPv6
• Connects user requests to infrastructure running in AWS and also outside of AWS
• Is used to check the health of your resources
• Features traffic flow
• Enables you to register domain names

Amazon Route 53 supported routing


• Simple routing – Use in single-server environments
• Weighted round robin routing – Assign weights to resource record sets to specify the frequency
• Latency routing – Help improve your global applications
• Geolocation routing – Route traffic based on location of your users
• Geo-proximity routing – Route traffic based on location of your resources
• Failover routing – Fail over to a backup site if your primary site becomes unreachable
• Multivalue answer routing – Respond to DNS queries with up to eight healthy records selected at
random
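Weighted round robin routing can be sketched as weighted random selection over resource record sets. The hostnames and weights below are hypothetical:

```python
import random

# Hypothetical record sets: target → weight. Weights set the relative
# frequency with which each record is returned to resolvers.
records = {"blue.example.com": 3, "green.example.com": 1}

def pick(rng: random.Random) -> str:
    targets, weights = zip(*records.items())
    return rng.choices(targets, weights=weights, k=1)[0]

rng = random.Random(0)  # seeded for a reproducible demonstration
picks = [pick(rng) for _ in range(4000)]
# With weights 3:1, blue should be chosen roughly three times as often.
print(picks.count("blue.example.com") > picks.count("green.example.com"))
```

This weighting pattern is commonly used for gradual (blue/green or canary) rollouts, shifting a small share of traffic to a new endpoint.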

Multi-region deployment
• Latency-based routing to the Region
• Load balancing routing to the Availability Zone

DNS failover
• Improve the availability of your applications that run on AWS by:
• Configuring backup and failover scenarios for your own applications
• Enabling highly available multi-region architectures on AWS
• Creating health checks

Section 5 Key Takeaways


• Amazon Route 53 is a highly available and scalable cloud DNS web service that translates domain
names into numeric IP addresses
• Supports several types of routing policies
• Multi-Region deployment improves your application’s performance for a global audience
• You can use Amazon Route 53 failover to improve the availability of your applications.

Section 6: Amazon CloudFront

Content delivery network (CDN)


• Is a globally distributed system of caching servers
• Caches copies of commonly requested files (static content)
• Delivers a local copy of the requested content from a nearby edge cache or point of presence (PoP)
• Accelerates delivery of dynamic content
• Improves application performance and scaling

Amazon CloudFront
• Fast, global, and secure CDN service
• Global network of edge locations and Regional edge caches
• Self-service model
• Pay-as-you-go pricing
Benefits:
• Fast and global
• Security at the edge
• Network-level and application-level protection. Various built-in protections, such as AWS Shield
Standard. You can also use configurable features, such as AWS Certificate Manager (ACM), to
create and manage custom Secure Sockets Layer (SSL) certificates at no extra cost.
• Highly programmable
• Integrates with Lambda@Edge so that you can run custom code across AWS locations
worldwide, which enables you to move complex application logic closer to users to improve
responsiveness. Offers CI/CD environments.
• Deeply integrated with AWS
• Cost-effective

Amazon CloudFront pricing


• Data transfer out
• Charged for the volume of data transferred out from Amazon CloudFront edge location to the
internet or to your origin
• HTTP(S) requests
• Charged for number of HTTP(S) requests
• Invalidation requests
• No additional charge for the first 1,000 paths that are requested for invalidation each month.
Thereafter, $0.005 per path requested for invalidation.
• Dedicated IP custom SSL
• $600 per month for each custom SSL certificate that is associated with one or more CloudFront
distributions that use the Dedicated IP version of custom SSL certificate support
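The invalidation pricing rule above reduces to simple arithmetic (rates as stated in these notes; current AWS pricing may differ):

```python
FREE_PATHS = 1000  # free invalidation paths per month
RATE = 0.005       # USD per additional path

def invalidation_cost(paths: int) -> float:
    """Monthly invalidation charge under the stated rate card."""
    return max(0, paths - FREE_PATHS) * RATE

print(invalidation_cost(800))   # → 0.0 (within the free tier)
print(invalidation_cost(1500))  # → 2.5 (500 extra paths x $0.005)
```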

Section 6 Key Takeaways


• A CDN is a globally distributed system of caching servers that accelerates delivery of content
• Amazon CloudFront is a fast CDN service that securely delivers data, videos, applications, and APIs
over a global infrastructure with low latency and high transfer speeds
• Amazon CloudFront offers many benefits

Module 6: Compute

Section 1: Compute services overview

AWS offers many compute services. Here is a brief summary of what each compute service offers:
• Elastic Compute Cloud (EC2) provides resizable virtual machines.
• IaaS | Instance-based | Virtual machines
• Provision virtual machines that you can manage as you choose
• A familiar concept to many IT professionals
• EC2 Auto Scaling supports application availability by allowing you to define conditions that will
automatically launch or terminate EC2 instances.
• Elastic Container Registry (ECR) is used to store and retrieve Docker images.
• Container-based computing | Instance-based
• Spin up and execute jobs more quickly
• Elastic Container Service (ECS) is a container orchestration service that supports Docker.
• Container-based computing | Instance-based
• Spin up and execute jobs more quickly
• VMware Cloud on AWS enables you to provision a hybrid cloud without custom hardware.
• Elastic Beanstalk provides a simple way to run and manage web applications.
• PaaS | For web applications
• Focus on your code (building your application)
• Can easily tie into other services – databases, DNS, etc
• Fast and easy to get started
• Lambda is a serverless compute solution. You pay only for the compute time that you use.
• Serverless computing | Function-based | Low-cost
• Write and deploy code that executes on a schedule or that can be triggered by events
• Use when possible (architect for the cloud)
• A relatively new concept for many IT staff members, but easy to use after you learn how
• Elastic Kubernetes Service (EKS) enables you to run managed Kubernetes on AWS.
• Container-based computing | Instance-based
• Spin up and execute jobs more quickly
• Lightsail provides a simple-to-use service for building an application or website.
• Batch provides a tool for running batch jobs at any scale.
• Fargate provides a way to run containers, which reduces the need for you to manage servers or
clusters.
• Container-based computing | Instance-based
• Spin up and execute jobs more quickly
• Outposts provides a way to run select AWS services in your on-premises data center.
• Serverless Application Repository provides a way to discover, deploy, and publish serverless
applications.

Choosing the optimal compute service


• The optimal compute service or services that you use will depend on your use case
• Some aspects to consider –
• What is your application design?
• What are your usage patterns?
• Which configuration settings will you want to manage?
• Selecting the wrong compute solution for an architecture can lead to lower performance efficiency
• A good starting place – Understand the available compute options

Best practices include:


• Evaluate the available compute options
• Understand the available compute configuration options
• Collect compute-related metrics
• Use the available elasticity of resources
• Re-evaluate compute needs based on metrics

Section 2: Amazon EC2

Example server uses of Amazon EC2 instances:


• Application, web, database, game, mail, media, catalog, file, computing, proxy

(1) Amazon Elastic Compute Cloud (EC2)


• Elastic refers to the fact that you can easily increase or decrease the number of servers you run to
support an application automatically.
• Compute refers to the reason why most users run servers in the first place, which is to host running
applications or process data.
• Cloud refers to the fact that the EC2 instances that you run are hosted in the cloud
• Provides virtual machines – referred to as EC2 instances – in the cloud
• Gives you full control over the guest operating system (Windows or Linux) on each instance
• You can launch instances of any size into an Availability Zone anywhere in the world
• Launch instances from Amazon Machine Images (AMIs)
• Launch instances with a few clicks or a line of code, and they are ready in minutes
• You can control traffic to and from instances
• Most server operating systems are supported, including:
• Windows Server 2008, 2012, 2016, and 2019; Red Hat, SUSE, Ubuntu, and Amazon Linux

• Amazon Machine Image (AMI)


• Is a template that is used to create an EC2 instance
• Contains a Windows or Linux operating system
• Often also has some software pre-installed
• AMI choices:
• Quick Start – Linux and Windows AMIs that are provided by AWS
• My AMIs – Any AMIs that you created
• AWS Marketplace – Pre-configured templates from third parties
• Community AMIs – AMIs shared by others; use at your own risk

(2) Select an instance type


• Consider your use case
• How will the EC2 instance you create be used?
• The instance type that you choose determines –
• Memory (RAM)
• Processing power (CPU)
• Disk space and disk type (Storage)
• Network performance
• Instance type categories –
• General purpose
• Compute optimized
• Memory optimized
• Storage optimized
• Accelerated computing
• Instance types offer family, generation, and size

Instance type naming


• Example: t3.large
• t is the family name
• 3 is the generation number
• large is the size
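The naming convention can be sketched with a regular expression. This is a simplified pattern of my own; real instance type names can also carry extra attribute letters (for example, t3a or m5d), which it deliberately ignores:

```python
import re

# Simplified pattern: family letter(s), one generation digit, ".", size.
PATTERN = re.compile(r"^([a-z]+)(\d)\.(\w+)$")

def parse_instance_type(name: str) -> dict:
    family, generation, size = PATTERN.match(name).groups()
    return {"family": family, "generation": int(generation), "size": size}

print(parse_instance_type("t3.large"))
# → {'family': 't', 'generation': 3, 'size': 'large'}
```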

Instance types: Networking features


• The network bandwidth (Gbps) varies by instance type.
• To maximize networking and bandwidth performance of your instance type:
• If you have interdependent instances, launch them into a cluster placement group.
• Enable enhanced networking
• Enhanced networking types are supported on most instance types
• Elastic Network Adapter (ENA): Supports network speeds of up to 100 Gbps
• Intel 82599 Virtual Function interface: Supports network speeds of up to 10 Gbps

(3) Specify network settings


• Where should the instance be deployed?
• Identify the VPC and optionally the subnet
• Should a public IP address be automatically assigned?
• To make it internet-accessible

(4) Attach IAM role (optional)


• Will software on the EC2 instance need to interact with other AWS services?
• If yes, attach an appropriate IAM Role
• An AWS IAM role that is attached to an EC2 instance is kept in an instance profile
• You are not restricted to attaching a role only at instance launch
• You can also attach a role to an instance that already exists

(5) User data script (optional)


• Optionally specify a user data script at instance launch
• Use user data scripts to customize the runtime environment of your instance
• Script executes the first time the instance starts
• Can be used strategically
• For example, reduce the number of custom AMIs that you build and maintain
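A user data script runs once, as root, on first boot, and the EC2 API expects it base64-encoded. A sketch that prepares a hypothetical script installing a web server (the package and service names are illustrative, for an Amazon Linux-style image):

```python
import base64

# Hypothetical first-boot script; runs once, as root, on first start.
user_data = """#!/bin/bash
yum update -y
yum install -y httpd
systemctl enable --now httpd
"""

# The EC2 RunInstances API takes user data base64-encoded.
encoded = base64.b64encode(user_data.encode()).decode()
print(encoded[:24], "...")
```

Baking setup steps into user data like this is one way to avoid maintaining a separate custom AMI for every small configuration variant.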

(6) Specify storage


• Configure the root volume
• Where the guest operating system is installed
• Attach additional storage volumes (optional)
• AMI might already include more than one volume
• For each volume, specify:
• The size of the disk (in GB)
• The volume type
• SSD or HDD
• Whether the volume will be deleted when the instance is terminated
• Whether encryption should be used

Amazon EC2 storage options


• Elastic Block Store (EBS)
• Durable, block-level storage volumes
• You can stop the instance and start it again, and the data will still be there
• EC2 Instance Store –
• Ephemeral storage is provided on disks that are attached to the host computer where the EC2
instance is running
• If the instance stops, data stored here is deleted
• Other options for storage (not for the root volume) –
• Mount an Amazon Elastic File System (EFS) file system
• Connect to Amazon Simple Storage Service (Amazon S3)
(7) Add tags
• A tag is a label that you can assign to an AWS resource
• Consists of a key and an optional value
• Tagging is how you can attach metadata to an EC2 instance
• Potential benefits of tagging – Filtering, automation, cost allocation, and access control
• Tagging Best Practices (PDF)

(8) Security group settings


• A security group is a set of firewall rules that control traffic to the instance
• It exists outside of the instance’s guest OS
• Create rules that specify the source and the ports that network communications can use
• Specify the port number and the protocol, such as TCP, UDP, or ICMP
• Specify the source (for example, an IP address or another security group) that is allowed to use
the rule.
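A sketch of how an inbound rule like those above is evaluated: the traffic's protocol and port must match the rule, and the source address must fall inside the allowed CIDR. The rule values are hypothetical examples, not a real security group.

```python
import ipaddress

def rule_allows(rule, protocol, port, source_ip):
    # A connection is allowed if protocol, port range, and source CIDR all match.
    return (
        rule["protocol"] == protocol
        and rule["from_port"] <= port <= rule["to_port"]
        and ipaddress.ip_address(source_ip) in ipaddress.ip_network(rule["cidr"])
    )

# Hypothetical rules: HTTPS open to the world, SSH restricted to one network.
https_rule = {"protocol": "tcp", "from_port": 443, "to_port": 443, "cidr": "0.0.0.0/0"}
ssh_rule = {"protocol": "tcp", "from_port": 22, "to_port": 22, "cidr": "203.0.113.0/24"}
```

Note that real security groups are stateful and only contain allow rules; anything not matched by a rule is denied.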

(9) Identify or create the key pair


• At instance launch, you specify an existing key pair or create a new key pair
• A key pair consists of –
• A public key that AWS stores
• A private key file that you store
• It enables secure connections to the instance
• For Windows AMIs –
• Use the private key to obtain the administrator password that you need to log in to your
instance
• For Linux AMIs –
• Use the private key to use SSH to securely connect to your instance

Amazon EC2 instance lifecycle


• Pending – When an instance is first launched from an AMI, or when you start a stopped instance, it
enters the pending state while it boots and is deployed to a host computer. The instance type that
you specified at launch determines the hardware of the host computer for your instance.
• Running – When the instance is fully booted and ready, it exits the pending state and enters the
running state. You can connect over the internet to your running instance.
• Rebooting – AWS recommends you reboot an instance by using the EC2 console, CLI, or SDKs
instead of invoking a reboot from within the guest OS. A rebooted instance stays on the same
physical host, maintains the same public DNS name and public IP address, and if it has instance
store volumes, it retains the data on those volumes.
• Shutting down – This state is an intermediary state between running and terminated.
• Terminated – A terminated instance remains visible in the Amazon EC2 console for a while before
the virtual machine is deleted. However, you cannot connect to or recover a terminated instance.
• Stopping – Instances that are backed by EBS can be stopped. They enter the stopping state before
they attain the fully stopped state.
• Stopped – A stopped instance will not incur the same cost as a running instance. Starting a stopped
instance puts it back into the pending state, which moves the instance to a new host machine.
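The lifecycle described above can be sketched as a small state machine. This is a simplified model for study purposes: it assumes an EBS-backed instance (only those can be stopped) and ignores transient failures.

```python
# (state, event) -> next state, following the lifecycle notes above.
TRANSITIONS = {
    ("pending", "boot complete"): "running",
    ("running", "stop"): "stopping",            # EBS-backed instances only
    ("stopping", "stop complete"): "stopped",
    ("stopped", "start"): "pending",            # may land on a new host
    ("running", "terminate"): "shutting-down",
    ("shutting-down", "shutdown complete"): "terminated",
}

def next_state(state, event):
    # Unknown events leave the state unchanged (e.g. nothing leaves "terminated").
    return TRANSITIONS.get((state, event), state)
```

Walking stop/start through the table shows why a restarted instance passes through pending again before running.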

Consider using an Elastic IP address


• Rebooting an instance will not change any IP addresses or DNS hostnames
• When an instance is stopped and then started again –
• The public IPv4 address and external DNS hostname will change
• The private IPv4 address and internal DNS hostname do not change
• If you require a persistent public IP address –
• Associate an Elastic IP address with the instance
• Elastic IP address characteristics –
• Can be associated with instances in the Region as needed
• Remains allocated to your account until you choose to release it

EC2 instance metadata


• Instance metadata is data about your instance
• While you are connected to an instance, you can view it –
• In a browser: http://169.254.169.254/latest/meta-data/
• In a terminal window: curl http://169.254.169.254/latest/meta-data
• Example retrievable values –
• Public IP address, private IP address, public hostname, instance ID, security groups, Region,
Availability Zone
• Any user data specified at instance launch can also be accessed at:
http://169.254.169.254/latest/user-data
• It can be used to configure or manage a running instance
• For example, author a configuration script that reads the metadata and uses it to configure
applications or OS settings
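A sketch of the kind of script described above. The metadata endpoint returns a newline-separated list of keys, and each key can then be fetched individually; the listing here is canned sample text, since the 169.254.169.254 address only answers from inside a running instance.

```python
# Canned example of what GET /latest/meta-data/ returns (abridged).
SAMPLE_LISTING = "ami-id\ninstance-id\nlocal-ipv4\npublic-ipv4"

def parse_metadata_keys(listing: str) -> list:
    # Each non-empty line of the listing is a retrievable metadata key.
    return [line for line in listing.splitlines() if line]

def metadata_url(key: str) -> str:
    # URL a script on the instance would fetch (e.g. with urllib or curl).
    return f"http://169.254.169.254/latest/meta-data/{key}"
```

On a real instance, a configuration script would fetch `metadata_url("instance-id")` and use the value to tag logs, register with a service, or adjust OS settings.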

Use Amazon CloudWatch to monitor EC2 instances


• Provides near-real-time metrics
• Provides charts in the EC2 console Monitoring tab that you can view
• Maintains 15 months of historical data

Basic monitoring
• Default, no additional cost
• Metric data sent to CloudWatch every 5 minutes
Detailed monitoring
• Fixed monthly rate for seven pre-selected metrics
• Metric data delivered every 1 minute
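The practical difference between the two monitoring levels is the metric resolution, which a quick calculation makes concrete:

```python
def datapoints_per_day(period_minutes: int) -> int:
    # How many metric datapoints CloudWatch receives per day at a given period.
    return 24 * 60 // period_minutes

basic = datapoints_per_day(5)     # basic monitoring: one datapoint every 5 minutes
detailed = datapoints_per_day(1)  # detailed monitoring: one every minute
```

Detailed monitoring yields five times as many datapoints (1,440 versus 288 per day), which matters when you want alarms to react quickly to short spikes.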

Section 2 Key Takeaways


• Amazon EC2 enables you to run Windows and Linux virtual machines in the cloud
• You launch EC2 instances from an AMI template into a VPC in your account
• You can choose from many instance types. Each instance type offers different combinations of CPU,
RAM, storage, and networking capabilities
• You can configure security groups to control access to instances (specify allowed ports and source)
• User data enables you to specify a script to run the first time that an instance launches
• Only instances that are backed by Amazon EBS can be stopped
• You can use Amazon CloudWatch to capture and review metrics on EC2 instances
Section 3: Amazon EC2 cost optimization

On-Demand Instances
• Pay by the hour
• No long-term commitments
• Eligible for the AWS Free Tier

Reserved Instances
• Full, partial, or no upfront payment for instance you reserve
• Discount on hourly charge for that instance
• 1-year or 3-year term

Spot Instances
• Instances run as long as they are available and your bid is above the Spot Instance price
• They can be interrupted by AWS with a 2-minute notification
• Interruption options include terminated, stopped or hibernated
• Prices can be significantly less expensive compared to On-Demand Instances
• Good choice when you have flexibility in when your applications can run

Dedicated Hosts
• A physical server with EC2 instance capacity fully dedicated to your use

Dedicated Instances
• Instances that run in a VPC on hardware that is dedicated to a single customer

Scheduled Reserved Instances


• Purchase a capacity reservation that is always available on a recurring schedule you specify
• 1-year term

EC2 pricing models benefits and use cases


• On-Demand Instances
• Benefit: Low cost and flexibility
• Use case: Short-term, spiky, or unpredictable workloads
• Use case: Application development or testing
• Spot Instances
• Benefit: Large scale, dynamic workload
• Use case: Applications with flexible start and end times
• Use case: Applications only feasible at very low compute prices
• Use case: Users with urgent computing needs for large amounts of additional capacity
• Reserved Instances
• Benefit: Predictability ensures compute capacity is available when needed
• Use case: Steady state or predictable usage workloads
• Use case: Applications that require reserved capacity, including disaster recovery
• Use case: Users able to make upfront payments to reduce total computing costs even further
• Dedicated Hosts
• Benefit: Save money on licensing costs
• Benefit: Help meet compliance and regulatory requirements
• Use case: Bring your own license (BYOL)
• Use case: Compliance and regulatory restrictions
• Use case: Usage and licensing tracking
• Use case: Control instance placement
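A back-of-envelope comparison of the purchase options above. All hourly rates here are hypothetical placeholders (real prices vary by instance type, Region, and term), but the relative ordering illustrates why workload flexibility translates into savings.

```python
# Hypothetical hourly rates (USD) for the same instance type.
ON_DEMAND = 0.10
RESERVED = 0.06   # assumed effective hourly rate for a 1-year Reserved Instance
SPOT = 0.03       # assumed recent Spot price; subject to interruption

def monthly_cost(hourly_rate: float, hours: int = 730) -> float:
    # 730 is a common approximation of hours in a month.
    return round(hourly_rate * hours, 2)
```

Under these assumed rates, running 24/7 costs 73.00 On-Demand, 43.80 Reserved, and 21.90 on Spot, which is why steady workloads favor Reserved Instances and interruptible ones favor Spot.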

Four pillars of cost optimization


• 1) Right-size – Choose the right balance of instance types. Notice when servers can be either sized
down or turned off, and still meet your performance requirements
• Provision instances to match the need
• CPU, memory, storage, and network throughput
• Select appropriate instance types for your use
• Use Amazon CloudWatch metrics
• How idle are instances? When?
• Downsize instances
• Best practice: Right size, then reserve
• 2) Increase elasticity – Design your deployments to reduce the amount of server capacity that is idle
by implementing deployments that are elastic, such as deployments that use automatic scaling to
handle peak loads
• Stop or hibernate Amazon EBS-backed instances that are not actively in use
• Example: non-production development or test instances
• Use automatic scaling to match needs based on usage
• Automated and time-based elasticity
• 3) Optimal pricing model – Recognize the available pricing options. Analyze your usage patterns so
that you can run EC2 instances with the right mix of pricing options
• Leverage the right pricing model for your use case
• Optimize and combine purchase types
• Examples:
• Use On-Demand Instance and Spot Instances for variable workloads
• Use Reserved Instances for predictable workloads
• Consider serverless solutions (AWS Lambda)
• 4) Optimize storage choices – Analyze the storage requirements of your deployments. Reduce
unused storage overhead when possible, and choose less expensive storage options if they can still
meet your requirements for storage performance.
• Reduce costs while maintaining storage performance and availability
• Resize EBS volumes
• Change EBS volume types
• Can you meet performance requirements with less expensive storage?
• Delete EBS snapshots that are no longer needed
• Identify the most appropriate destination for specific types
• Does the application need the instance to reside on Amazon EBS?
• Amazon S3 storage options with lifecycle policies can reduce costs
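The right-sizing pillar above can be sketched as a simple check you might run after pulling CPUUtilization data from CloudWatch. The fleet and its metric samples are hypothetical; the 10 percent idle threshold is an assumed policy, not an AWS default.

```python
def is_idle(cpu_samples, threshold=10.0):
    # Flag an instance whose average CPU utilization stays below the threshold.
    return sum(cpu_samples) / len(cpu_samples) < threshold

# Hypothetical per-instance CPU utilization samples (percent).
fleet = {
    "i-0abc": [2.1, 3.5, 1.8, 2.9],     # mostly idle: downsize or stop
    "i-0def": [55.0, 62.3, 48.7, 70.1],  # busy: leave as-is
}

idle_instances = [i for i, samples in fleet.items() if is_idle(samples)]
```

Instances that surface in a check like this are candidates for a smaller instance type or a stop/hibernate schedule, per the "right size, then reserve" best practice.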

Measure, monitor, and improve


• Cost optimization is an ongoing process
• Recommendations –
• Define and enforce cost allocation tagging
• Define metrics, set targets, and review regularly
• Encourage teams to architect for cost
• Assign the responsibility of optimization to an individual or to a team

Section 3 Key Takeaways


• Amazon EC2 pricing models include On-Demand Instances, Reserved Instances, Spot Instances,
Dedicated Instances, and Dedicated Hosts
• Spot Instances can be interrupted with a 2-minute notification. However, they can offer significant
cost savings over On-Demand Instances
• The four pillars of cost optimization are:
• Right size | Increase elasticity | Optimal pricing model | Optimize storage choices

Section 4: Container services

Container basics
• Containers are a method of operating system virtualization
• Benefits –
• Repeatable
• Self-contained execution environments
• Software runs the same in different environments
• Developer’s laptop, test, production
• Faster to launch and stop or terminate than virtual machines
What is Docker?
• Docker is a software platform that enables you to build, test, and deploy applications quickly
• You run containers on Docker
• Containers are created from a template called an image
• A container has everything a software application needs to run
• Libraries, system tools, code, runtime
• Amazon Elastic Container Service (ECS)
• A highly scalable, fast, container management service
• Key benefits –
• Orchestrates the execution of Docker containers
• Maintains and scales the fleet of nodes that run your containers
• Removes the complexity of standing up the infrastructure
• Integrated with features that are familiar to Amazon EC2 service users –
• Elastic Load Balancing
• Amazon EC2 security groups
• Amazon EBS volumes
• IAM roles

Amazon ECS cluster options


• Key question: Do you want to manage the Amazon ECS cluster that runs the containers?
• If yes, create an Amazon ECS cluster backed by Amazon EC2 (provides more granular control
over infrastructure)
• If no, create an Amazon ECS cluster backed by AWS Fargate (easier to maintain, focus on your
applications)

What is Kubernetes?
• Kubernetes is open source software for container orchestration
• Deploy and manage containerized applications at scale
• The same toolset can be used on premises and in the cloud
• Complements Docker
• Docker enables you to run multiple containers on a single OS host
• Kubernetes orchestrates multiple Docker hosts (nodes)
• Automates –
• Container provisioning
• Networking
• Load distribution
• Scaling

Amazon Elastic Kubernetes Service (Amazon EKS)


• Enables you to run Kubernetes on AWS
• Certified Kubernetes conformant (supports easy migration)
• Supports Linux and Windows containers
• Compatible with Kubernetes community tools and supports popular Kubernetes add-ons

Use Amazon EKS to –


• Manage clusters of Amazon EC2 compute instances
• Run containers that are orchestrated by Kubernetes on those instances

Amazon Elastic Container Registry (ECR)


• Amazon ECR is a fully managed Docker container registry that makes it easy for developers to store,
manage, and deploy Docker container images
• Supports Docker Registry HTTP API version 2, which enables you to interact with ECR by using
Docker CLI commands or your preferred Docker tools
• You can transfer your container images to and from Amazon ECS via HTTPS. Your images are
also automatically encrypted at rest using Amazon S3 server-side encryption

Section 4 Key Takeaways


• Containers can hold everything that an application needs to run
• Docker is a software platform that packages software into containers
• A single application can span multiple containers
• Amazon Elastic Container Service (ECS) orchestrates the execution of Docker containers
• Kubernetes is open source software for container orchestration
• Amazon Elastic Kubernetes Service (EKS) enables you to run Kubernetes on AWS
• Amazon Elastic Container Registry (ECR) enables you to store, manage, and deploy your Docker
containers
Section 5: Introduction to AWS Lambda
Benefits of Lambda
• It supports multiple programming languages
• Completely automated administration
• Built-in fault tolerance
• It supports the orchestration of multiple functions
• Pay-per-use pricing

• An event source is an AWS service or a developer-created application that produces events that
trigger an AWS Lambda function to run
• You can invoke Lambda functions directly with the Lambda console, the Lambda API, the AWS SDK,
AWS CLI, and AWS toolkits.

AWS Lambda function configuration


• When you use the AWS Management Console to create a Lambda function, you first give the
function a name. Then, you specify:
• The runtime environment the function will use (for example, a version of Python or Node.js)
• An execution role (to grant IAM permission to the function so that it can interact with other
AWS services as necessary)
AWS Lambda limits
• Soft limits per Region:
• Concurrent executions = 1,000
• Function and layer storage = 75 GB
• Hard limits for individual functions:
• Maximum function memory allocation = 3,008 MB
• Function timeout = 15 minutes
• Deployment package size = 250 MB unzipped, including layers
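A sketch that validates a function configuration against the hard limits listed above (the values are those quoted in this course; AWS has since raised some of them, so check current service quotas).

```python
# Limits as stated in the course material.
MAX_MEMORY_MB = 3008
MAX_TIMEOUT_SECONDS = 15 * 60
MAX_PACKAGE_UNZIPPED_MB = 250

def validate_lambda_config(memory_mb, timeout_seconds, package_mb):
    # Return a list of limit violations; an empty list means the config fits.
    errors = []
    if memory_mb > MAX_MEMORY_MB:
        errors.append("memory exceeds 3,008 MB")
    if timeout_seconds > MAX_TIMEOUT_SECONDS:
        errors.append("timeout exceeds 15 minutes")
    if package_mb > MAX_PACKAGE_UNZIPPED_MB:
        errors.append("unzipped package exceeds 250 MB")
    return errors
```

A pre-deployment check like this catches, for example, a batch job whose runtime would exceed the 15-minute timeout and therefore belongs on EC2 or ECS instead.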

Section 5 Key Takeaways


• Serverless computing enables you to build and run applications and services without provisioning or
managing servers
• AWS Lambda is a serverless compute service that provides built-in fault tolerance and automatic scaling
• An event source is an AWS service or developer-created application that triggers a Lambda function
to run
• The maximum memory allocation for a single Lambda function is 3,008 MB
• The maximum execution time for a Lambda function is 15 minutes

Section 6: Introduction to AWS Elastic Beanstalk

• An easy way to get web applications up and running


• A managed service that automatically handles –
• Infrastructure provisioning and configuration
• Deployment
• Load balancing
• Automatic scaling
• Health monitoring
• Analysis and debugging
• Logging
• No additional charge for Elastic Beanstalk
• Pay only for the underlying resources that are used

AWS Elastic Beanstalk deployments


• It supports web applications written for common platforms
• Java, .NET, PHP, Node.js, Python, Ruby, Go, and Docker
• You upload your code
• Elastic Beanstalk automatically handles the deployment
• Deploys on servers such as Apache, NGINX, Passenger, Puma, and Microsoft Internet
Information Services (IIS)

Benefits of Elastic Beanstalk


• Fast and simple to start using
• Developer productivity
• Difficult to outgrow
• Complete resource control

Module 7: Storage

Section 1: Amazon Elastic Block Store (EBS)


• Amazon EBS provides persistent block storage volumes for use with Amazon EC2 instances.
• Persistent storage is any data storage device that retains data after power to that device is shut off
• It is also sometimes called non-volatile storage
EBS enables you to create individual storage volumes and attach them to an Amazon EC2 instance:
• Amazon EBS offers block-level storage
• Volumes are automatically replicated within their Availability Zone
• It can be backed up automatically to Amazon S3 through snapshots
• Uses include –
• Boot volumes and storage for Amazon EC2 instances
• Data storage with a file system
• Database hosts
• Enterprise applications

Amazon EBS features:


• Snapshots –
• Point-in-time snapshots
• Recreate a new volume at any time
• Encryption –
• Encrypted Amazon EBS volumes
• No additional cost
• Elasticity –
• Increase capacity
• Change to different types
Amazon EBS: Volumes, IOPS, and pricing
1. Volumes –
a. Amazon EBS volumes persist independently from the instance
b. All volume types are charged by the amount that is provisioned per month
2. IOPS –
a. General Purpose SSD:
i. Charged by the amount that you provision in GB per month until storage is
released
b. Magnetic:
i. Charged by the number of requests to the volume
c. Provisioned IOPS SSD:
i. Charged by the amount that you provision in IOPS (multiplied by the percentage
of days that you provision for the month)
3. Snapshots –
a. Added cost of Amazon EBS snapshots to Amazon S3 is per GB-month of data stored
4. Data transfer –
a. Inbound data transfer is free
b. Outbound data transfer across Regions incurs charges
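A back-of-envelope version of the billing factors above for a General Purpose SSD volume. The per-GB rates are hypothetical placeholders, not real AWS prices; the key point is that you pay for what you provision, not what you use.

```python
# Assumed per-GB-month rates for illustration only.
GP_SSD_PER_GB_MONTH = 0.10
SNAPSHOT_PER_GB_MONTH = 0.05

def ebs_monthly_cost(provisioned_gb, snapshot_gb):
    # Volumes are billed on provisioned size; snapshots on data stored in S3.
    return round(
        provisioned_gb * GP_SSD_PER_GB_MONTH
        + snapshot_gb * SNAPSHOT_PER_GB_MONTH,
        2,
    )
```

For example, a 100 GB volume with 40 GB of snapshot data would cost 12.00 per month at these assumed rates, which is why deleting unneeded snapshots is one of the storage-optimization recommendations.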

Section 2: Amazon Simple Storage Service (Amazon S3)


• S3 is object-level storage, which means that if you want to change a part of a file, you must make
the change and then re-upload the entire modified file.
• Amazon S3 stores data as objects within resources that are called buckets
• Virtually unlimited storage
• Single object is limited to 5 TB
• Designed for '11 9s' of durability
• Granular access to bucket and objects
• By default, data in Amazon S3 is stored redundantly across multiple facilities and multiple devices in
each facility
• Objects can be almost any data file, such as images, videos, or server logs
• Low-latency access to the data over HTTP/HTTPS
• You can also encrypt your data in transit and choose to enable server-side encryption of your
objects

Amazon S3 storage classes


• Amazon S3 Standard
• Designed for high durability, availability, and performance object storage for frequently
accessed data. Because it delivers low latency and high throughput, Amazon S3 Standard is
appropriate for a variety of use cases, including cloud applications, dynamic websites, content
distribution, mobile and gaming applications, and big data analytics.
• Amazon S3 Intelligent-Tiering
• Designed to optimize costs by automatically moving data to the most cost-effective access tier,
without performance impact or operational overhead.
• Amazon S3 Standard-Infrequent Access (Amazon S3 Standard-IA)
• Used for data that is accessed less frequently, but requires rapid access when needed. Designed
to provide the high durability, high throughput, and low latency of Amazon S3 Standard, with a
low per-GB storage price and per-GB retrieval fee. This combination of low cost and high
performance makes Amazon S3 Standard-IA good for long-term storage and backups, and as a
data store for disaster recovery files.
• Amazon S3 One Zone-Infrequent Access (Amazon S3 One Zone-IA)
• For data that is accessed less frequently, but requires rapid access when needed. Stores data in
a single Availability Zone and it costs less than Amazon S3 Standard-IA. Zone-IA works well for
customers who want a lower-cost option for infrequently accessed data, but do not require the
availability and resilience of Amazon S3 Standard or Amazon S3 Standard-IA. It is a good choice
for storing secondary backup copies of on-premises data or easily re-creatable data. You can
also use it as cost-effective storage for data that is replicated from another AWS region by using
Amazon S3 Cross-Region Replication.
• Amazon S3 Glacier
• A secure, durable, and low-cost storage class for data archiving. To keep costs low yet suitable
for varying needs, Amazon S3 Glacier provides three retrieval options that range from a few
minutes to hours.
• Amazon S3 Glacier Deep Archive
• The lowest-cost storage class for Amazon S3. It supports long-term retention and digital
preservation for data that might be accessed once or twice in a year. It is designed for
customers – particularly customers in highly regulated industries, such as financial services,
healthcare, and public sectors – that retain datasets for 7-10 years (or more) to meet regulatory
compliance requirements.

Amazon S3 bucket URLs (two styles)


• To upload your data:
• Create a bucket in an AWS Region
• Upload almost any number of objects to the bucket
• Bucket path-style URL endpoint:
• https://s3.region-code-1.amazonaws.com/bucket-name
• Bucket virtual hosted-style URL endpoint:
• https://bucket-name.s3-region-code.amazonaws.com
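The two endpoint styles shown above can be built from a bucket name and Region code. These helpers mirror the URL patterns as written in these notes (bucket and Region values are examples only; AWS has newer endpoint formats as well).

```python
def path_style_url(bucket: str, region: str) -> str:
    # Path style: the bucket name appears in the path.
    return f"https://s3.{region}.amazonaws.com/{bucket}"

def virtual_hosted_url(bucket: str, region: str) -> str:
    # Virtual hosted style: the bucket name becomes part of the hostname.
    return f"https://{bucket}.s3-{region}.amazonaws.com"
```

The virtual hosted style is what makes features like static website hosting possible, since the bucket name maps directly onto a DNS hostname.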

Access the data anywhere


• You can access Amazon S3 through the console, AWS CLI, or AWS SDK

Common use cases


• Storing application assets
• Static web hosting
• Backup and disaster recovery (DR)
• Staging area for big data
• Many more…
Amazon S3 common scenarios
• Backup and storage – Provide data backup and storage services for others
• Application hosting – Provide services that deploy, install, and manage web applications
• Media hosting – Build a redundant, scalable, and highly available infrastructure that hosts video,
photo, or music uploads and downloads
• Software delivery – Host your software applications that customers can download

Amazon S3 pricing
• Pay only for what you use, including –
• GBs per month
• Transfer OUT to other Regions
• PUT, COPY, POST, LIST, and GET requests
• You do not pay for –
• Transfers IN to Amazon S3
• Transfers OUT from Amazon S3 to Amazon CloudFront or Amazon EC2 in the same Region

Amazon S3: Storage pricing


1. Storage class type –
a. Standard storage is designed for:
i. 11 9s of durability
ii. Four 9s of availability
b. S3 Standard-Infrequent Access (S3 Standard-IA) is designed for:
i. 11 9s of durability
ii. Three 9s of availability
2. Amount of storage –
a. The number and size of objects
3. Requests –
a. The number and type of requests (GET, PUT, COPY)
b. Type of requests:
i. Different rates for GET requests than other requests
4. Data Transfer –
a. Pricing is based on the amount of data that is transferred out of the Amazon S3 Region
i. Data transfer in is free, but you incur charges for data that is transferred out
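The four billing factors above combine into a monthly estimate. All rates here are hypothetical placeholders, not real AWS prices; note that data transfer in contributes nothing.

```python
# Assumed rates for illustration only (not real AWS prices).
RATE_STORAGE_GB = 0.023        # per GB-month stored
RATE_PUT_PER_1000 = 0.005      # per 1,000 PUT/COPY/POST/LIST requests
RATE_GET_PER_1000 = 0.0004     # per 1,000 GET requests
RATE_TRANSFER_OUT_GB = 0.09    # per GB transferred out; transfer in is free

def s3_monthly_cost(storage_gb, put_requests, get_requests, transfer_out_gb):
    return round(
        storage_gb * RATE_STORAGE_GB
        + put_requests / 1000 * RATE_PUT_PER_1000
        + get_requests / 1000 * RATE_GET_PER_1000
        + transfer_out_gb * RATE_TRANSFER_OUT_GB,
        2,
    )
```

For example, 100 GB stored, 10,000 PUTs, 100,000 GETs, and 10 GB transferred out comes to 3.29 at these assumed rates, with storage dominating the bill.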

Section 3: Amazon Elastic File System (Amazon EFS)


• Provides simple, scalable, elastic file storage for use with AWS services and on-premises resources
• Offers a simple interface that enables you to create and configure file systems quickly and easily

Amazon EFS features


• File storage in the AWS Cloud
• Works well for big data and analytics, media processing workflows, content management, web
serving, and home directories
• Petabyte-scale, low-latency file system
• Shared storage
• Elastic capacity
• Supports Network File System (NFS) versions 4.0 and 4.1 (NFSv4)
• Compatible with all Linux-based AMIs for Amazon EC2

Amazon EFS implementation


1. Create your EC2 resources and launch your EC2 instance
2. Create your EFS file system
3. Create your mount targets in the appropriate subnets
4. Connect your EC2 instances to the mount targets
5. Verify the resources and protection of your AWS account

Amazon EFS resources


File system
• Mount target
• One or more per file system
• Create in a VPC subnet – one per Availability Zone
• Must be in the same VPC as the file system
• Properties include the subnet ID and security groups
• Tags
• Key-value pairs
Section 4: Amazon S3 Glacier

Amazon S3 Glacier is a data archiving service that is designed for security, durability, and an extremely
low cost:
• Designed to provide 11 9s of durability for objects
• It supports the encryption of data in transit and at rest through Secure Sockets Layer (SSL) or
Transport Layer Security (TLS)
• The Vault Lock feature enforces compliance through a policy
• Extremely low-cost design works well for long-term archiving
• Provides three options for access to archives – expedited, standard, and bulk – retrieval times
range from a few minutes to several hours
Amazon S3 Glacier
• Storage service for low-cost data archiving and long-term backup
• You can configure lifecycle archiving of Amazon S3 content to Amazon S3 Glacier
• Retrieval options –
• Standard: 3-5 hours
• Bulk: 5-12 hours
• Expedited: 1-5 minutes
Amazon S3 Glacier use cases
• Media asset archiving
• Healthcare information archiving
• Regulatory and compliance archiving
• Scientific data archiving
• Digital preservation
• Magnetic tape replacement

Using Amazon S3 Glacier


• To store and access data in S3 Glacier, you can use the AWS Management Console. However, only a
few operations – such as creating and deleting vaults, and creating and managing archive policies –
are available in the console.
• For almost all other operations and interactions with S3 Glacier, you must use either the Amazon S3
Glacier REST APIs, the AWS Java or .NET SDKs, or the AWS CLI.
Server-side encryption
• Server-side encryption with Amazon S3-managed encryption keys (SSE-S3) employs strong multi-
factor encryption. Amazon S3 encrypts each object with a unique key. As an additional safeguard, it
encrypts the key with a master key that it regularly rotates. Amazon S3 server-side encryption uses
one of the strongest block ciphers available, 256-bit Advanced Encryption Standard (AES-256), to
encrypt your data.
• Using server-side encryption with Customer-provided Encryption Keys (SSE-C) enables you to set
your own encryption keys. You include the encryption key as part of your request, and Amazon S3
manages both encryption (as it writes to disks), and decryption (when you access your objects).
• Server-side encryption with AWS Key Management Service (SSE-KMS) relies on AWS KMS, a service
that combines secure, highly available hardware and software to provide a key management system
that is scaled for the cloud. AWS KMS uses Customer Master Keys (CMKs) to encrypt your Amazon S3
objects. You use AWS KMS through the Encryption Keys section in the IAM console. You can also
access AWS KMS through the API to centrally create encryption keys, define the policies that control
how keys can be used, and audit key usage to prove that they are being used correctly. You can use
these keys to protect your data in Amazon S3 buckets.

Security with Amazon S3 Glacier


• Control access with IAM
• Amazon S3 Glacier encrypts your data with AES-256
• Amazon S3 Glacier manages your keys for you

Module 8: Databases

Section 1: Amazon Relational Database Service

Unmanaged versus managed services


• Unmanaged
• Scaling, fault tolerance, and availability are managed by you
• Managed
• Scaling, fault tolerance, and availability are typically built into the service
Challenges of relational databases
• Server maintenance and energy footprint
• Software installation and patches
• Database backups and high availability
• Limits on scalability
• Data security
• OS installation and patches

Amazon RDS
• A managed service that sets up and operates a relational database in the cloud
• AWS provides a service that sets up, operates, and scales the relational database without any
ongoing administration. Amazon RDS provides cost-efficient and resizable capacity, while
automating time-consuming administrative tasks.

Managed service responsibilities:


• You manage
• Application optimization
• AWS manages:
• OS installation and patches
• Database software installation and patches
• Database backups
• High availability
• Scaling
• Power, racking, and stacking of servers
• Server maintenance

Amazon RDS DB instances


• DB Instance Class
• CPU
• Memory
• Network performance
• DB Instance Storage
• Magnetic
• General Purpose (SSD)
• Provisioned IOPS
• DB engines: MySQL, Amazon Aurora, Microsoft SQL Server, PostgreSQL, MariaDB, Oracle

Amazon RDS read replicas


• Features
• Offers asynchronous replication
• Can be promoted to master if needed
• Functionality
• Use for read-heavy database workloads
• Offload read queries

Use cases
• Web and mobile applications
• High throughput
• Massive storage scalability
• High availability
• Ecommerce applications
• Low-cost database
• Data security
• Fully managed solution
• Mobile and online games
• Rapidly grow capacity
• Automatic scaling
• Database monitoring

When to use Amazon RDS


• Use Amazon RDS when your application requires:
• Complex transactions or complex queries
• A medium to high query or write rate – Up to 30,000 IOPS (15,000 reads + 15,000 writes)
• No more than a single worker node or shard
• High durability
• Do not use Amazon RDS when your application requires:
• Massive read/write rates (for example, 150,000 writes/second)
• Sharding due to high data size or throughput demands
• Simple GET or PUT requests and queries that a NoSQL database can handle
• Relational database management system (RDBMS) customization

Amazon RDS: Clock-hour billing and database characteristics


• Clock-hour billing –
• Resources incur charges when running
• Database characteristics –
• Physical capacity of database:
• Engine
• Size
• Memory class

Amazon RDS: DB purchase type and multiple DB instances


• DB purchase type –
• On-Demand Instances
• Compute capacity by the hour
• Reserved Instances
• Low, one-time, upfront payment for database instances that are reserved with a 1-year or 3-
year term
• Number of DB instances –
• Provision multiple DB instances to handle peak loads
Amazon RDS: Storage
• Provisioned storage –
• No charge: Backup storage of up to 100 percent of database storage for an active database
• Charge (GB/month): Backup storage for terminated DB instances
• Additional storage –
• Charge (GB/month): Backup storage in addition to provisioned storage

Amazon RDS: Deployment type and data transfer


• Requests –
• The number of input and output requests that are made to the database
• Deployment type – Storage and I/O charges vary, depending on whether you deploy to –
• Single Availability Zone
• Multiple Availability Zones
• Data transfer –
• No charge for inbound data transfer
• Tiered charges for outbound data transfer

Section 2: Amazon DynamoDB


• A relational database (RDB) works with structured data that is organized by tables, records, and
columns. RDBs establish well-defined relationships between database tables. RDBs use structured
query language (SQL), a standard language for defining, querying, and manipulating the data.
Relational databases might have difficulty scaling out horizontally or
working with semi-structured data, and might also require many joins for normalized data.
• A non-relational database is any database that does not follow the relational model that is provided
by traditional relational database management systems (RDBMS). Non-relational databases have
grown in popularity because they were designed to overcome the limitations of relational databases
for handling the demands of variable structured data. Non-relational databases scale out
horizontally, and they can work with unstructured and semi-structured data.

What is Amazon DynamoDB?


• Fast and flexible NoSQL database service for any scale
• NoSQL database tables
• Virtually unlimited storage
• Items can have differing attributes
• Low-latency queries
• Scalable read/write throughput

Amazon DynamoDB core components


• Tables, items, and attributes
• DynamoDB supports two different kinds of primary keys: Partition key and partition and sort key
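The core components above can be sketched with plain dicts: a table of items addressed by a composite (partition + sort) key, where items in the same table may carry different attributes. The table, key format, and attributes are hypothetical.

```python
# Table keyed by (partition key, sort key); each value is an item.
table = {}

def put_item(pk, sk, **attributes):
    # Items in the same table can have differing attributes.
    table[(pk, sk)] = {"pk": pk, "sk": sk, **attributes}

def get_item(pk, sk):
    # Exact-key lookup, as with DynamoDB's GetItem.
    return table.get((pk, sk))

put_item("user#1", "order#2024-01", total=30, coupon="WELCOME")
put_item("user#1", "order#2024-02", total=55)  # no coupon attribute
```

All items with the same partition key live together and are ordered by the sort key, which is what makes range queries within one partition efficient.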
Section 3: Amazon Redshift
• A fast, fully managed data warehouse that makes it simple and cost-effective to analyze all your
data by using standard SQL and your existing business intelligence (BI) tools.

Automation and scaling


• It is straightforward to automate most of the common administrative tasks to manage, monitor, and
scale your Amazon Redshift cluster – which enables you to focus on your data and your business.
• Security is built in and it is designed to provide strong encryption of your data both at rest and in
transit

Compatibility
• Amazon Redshift is compatible with the tools that you already know and use. It supports standard
SQL. It also provides high-performance Java Database Connectivity (JDBC) and Open Database
Connectivity (ODBC) connectors, which enable you to use the SQL clients and BI tools of your choice.

Amazon Redshift use cases


• Enterprise data warehouse (EDW)
• Migrate at a pace that customers are comfortable with
• Experiment without large upfront cost or commitment
• Respond faster to business needs
• Big data
• Low price point for small customers
• Managed service for ease of deployment and maintenance
• Focus more on data and less on database management
• Software as a service (SaaS)
• Scale the data warehouse capacity as demand grows
• Add analytic functionality to applications
• Reduce hardware and software costs

Section 4: Amazon Aurora


• Enterprise-class relational database
• Compatible with MySQL or PostgreSQL
• Automate time-consuming tasks (such as provisioning, patching, backup, recovery, failure detection,
and repair)
• Service benefits: Fast and reliable, simple, compatible, pay-as-you-go, managed service

High availability
• Amazon Aurora stores multiple copies of your data across multiple Availability Zones, with
continuous backups to Amazon S3. You can use up to 15 read replicas to reduce the possibility of
losing your data. Aurora is designed for instant crash recovery if your primary database becomes
unhealthy.

The right tool for the right job


• Amazon RDS – Enterprise-class relational database
• Amazon DynamoDB – Fast and flexible NoSQL database service for any scale
• Databases on Amazon EC2 – OS access or application features that are not supported by AWS
database services
• AWS purpose-built database services – Specific case-driven requirements (machine learning, data
warehouse, graphs)

Module 9: Cloud Architecture

Section 1: AWS Well-Architected Framework

Cloud architects:
• Engage with decision makers to identify the business goal and the capabilities that need
improvement
• Ensure alignment between technology deliverables of a solution and the business goals
• Work with delivery teams that are implementing the solution to ensure that the technology features
are appropriate

What is the AWS Well-Architected Framework?


• A guide for designing infrastructures that are:
• Secure
• High-performing
• Resilient
• Efficient
• A consistent approach to evaluating and implementing cloud architectures
• A way to provide best practices that were developed through lessons learned by reviewing customer
architectures

Pillars of the AWS Well-Architected Framework


• Operational excellence
• Security
• Reliability
• Performance efficiency
• Cost optimization

Operational Excellence pillar


• Focus
• Run and monitor systems to deliver business value, and to continually improve supporting
processes and procedures
• Key topics
• Managing and automating changes
• Responding to events
• Defining standards to successfully manage daily operations

Operational excellence design principles


• Perform operations as code
• Annotate documentation
• Make frequent, small, reversible changes
• Refine operations procedures frequently
• Anticipate failure
• Learn from all operational events and failures

Operational excellence questions


• Prepare
• How do you determine what your priorities are?
• How do you design your workload so that you can understand its state?
• How do you reduce defects, ease remediation, and improve flow into production?
• How do you mitigate deployment risk?
• How do you know that you are ready to support a workload?
• Operate
• How do you understand the health of your workload?
• How do you understand the health of your operations?
• How do you manage workload and operations events?
• Evolve
• How do you evolve operations?
Security pillar
• Focus
• Protect information, systems, and assets while delivering business value through risk
assessments and mitigation strategies
• Key topics
• Identifying and managing who can do what
• Establishing controls to detect security events
• Protecting systems and services
• Protecting confidentiality and integrity of data

Security design principles


• Implement a strong identity foundation
• Enable traceability
• Apply security at all layers
• Automate security best practices
• Protect data in transit and at rest
• Keep people away from data
• Prepare for security events

Security questions
• Identity and access management
• How do you manage credentials and authentication?
• How do you control human access?
• How do you control programmatic access?
• Detective controls
• How do you detect and investigate security events?
• How do you defend against emerging security threats?
• Infrastructure protection
• How do you protect your networks?
• How do you protect your compute resources?
• Data protection
• How do you classify your data?
• How do you protect your data at rest?
• How do you protect your data in transit?
• Incident response
• How do you respond to an incident?

Reliability pillar
• Focus
• Prevent and quickly recover from failures to meet business and customer demand
• Key topics
• Setting up
• Cross-project requirements
• Recovery planning
• Handling change

Reliability design principles


• Test recovery procedures
• Automatically recover from failure
• Scale horizontally to increase aggregate system availability
• Stop guessing capacity
• Manage change in automation

Reliability questions
• Foundations
• How do you manage service limits?
• How do you manage your network topology?
• Change management
• How does your system adapt to changes in demand?
• How do you monitor your resources?
• How do you implement change?
• Failure management
• How do you back up data?
• How does your system withstand component failure?
• How do you test resilience?
• How do you plan for disaster recovery?

Performance Efficiency pillar


• Focus
• Use IT and computing resources efficiently to meet system requirements and to maintain that
efficiency as demand changes and technologies evolve
• Key topics
• Selecting the right resource types and sizes based on workload requirements
• Monitoring performance
• Making informed decisions to maintain efficiency as business needs evolve

Performance efficiency design principles


• Democratize advanced technologies
• Go global in minutes
• Use serverless architectures
• Experiment more often
• Have mechanical sympathy
Performance efficiency questions
• Selection
• How do you select the best performing architecture?
• How do you select your compute solution?
• How do you select your storage solution?
• How do you select your database solution?
• How do you select your networking solution?
• Review
• How do you evolve your workload to take advantage of new releases?
• Monitoring
• How do you monitor your resources to ensure they are performing as expected?
• Tradeoffs
• How do you use tradeoffs to improve performance?

Cost Optimization pillar


• Focus
• Run systems to deliver business value at the lowest price point
• Key topics
• Understanding and controlling when money is spent
• Selecting the most appropriate number and type of resources
• Analyzing spending over time
• Scaling to meet business needs without overspending

Cost optimization design principles


• Adopt a consumption model
• Measure overall efficiency
• Stop spending money on data center operations
• Analyze and attribute expenditure
• Use managed and application-level services to reduce cost of ownership

Cost optimization questions


• Expenditure awareness
• How do you govern usage?
• How do you monitor usage and cost?
• How do you decommission resources?
• Cost-effective resources
• How do you evaluate cost when you select services?
• How do you meet cost targets when you select resource type and size?
• How do you use pricing models to reduce cost?
• How do you plan for data transfer changes?
• Matching supply and demand
• How do you match supply of resources with demand?
• Optimizing over time
• How do you evaluate new services?

The AWS Well-Architected Tool


• Helps you review the state of your workloads and compares them to the latest AWS architectural
best practices
• Gives you access to knowledge and best practices used by AWS architects, whenever you need it
• Delivers an action plan with step-by-step guidance on how to build better workloads for the cloud
• Provides a consistent process for you to review and measure your cloud architectures

Section 2: Reliability and availability

“Everything fails, all the time.” – Werner Vogels, CTO, Amazon

Reliability
• A measure of your system’s ability to provide functionality when desired by the user
• System includes all system components: hardware, firmware, and software
• Probability that your entire system will function as intended for a specific period
• Mean time between failures (MTBF) = total time in service/number of failures

Availability
• Normal operation time / total time
• A percentage of uptime (for example, 99.9 percent) over time (for example, 1 year)
• Number of 9s – Five 9s means 99.999 percent availability
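The two formulas above are simple arithmetic, and a quick worked example shows why "five 9s" is such a demanding target. A short sketch in Python; the service-time and failure figures are illustrative, not from the course:

```python
# MTBF = total time in service / number of failures
total_hours_in_service = 8760  # one year of service
failures = 4                   # illustrative failure count
mtbf = total_hours_in_service / failures
print(f"MTBF: {mtbf} hours between failures")  # 2190.0

# Availability = normal operation time / total time
downtime_hours = 8  # illustrative downtime over the year
availability = (total_hours_in_service - downtime_hours) / total_hours_in_service
print(f"Availability: {availability:.4%}")

# Allowed downtime per year at each "number of 9s"
for nines in (3, 4, 5):
    target = float(f"0.{'9' * nines}")
    allowed_minutes = (1 - target) * total_hours_in_service * 60
    print(f"{nines} nines ({target:.3%}): {allowed_minutes:.1f} minutes of downtime per year")
```

Three 9s allows roughly 8.8 hours of downtime a year; five 9s allows only about 5 minutes, which is why highly available designs must minimize human intervention.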
High availability
• System can withstand some measure of degradation while still remaining available
• Downtime is minimized
• Minimal human intervention is required

Factors that influence availability


• Fault tolerance
• The built-in redundancy of an application’s components and its ability to remain operational
• Scalability
• The ability of an application to accommodate increases in capacity needs without changing
design
• Recoverability
• The process, policies, and procedures that are related to restoring service after a catastrophic
event

Section 3: AWS Trusted Advisor

AWS Trusted Advisor


• Online tool that provides real-time guidance to help you provision your resources following AWS
best practices
• Looks at your entire AWS environment and gives you real-time recommendations in five categories
• Cost Optimization
• Performance
• Security
• Fault Tolerance
• Service Limits

Module 10: Auto Scaling and Monitoring

Section 1: Elastic Load Balancing

Elastic Load Balancing


• Distributes incoming application or network traffic across multiple targets in a single Availability
Zone or across multiple Availability Zones
• Scales your load balancer as traffic to your application changes over time

Types of load balancers


• Application Load Balancer
• Load balancing of HTTP and HTTPS traffic
• Routes traffic to targets based on content of request
• Provides advanced request routing targeted at the delivery of modern application architectures,
including microservices and containers
• Operates at the application layer (OSI model layer 7)
• Network Load Balancer
• Load balancing of TCP, UDP, and TLS traffic where extreme performance is required
• Routes traffic to targets based on IP protocol data
• Can handle millions of requests per second while maintaining ultra-low latencies
• Is optimized to handle sudden and volatile traffic patterns
• Operates at the transport layer (OSI model layer 4)
• Classic Load Balancer (Previous Generation)
• Load balancing of HTTP, HTTPS, TCP, and SSL traffic
• Load balancing across multiple EC2 instances
• Operates at both the application and transport layers
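The content-based routing described for the Application Load Balancer is expressed as listener rules. A minimal sketch in Python with hypothetical, truncated ARNs as placeholders; the dictionary matches the parameter shape of the boto3 `elbv2` client's `create_rule` call, which is left commented out so the sketch runs without AWS credentials:

```python
# Hypothetical ALB listener rule: path-based routing, the kind of
# content-of-request rule an Application Load Balancer evaluates at layer 7.
rule_params = {
    "ListenerArn": "arn:aws:elasticloadbalancing:...:listener/app/my-alb/...",  # placeholder
    "Priority": 10,  # lower numbers are evaluated first
    "Conditions": [
        # Match any request whose path starts with /api/
        {"Field": "path-pattern", "Values": ["/api/*"]},
    ],
    "Actions": [
        # Forward matching requests to a dedicated target group
        # (for example, a microservice running in containers).
        {"Type": "forward",
         "TargetGroupArn": "arn:aws:elasticloadbalancing:...:targetgroup/api/..."},  # placeholder
    ],
}
# With boto3 and credentials configured:
#   boto3.client("elbv2").create_rule(**rule_params)
print(rule_params["Conditions"][0]["Values"])
```

A Network Load Balancer has no equivalent of `Conditions` on request content; it routes on IP protocol data at layer 4, which is the practical difference between the two types.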

Elastic Load Balancing use cases


• Highly available and fault-tolerant applications
• Containerized applications
• Elasticity and scalability
• Virtual private cloud (VPC)
• Hybrid environments
• Invoke Lambda functions over HTTP(S)

Load balancer monitoring


• Amazon CloudWatch metrics –
• Used to verify that the system is performing as expected; you can create an alarm that
initiates an action if a metric goes outside an acceptable range
• Access logs –
• Capture detailed information about requests sent to your load balancer
• AWS CloudTrail logs –
• Capture the who, what, when, and where of API interactions in AWS services
Section 2: Amazon CloudWatch

Monitoring AWS resources


• How do you know when you should launch more Amazon EC2 instances?
• Is your application’s performance or availability being affected by a lack of sufficient capacity?
• How much of your infrastructure is actually being used?

Amazon CloudWatch
• Monitors –
• AWS resources
• Applications that run on AWS
• Collects and tracks –
• Standard metrics
• Custom metrics
• Alarms –
• Send notifications to an Amazon SNS topic
• Perform Amazon EC2 Auto Scaling or Amazon EC2 actions
• Events –
• Define rules to match changes in AWS environment and route these events to one or more
target functions or streams for processing

CloudWatch alarms
• Create alarms based on –
• Static threshold
• Anomaly detection
• Metric math expression
• Specify –
• Namespace
• Metric
• Statistic
• Period
• Conditions
• Additional configuration
• Actions
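The settings listed above map directly onto the parameters of a static-threshold alarm. A minimal sketch in Python with hypothetical names and thresholds; the dictionary matches the parameter shape of boto3's `put_metric_alarm` call, which is left commented out so the sketch runs without AWS credentials:

```python
# Static-threshold alarm on average CPU utilization of EC2 instances.
alarm_params = {
    "AlarmName": "high-cpu",                # hypothetical alarm name
    "Namespace": "AWS/EC2",                 # namespace
    "MetricName": "CPUUtilization",         # metric
    "Statistic": "Average",                 # statistic
    "Period": 300,                          # period: 5 minutes, in seconds
    "EvaluationPeriods": 2,                 # conditions: 2 consecutive periods...
    "Threshold": 80.0,                      # ...with average CPU above 80 percent
    "ComparisonOperator": "GreaterThanThreshold",
    "AlarmActions": [
        # Action: notify an SNS topic (placeholder ARN).
        "arn:aws:sns:us-east-1:123456789012:ops-alerts",
    ],
}
# With boto3 and credentials configured:
#   boto3.client("cloudwatch").put_metric_alarm(**alarm_params)
print(alarm_params["AlarmName"], alarm_params["Threshold"])
```

The same `AlarmActions` list can instead reference an EC2 Auto Scaling policy, which is how alarms drive the scaling described in the next section.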

Section 3: Amazon EC2 Auto Scaling

• Helps you maintain application availability


• Enables you to automatically add or remove EC2 instances according to conditions that you define
• Detects impaired EC2 instances and unhealthy applications, and replaces the instances without your
intervention
• Provides several scaling options – Manual, scheduled, dynamic (on-demand), and predictive
Auto Scaling groups
• An Auto Scaling group is a collection of EC2 instances that are treated as a logical grouping for the
purposes of automatic scaling and management
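An Auto Scaling group is defined mostly by its size boundaries and the template its instances launch from. A minimal sketch in Python with hypothetical group, template, and subnet names; the dictionary matches the parameter shape of boto3's `create_auto_scaling_group` call, which is left commented out so the sketch runs without AWS credentials:

```python
# Hypothetical Auto Scaling group spanning two Availability Zones.
asg_params = {
    "AutoScalingGroupName": "web-asg",  # hypothetical group name
    "LaunchTemplate": {
        "LaunchTemplateName": "web-template",  # hypothetical launch template
        "Version": "$Latest",
    },
    "MinSize": 2,          # never run fewer than 2 instances
    "MaxSize": 10,         # cap on scale-out
    "DesiredCapacity": 2,  # starting size; scaling policies adjust this
    # Placeholder subnet IDs in two different Availability Zones:
    "VPCZoneIdentifier": "subnet-aaaa1111,subnet-bbbb2222",
    # Replace instances that the load balancer reports as unhealthy:
    "HealthCheckType": "ELB",
}
# With boto3 and credentials configured:
#   boto3.client("autoscaling").create_auto_scaling_group(**asg_params)
print(asg_params["MinSize"], asg_params["MaxSize"])
```

Dynamic scaling then works by having CloudWatch alarms adjust `DesiredCapacity` between `MinSize` and `MaxSize`, while the group itself handles replacing impaired instances.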
AWS Auto Scaling
• Monitors your applications and automatically adjusts capacity to maintain steady, predictable
performance at the lowest possible cost
• Provides a simple, powerful user interface that enables you to build scaling plans for resources,
including –
• Amazon EC2 instances and Spot Fleets
• Amazon Elastic Container Service (ECS) tasks
• Amazon DynamoDB tables and indexes
• Amazon Aurora Replicas
