CCL Merged PDF
Essential Characteristics:
1) Broad network access: Capabilities are available over the network and
accessible through standard mechanisms, supporting diverse client platforms.
3) Public cloud: Cloud infrastructure is provisioned for open use by the general
public. It is owned, managed, and operated by a business, academic, or government
organization, and it exists on the premises of the cloud provider.
1. Frontend
2. Backend

2. Backend :
The backend refers to the cloud itself, which is used by the service provider. It
contains the resources, manages them, and provides security mechanisms. It also
includes large-scale storage, virtual applications, virtual machines, traffic
control mechanisms, deployment models, etc.
1. Application –
The application in the backend refers to the software or platform that a
client accesses; it delivers the service in the backend according to the
client's requirements.
2. Service –
Service in the backend refers to the three major types of cloud-based
services: SaaS, PaaS, and IaaS. It also manages which type of service
the user accesses.
3. Runtime Cloud –
The runtime cloud in the backend provides the execution and runtime
platform/environment for the virtual machines.
4. Storage –
Storage in the backend provides flexible and scalable storage service
and management of stored data.
5. Infrastructure –
Cloud Infrastructure in backend refers to the hardware and software
components of cloud like it includes servers, storage, network devices,
virtualization software etc.
6. Management –
Management in backend refers to management of backend
components like application, service, runtime cloud, storage,
infrastructure, and other security mechanisms etc.
7. Security –
Security in the backend refers to the implementation of different security
mechanisms that secure cloud resources, systems, files, and infrastructure
for end-users.
8. Internet –
Internet connection acts as the medium or a bridge between frontend
and backend and establishes the interaction and communication
between frontend and backend.
9. Database – The database in the backend stores structured data, using
SQL and NoSQL databases. Examples of database services include Amazon
RDS, Microsoft Azure SQL Database, and Google Cloud SQL.
10. Networking – Networking in the backend refers to services that provide
networking infrastructure for applications in the cloud, such as load
balancing, DNS, and virtual private networks.
11. Analytics – Analytics in the backend refers to services that provide
analytics capabilities for data in the cloud, such as data warehousing,
business intelligence, and machine learning.
PRN : 121A1088
Batch : D-1
Theory:-
VirtualBox is a popular open-source virtualization software that allows you to create and
manage virtual machines on your computer. Here are the basic steps to create a virtual
machine using VirtualBox:
The principle of virtualization was explored, leading to the creation of a virtual
instance running Ubuntu. Within this virtual environment, code execution was conducted.
Name : Rishit Ravichandran
Experiment No. 3
This lab provides you with a basic overview of launching, resizing, managing, and
monitoring an Amazon EC2 instance.
Amazon Elastic Compute Cloud (Amazon EC2) is a web service that provides
resizable compute capacity in the cloud. It is designed to make web-scale cloud
computing easier for developers.
Amazon EC2's simple web service interface allows you to obtain and configure capacity
with minimal friction. It provides you with complete control of your computing resources
and lets you run on Amazon's proven computing environment. Amazon EC2 reduces
the time required to obtain and boot new server instances to minutes, allowing you to
quickly scale capacity, both up and down, as your computing requirements change.
Amazon EC2 changes the economics of computing by allowing you to pay only for
capacity that you actually use. Amazon EC2 provides developers the tools to build
failure resilient applications and isolate themselves from common failure scenarios.
4. In the AWS Management Console choose Services, choose Compute and then
choose EC2.
Note: Verify that your EC2 console is currently managing resources in the N. Virginia
(us-east-1) region. You can verify this by looking at the drop down menu at the top of
the screen, to the left of your username. If it does not already indicate N. Virginia,
choose the N. Virginia region from the region menu before proceeding to the next step.
15. Scroll to the bottom of the page and then copy and paste the code shown below
into the User data box:

#!/bin/bash
dnf install -y httpd
systemctl enable httpd
systemctl start httpd
echo '<html><h1>Hello From Your Web Server!</h1></html>' > /var/www/html/index.html
17. When you launch an instance, you can pass user data to the instance that can
be used to perform automated installation and configuration tasks after the
instance starts.
Your instance is running Amazon Linux 2023. The shell script you have specified
will run as the root guest OS user when the instance starts. The script will:
○ Install an Apache web server (httpd)
○ Configure the web server to automatically start on boot
○ Run the web server once it has finished installing
○ Create a simple web page
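The steps above can be sanity-checked locally. This is a sketch that dry-runs the final user-data step, writing the test page into a scratch directory instead of /var/www/html, so no root access or web server is needed:

```shell
# Dry run of the last user-data step: write the same test page into a
# temporary directory instead of /var/www/html (no root or httpd needed).
DOCROOT=$(mktemp -d)
echo '<html><h1>Hello From Your Web Server!</h1></html>' > "$DOCROOT/index.html"
cat "$DOCROOT/index.html"
```

On the real instance, the same echo line runs as root and the resulting page is served by httpd from /var/www/html.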
Step 8: Launch the instance
18.At the bottom of the Summary panel choose Launch instance
You will see a Success message.
19.Choose View all instances
a. In the Instances list, select Web Server.
b. Review the information displayed in the Details tab. It includes information
about the instance type, security settings and network settings.
The instance is assigned a Public IPv4 DNS that you can use to contact
the instance from the Internet.
To view more information, drag the window divider upwards.
At first, the instance will appear in a Pending state, which means it is
being launched. It will then change to Initializing, and finally to Running.
21.Wait for your instance to display the following:
a. Instance State: Running
b. Status Checks: 2/2 checks passed
22. Scroll through the output and note that the httpd package was installed from
the user data that you added when you created the instance.
30.Open a new tab in your web browser, paste the IP address you just copied, then
press Enter.
Question: Are you able to access your web server? Why not?
You are not currently able to access your web server because the security group
is not permitting inbound traffic on port 80, which is used for HTTP web requests.
This is a demonstration of using a security group as a firewall to restrict the
network traffic that is allowed in and out of an instance.
To correct this, you will now update the security group to permit web traffic on
port 80.
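For reference, the same fix can be expressed with the AWS CLI. This is a hedged sketch rather than a lab step: the security group ID below is hypothetical, so the command is echoed for inspection instead of being executed.

```shell
# Hypothetical security group ID, for illustration only.
GROUP_ID="sg-0123456789abcdef0"
# CLI equivalent of adding an inbound HTTP rule from Anywhere-IPv4 in the console.
CMD="aws ec2 authorize-security-group-ingress --group-id $GROUP_ID --protocol tcp --port 80 --cidr 0.0.0.0/0"
echo "$CMD"
```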
31.Keep the browser tab open, but return to the EC2 Console tab.
35.Choose Edit inbound rules, select Add rule and then configure:
○ Type: HTTP
○ Source: Anywhere-IPv4
○ Choose Save rules
36.Return to the web server tab that you previously opened and refresh the page.
You should see the message Hello From Your Web Server!
When you stop an instance, it is shut down. There is no runtime charge for a stopped
EC2 instance, but the storage charge for attached Amazon EBS volumes remains.
37.On the EC2 Management Console, in the left navigation pane, choose
Instances and then select the Web Server instance.
45. Change the size to: 10. NOTE: You may be restricted from creating Amazon EBS
volumes larger than 10 GB in this lab.
46.Choose Modify
47.Choose Modify again to confirm and increase the size of the volume.
51.In the AWS Management Console, in the search box next to Services, search for
and choose Service Quotas
52.Choose AWS services from the navigation menu and then in the AWS services
Find services search bar, search for ec2 and choose Amazon Elastic
Compute Cloud (Amazon EC2).
53.In the Find quotas search bar, search for running on-demand, but do not make
a selection. Instead, observe the filtered list of service quotas that match the
criteria.
Notice that there are limits on the number and types of instances that can run in
a region. For example, there is a limit on the number of Running On-Demand
Standard... instances that you can launch in this region. When launching
instances, the request must not cause your usage to exceed the instance limits
currently defined in that region.
If you are the AWS account owner, you can request an increase for many
of these limits.
54.In the AWS Management Console, in the search box next to Services, search for
and choose EC2 to return to the EC2 console.
55.In left navigation pane, choose Instances.
56.Select the Web Server instance and in the Instance state menu, select Stop
instance.
58. In the Actions menu, select Instance settings > Change stop protection.
60.Choose Save
You can now stop the instance.
61.Select the Web Server instance again and in the Instance state menu, select
Stop instance.
62.Choose Stop
Congratulations! You have successfully tested stop protection and stopped
your instance.
Experiment No.5
4. In the console, in the search box to the right of Services, search for and
choose Elastic Beanstalk.
A page titled Environments should open, and it should show a table that lists
the details for an existing Elastic Beanstalk application.
Note: If the status in the Health column is not Ok, it has not finished starting yet.
Wait a few moments, and it should change to Ok.
5. Under the Environment name column, choose the name of the environment.
The Dashboard page for your Elastic Beanstalk environment opens.
6. Notice that the page shows that the health of your application is Ok.
The Elastic Beanstalk environment is ready to host an application. However, it
does not yet have running code.
○ Return to the Elastic Beanstalk console.
In the next step, you will deploy code in your Elastic Beanstalk
environment.
8. https://docs.aws.amazon.com/elasticbeanstalk/latest/dg/samples/tomcat.zip
10.Choose Choose File, then navigate to and open the tomcat.zip file that you just
downloaded.
14. In the Networking, database, and tags panel, no configuration details are
displayed, because the environment does not include a database.
15. In the Networking, database, and tags row, choose Edit.
Note that you could easily add a database to this environment if you wanted
to: you only need to set a few basic configurations and choose Apply. (However,
for the purposes of this activity, you do not need to add a database.)
○ Choose Cancel at the bottom of the screen.
17.In the left panel under Environment, choose Monitoring.
Browse through the charts to see the kinds of information that are available to
you.
17. In the console, in the search box to the right of Services, search for and
choose EC2.
18.Choose Instances.
Note that two instances that support your web application are running (they both
contain samp in their names).
19.If you want to continue exploring the Amazon EC2 service resources that were
created by Elastic Beanstalk, feel free to explore them. You will find:
○ A security group with port 80 open
○ A load balancer that both instances belong to
○ An Auto Scaling group that runs from two to six instances, depending on
the network load
20.Though Elastic Beanstalk created these resources for you, you still have access
to them.
Lab Overview
This lab focuses on Amazon Elastic Block Store (Amazon EBS), a key underlying
storage mechanism for Amazon EC2 instances. In this lab, you will learn how to create
an Amazon EBS volume, attach it to an instance, apply a file system to the volume,
and then take a snapshot backup.
In this lab environment, access to AWS services and service actions might be restricted
to the ones that are needed to complete the lab instructions. You might encounter errors
if you attempt to access other services or perform actions beyond the ones that are
described in this lab.
When an Amazon EBS volume is used as a boot partition, Amazon EC2 instances can be
stopped and subsequently restarted, enabling you to pay only for the storage resources
used while maintaining your instance's state. Amazon EBS volumes offer greatly improved
durability over local Amazon EC2 instance stores because Amazon EBS volumes are
automatically replicated on the backend (in a single Availability Zone).
For those wanting even more durability, Amazon EBS provides the ability to create
point-in-time consistent snapshots of your volumes that are then stored in Amazon
Simple Storage Service (Amazon S3) and automatically replicated across multiple
Availability Zones. These snapshots can be used as the starting point for new Amazon
EBS volumes and can protect your data for long-term durability. You can also easily
share these snapshots with co-workers and other AWS developers.
This lab guide explains basic concepts of Amazon EBS in a step-by-step fashion.
However, it can only give a brief overview of Amazon EBS concepts. For further
information, see the Amazon EBS documentation.
Amazon EBS Volume Features
Amazon EBS volumes deliver the following features:
● Persistent storage: Volume lifetime is independent of any particular Amazon EC2
instance.
● General purpose: Amazon EBS volumes are raw, unformatted block devices that
can be used from any operating system.
● High performance: Amazon EBS volumes are equal to or better than local
Amazon EC2 drives.
● High reliability: Amazon EBS volumes have built-in redundancy within an
Availability Zone.
● Designed for resiliency: The AFR (Annual Failure Rate) of Amazon EBS is
between 0.1% and 1%.
● Variable size: Volume sizes range from 1 GB to 16 TB.
● Easy to use: Amazon EBS volumes can be easily created, attached, backed up,
restored, and deleted.
2. To connect to the AWS Management Console, choose the AWS link in the
upper-left corner.
○ A new browser tab opens and connects you to the console.
Tip: If a new browser tab does not open, a banner or icon is usually at the top of
your browser with the message that your browser is preventing the site
from opening pop-up windows. Choose the banner or icon, and then
choose Allow pop-ups.
3. Arrange the AWS Management Console tab so that it displays alongside these
instructions. Ideally, you will be able to see both browser tabs at the same time, to
make it easier to follow the lab steps.
Tip: The script that checks your work may only award points if you name resources and
set configurations as specified. In particular, values in these instructions that appear
in This Format should be entered exactly as documented (case-sensitive).
Task 1: Create a New EBS Volume
In this task, you will create and attach an Amazon EBS volume to a new Amazon EC2
instance.
4. In the AWS Management Console, in the search box next to Services, search for
and select EC2.
8. Choose Create volume then configure:
○ Volume Type: General Purpose SSD (gp2)
○ Size (GiB): 1. NOTE: You may be restricted from creating large volumes.
○ Availability Zone: Select the same availability zone as your EC2
instance.
○ Choose Add tag
○ In the Tag Editor, enter:
■ Key: Name
■ Value: MyVolume
9. Choose Create Volume
Your new volume will appear in the list, and will move from the Creating state to
the Available state. You may need to choose refresh to see your new volume.
Task 2: Attach the Volume to an Instance
In this task you will attach the new EBS volume to the Amazon EC2 instance.
12. Choose the Instance field, then select the Lab instance.
Note that the Device name is set to /dev/sdf. Notice also the message displayed that
"Newer Linux kernels may rename your devices to /dev/xvdf through
/dev/xvdp internally, even when the device name entered here (and shown in the
details) is /dev/sdf through /dev/sdp."
13. Choose Attach volume
The volume state is now In-use.
Task 3: Connect to Your Amazon EC2 Instance
In this task, you will connect to the EC2 instance using EC2 Instance Connect, which
provides access to a terminal in the browser.
View the storage available on your instance. Run the following command:

df -h

Filesystem      Size  Used Avail Use% Mounted on
devtmpfs        4.0M     0  4.0M   0% /dev
tmpfs           475M     0  475M   0% /dev/shm
tmpfs           190M  2.8M  188M   2% /run
/dev/xvda1      8.0G  1.6G  6.5G  20% /
tmpfs           475M     0  475M   0% /tmp
tmpfs            95M     0   95M   0% /run/user/1000
The output shows the original 8 GB /dev/xvda1 disk volume mounted at /, which
indicates that it is the root volume. It hosts the Linux operating system of the EC2
instance.
The 1 GB volume that you attached to the Lab instance is not listed, because you
have not yet created a file system on it or mounted the disk. Those actions are
necessary so that the Linux operating system can make use of the new storage space.
You will take those actions next.
sudo mkfs -t ext3 /dev/sdf

The output should indicate that a new file system was created on the attached volume.
Create a directory for mounting the new storage volume:

sudo mkdir /mnt/data-store
sudo mount /dev/sdf /mnt/data-store

To configure the Linux instance to mount this volume whenever the instance is
started, you will need to add a line to /etc/fstab. Run the commands below to
accomplish that:

echo "/dev/sdf /mnt/data-store ext3 defaults,noatime 1 2" | sudo tee -a /etc/fstab
cat /etc/fstab
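The fstab entry above has six fields: device, mount point, file system type, mount options, dump flag, and fsck order. As a sketch (writing to a scratch file rather than the real /etc/fstab, so no root access is needed), the fields can be inspected like this:

```shell
# Write the same entry to a scratch file and print its six fields.
FSTAB=$(mktemp)
echo "/dev/sdf /mnt/data-store ext3 defaults,noatime 1 2" > "$FSTAB"
awk '{ print "device:  " $1
       print "mount:   " $2
       print "type:    " $3
       print "options: " $4
       print "dump:    " $5 "  fsck-order: " $6 }' "$FSTAB"
```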
View the available storage again:

df -h

Filesystem      Size  Used Avail Use% Mounted on
devtmpfs        484M     0  484M   0% /dev
tmpfs           492M     0  492M   0% /dev/shm
tmpfs           492M  460K  491M   1% /run
tmpfs           492M     0  492M   0% /sys/fs/cgroup
/dev/xvda1      8.0G  1.5G  6.6G  19% /
tmpfs            99M     0   99M   0% /run/user/0
tmpfs            99M     0   99M   0% /run/user/1000
/dev/xvdf       976M  1.3M  924M   1% /mnt/data-store

Notice the last line. The output now lists /dev/xvdf, which is the new mounted volume.
On your mounted volume, create a file and add some text to it.

sudo sh -c "echo some text has been written > /mnt/data-store/file.txt"

Verify that the text has been written to your volume.

cat /mnt/data-store/file.txt

Leave the EC2 Instance Connect session running. You will return to it later
in this lab.
Task 5: Create an Amazon EBS Snapshot
In this task, you will create a snapshot of your EBS volume.
You can create any number of point-in-time, consistent snapshots from Amazon EBS
volumes at any time. Amazon EBS snapshots are stored in Amazon S3 with high
durability. New Amazon EBS volumes can be created out of snapshots for cloning or
restoring backups. Amazon EBS snapshots can also be easily shared among AWS users or
copied over AWS regions.
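The console snapshot steps in this task can also be expressed with the AWS CLI. A sketch, echoed rather than executed, because the volume ID below is hypothetical:

```shell
# Hypothetical volume ID; in the lab you would look up the real ID in the console.
VOLUME_ID="vol-0123456789abcdef0"
# CLI equivalent of creating a snapshot tagged Name=MySnapshot.
CMD="aws ec2 create-snapshot --volume-id $VOLUME_ID --tag-specifications ResourceType=snapshot,Tags=[{Key=Name,Value=MySnapshot}]"
echo "$CMD"
```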
28. Choose Add tag then configure:
○ Key: Name
○ Value: MySnapshot
○ Choose Create snapshot
29. In the left navigation pane, choose Snapshots.
Your snapshot is displayed. The status will first have a state of Pending, which
means that the snapshot is being created. It will then change to a state of Completed.
Note: Only used storage blocks are copied to snapshots, so empty blocks do not
occupy any snapshot storage space.
30. In your EC2 Instance Connect session, delete the file that you created on your
volume:
sudo rm /mnt/data-store/file.txt
31. Verify that the file has been deleted:
ls /mnt/data-store/
Your file has been deleted.
Task 6: Restore the Amazon EBS Snapshot
If you ever wish to retrieve data stored in a snapshot, you can restore the snapshot
to a new EBS volume.
Create a Volume Using Your Snapshot
32. In the EC2 console, select My Snapshot.
35. Choose Add tag then configure:
○ Key: Name
○ Value: RestoredVolume
○ Choose Create volume
36. Note: When restoring a snapshot to a new volume, you can also modify the
configuration, such as changing the volume type, size, or Availability Zone.
Attach the Restored Volume to Your EC2 Instance
36. In the left navigation pane, choose Volumes.
39. Choose the Instance field, then select the Lab instance that appears.
Note that the Device field is set to /dev/sdg. You will use this device identifier
in a later task.
40. Choose Attach volume
The volume state is now in-use.
Mount the Restored Volume
AIM:-To study and Implement Database as a Service on SQL/NOSQL databases like AWS
RDS, AZURE SQL/ MongoDB Lab/ Firebase.
Lab 5: Build Your DB Server and Interact With Your DB Using an App
This lab is designed to reinforce the concept of leveraging an AWS-managed database instance
for solving relational database needs.
Amazon Relational Database Service (Amazon RDS) makes it easy to set up, operate, and scale
a relational database in the cloud. It provides cost-efficient and resizable capacity while
managing time-consuming database administration tasks, which allows you to focus on your
applications and business. Amazon RDS provides you with six familiar database engines to
choose from: Amazon Aurora, Oracle, Microsoft SQL Server, PostgreSQL, MySQL and
MariaDB.
In this lab environment, access to AWS services and service actions might be restricted to the
ones that are needed to complete the lab instructions. You might encounter errors if you attempt
to access other services or perform actions beyond the ones that are described in this lab.
2. To connect to the AWS Management Console, choose the AWS link in the upper-left
corner.
○ A new browser tab opens and connects you to the console.
Tip: If a new browser tab does not open, a banner or icon is usually at the top of
your browser with the message that your browser is preventing the site from
opening pop-up windows. Choose the banner or icon, and then choose Allow
pop-ups.
3. Arrange the AWS Management Console tab so that it displays alongside these
instructions. Ideally, you will be able to see both browser tabs at the same time, to
make it easier to follow the lab steps.
Tip: The script that checks your work may only award points if you name resources and
set configurations as specified. In particular, values in these instructions that appear
in This Format should be entered exactly as documented (case-sensitive).
In this task, you will create a security group to allow your web server to access your RDS DB
instance. The security group will be used when you launch the database instance.
4. In the AWS Management Console, in the search box next to Services , search for and
select VPC.
In this task, you will create a DB subnet group that is used to tell RDS which subnets can be used
for the database. Each DB subnet group requires subnets in at least two Availability Zones.
10. In the AWS Management Console, in the search box next to Services , search for and
select RDS.
14. Expand the list of values under Availability Zones and select the first two zones:
us-east-1a and us-east-1b.
15. Expand the list of values under Subnets and select the subnets associated with the CIDR
ranges 10.0.1.0/24 and 10.0.3.0/24.
These subnets should now be shown in the Subnets selected table.
16. Choose Create
You will use this DB subnet group when creating the database in the next task.
Amazon RDS Multi-AZ deployments provide enhanced availability and durability for Database
(DB) instances, making them a natural fit for production database workloads. When you
provision a Multi-AZ DB instance, Amazon RDS automatically creates a primary DB instance
and synchronously replicates the data to a standby instance in a different Availability Zone (AZ).
26. Under Existing VPC security groups, from the dropdown list:
○ Choose DB Security Group.
○ Deselect default.
27. Under Monitoring expand Additional configuration.
○ Uncheck Enable Enhanced monitoring.
33. Scroll down to the Connectivity & security section and copy the Endpoint field.
It will look similar to: lab-db.xxxx.us-east-1.rds.amazonaws.com.
34. Paste the Endpoint value into a text editor. You will use it later in the lab.
lab-db.cremiigiuezw.us-east-1.rds.amazonaws.com
Port: 3306
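To show how the endpoint and port recorded above would be used, here is a sketch of connecting with a MySQL client. The command is echoed rather than executed, since it needs the mysql client, network access to the DB instance, and valid credentials; the admin username is an assumption, not a value from this lab.

```shell
# Endpoint and port copied from the RDS console (values recorded above).
ENDPOINT="lab-db.cremiigiuezw.us-east-1.rds.amazonaws.com"
PORT=3306
# Hypothetical username; use the master user configured for your DB instance.
echo "mysql -h $ENDPOINT -P $PORT -u admin -p"
```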
In this task, you will open a web application running on a web server that has been created for
you. You will configure it to use the database that you just created.
35. To discover the WebServer IP address, choose the AWS Details drop-down menu
above these instructions. Copy the IP address value.
36. Open a new web browser tab, paste the WebServer IP address and press Enter.
The web application will be displayed, showing information about the EC2 instance.
39. A message will appear explaining that the application is running a command to copy
information to the database. After a few seconds the application will display an Address
Book.
The Address Book application is using the RDS database to store information.
40. Test the web application by adding, editing and removing contacts.
The data is being persisted to the database and is automatically replicating to the second
Availability Zone.
Submitting your work
39. To record your progress, choose Submit at the top of these instructions.
41. To find detailed feedback about your work, choose Submission Report.
Tip: For any checks where you did not receive full points, there are sometimes helpful
details provided in the submission report.
Lab Complete
Conclusion:- Thus, we have successfully built a DB server and interacted with the
database using an app.
Experiment No.9
● Manage IAM Users and their access: You can create Users and assign them
individual security credentials (access keys, passwords, and multi-factor
authentication devices). You can manage permissions to control which
operations a User can perform.
● Manage IAM Roles and their permissions: An IAM Role is similar to a User, in
that it is an AWS identity with permission policies that determine what the identity
can and cannot do in AWS. However, instead of being uniquely associated with
one person, a Role is intended to be assumable by anyone who needs it.
● Manage federated users and their permissions: You can enable identity
federation to allow existing users in your enterprise to access the AWS
Management Console, to call AWS APIs and to access resources, without the
need to create an IAM User for each identity.
IAM (Identity and Access Management) in AWS is a crucial service that allows you to
manage access to AWS resources securely. It enables you to control who can access
your AWS resources (authentication) and what actions they can perform (authorization).
Here's an overview along with some key points and best practices:
Identity:
● IAM allows you to create and manage users, groups, and roles to
represent the people, services, and applications that interact with your
AWS resources.
● Users: Represent individual people and can be assigned security
credentials such as passwords or access keys.
● Groups: Collections of IAM users. Permissions can be assigned to groups
rather than individual users, which simplifies permissions management.
● Roles: AWS IAM roles are a way to delegate permissions to entities that
you trust. For example, you can create roles for applications running on EC2
instances, AWS Lambda functions, or for cross-account access.
Access Management:
● IAM provides fine-grained access control using policies. Policies are JSON
documents that define permissions.
● Policies can be attached to users, groups, or roles, and they specify the
actions allowed or denied on AWS resources.
● IAM policies follow the principle of least privilege, meaning users should
have only the permissions necessary to perform their tasks.
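As an illustration of such a policy document, the sketch below writes a minimal least-privilege policy (a hypothetical single-bucket read permission, not one used in this lab) to a scratch file and checks that it is valid JSON before it would be attached anywhere; it assumes python3 is available for validation.

```shell
# Minimal example policy: read-only listing of one (hypothetical) bucket.
POLICY=$(mktemp)
cat > "$POLICY" <<'EOF'
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": "s3:ListBucket",
      "Resource": "arn:aws:s3:::example-bucket"
    }
  ]
}
EOF
# Validate the document locally before attaching it to a user, group, or role.
python3 -m json.tool "$POLICY" > /dev/null && echo "valid JSON"
```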
Security Features:
● Multi-Factor Authentication (MFA): Adds an extra layer of security to user
sign-ins and API calls. Requires users to present two or more forms of
identification (factors).
● Access Keys: IAM users can have access keys associated with their
account for programmatic access to AWS services.
● Password Policies: IAM allows you to enforce password policies such as
minimum length, complexity requirements, and rotation policies.
Best Practices:
● Use IAM Roles for EC2 Instances: Instead of storing access keys directly
on EC2 instances, use IAM roles to securely grant permissions to EC2
instances.
● Regularly review and update permissions: Regularly review IAM policies
and permissions to ensure they align with the principle of least privilege.
● Enable MFA: Enable MFA for IAM users to add an extra layer of security to
account logins.
● Use IAM Conditions: Use IAM conditions to further refine access control
based on various factors such as IP address, time of day, etc.
Audit and Monitoring:
● IAM provides logging capabilities that allow you to monitor actions
performed with IAM entities.
● Use AWS CloudTrail to capture all API calls made by or on behalf of an
AWS account. This can help in audit and compliance efforts.
Integration with Other AWS Services:
● IAM integrates with many other AWS services such as Amazon S3, EC2,
RDS, etc., allowing you to control access to these services using IAM
policies.
4. In the search box to the right of Services, search for and choose IAM to open
the IAM console.
5. In the navigation pane on the left, choose Users.
The following IAM Users have been created for you:
○ user-1
○ user-2
○ user-3
13.Choose the plus (+) icon next to the AmazonEC2ReadOnlyAccess policy to view
the policy details.
Note: A policy defines what actions are allowed or denied for specific AWS
resources.
This policy is granting permission to List and Describe information about EC2,
Elastic Load Balancing, CloudWatch and Auto Scaling. This ability to view
resources, but not modify them, is ideal for assigning to a Support role.
The basic structure of the statements in an IAM Policy is:
○ Effect says whether to Allow or Deny the permissions.
○ Action specifies the API calls that can be made against an AWS Service
(eg cloudwatch:ListMetrics).
○ Resource defines the scope of entities covered by the policy rule (eg a
specific Amazon S3 bucket or Amazon EC2 instance, or * which means
any resource).
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": "ec2:Describe*",
      "Resource": "*"
    },
    {
      "Effect": "Allow",
      "Action": "elasticloadbalancing:Describe*",
      "Resource": "*"
    },
    {
      "Effect": "Allow",
      "Action": [
        "cloudwatch:ListMetrics",
        "cloudwatch:GetMetricStatistics",
        "cloudwatch:Describe*"
      ],
      "Resource": "*"
    },
    {
      "Effect": "Allow",
      "Action": "autoscaling:Describe*",
      "Resource": "*"
    }
  ]
}
14.Choose the minus icon (-) to hide the policy details.
15.In the navigation pane on the left, choose User groups.
16.Choose the S3-Support group link and then choose the Permissions tab.
The S3-Support group has the AmazonS3ReadOnlyAccess policy attached.
"Version": "2012-10-17",
"Statement": [
"Effect": "Allow",
"Action": [
"s3:Get*",
"s3:List*",
"s3:Describe*",
"s3-object-lambda:Get*",
"s3-object-lambda:List*"
],
"Resource": "*"
"Version": "2012-10-17",
"Statement": [
"Action": [
"ec2:Describe*",
"ec2:StartInstances",
"ec2:StopInstances"
],
"Resource": [
"*"
],
"Effect": "Allow"
}
]
Business Scenario
For the remainder of this lab, you will work with these Users and Groups to enable
permissions supporting the following business scenario:
Your company is growing its use of Amazon Web Services, and is using many Amazon
EC2 instances and a great deal of Amazon S3 storage. You wish to give access to new
staff depending upon their job function:
User In Group Permissions
You can ignore any "not authorized" errors that appear during this task. They are
caused by your lab account having limited permissions and will not impact your ability to
complete the lab.
28.Using similar steps to the ones above, add user-2 to the EC2-Support group.
user-2 should now be part of the EC2-Support group.
29.Using similar steps to the ones above, add user-3 to the EC2-Admin group.
user-3 should now be part of the EC2-Admin group.
30.In the navigation pane on the left, choose User groups.
Each Group should now have a 1 in the Users column, indicating the number of
Users in each Group.
If you do not have a 1 beside each group, revisit the instructions above to
ensure that each user is assigned to a User group, as shown in the table in the
Business Scenario section.
Task 3: Sign-In and Test Users
In this task, you will test the permissions of each IAM User.
32.Copy the Sign-in URL for IAM users in this account to a text editor.
https://058264095403.signin.aws.amazon.com/console
37.Paste the IAM users sign-in link into the address bar of your private browser
session and press Enter.
Next, you will sign-in as user-1, who has been hired as your Amazon S3 storage
support staff.
38.Sign-in with:
○ IAM user name: user-1
○ Password: Lab-Password1
39.In the search box to the right of Services, search for and choose S3 to open the
S3 console.
40.Choose the name of the bucket that exists in the account and browse the
contents.
Since your user is part of the S3-Support Group in IAM, they have permission to
view a list of Amazon S3 buckets and the contents.
Note: The bucket does not contain any objects.
Now, test whether they have access to Amazon EC2.
41.In the search box to the right of Services, search for and choose EC2 to open the EC2
console.
You will now sign-in as user-2, who has been hired as your Amazon EC2 support
person.
43.Sign user-1 out of the AWS Management Console by completing the following
actions:
○ At the top of the screen, choose user-1
○ Choose Sign Out
45.Paste the IAM users sign-in link into your private browser tab's address bar and
press Enter.
Note: This link should be in your text editor.
46.Sign-in with:
○ IAM user name: user-2
○ Password: Lab-Password2
47.In the search box to the right of Services, search for and choose EC2 to open
the EC2 console.
52.In the search box to the right of Services, search for and choose S3 to open the
S3 console.
You will see the message You don't have permissions to list buckets because
user-2 does not have permission to access Amazon S3.
You will now sign-in as user-3, who has been hired as your Amazon EC2
administrator.
53.Sign user-2 out of the AWS Management Console by completing the following
actions:
○ At the top of the screen, choose user-2
○ Choose Sign Out
55.Paste the IAM users sign-in link into the address bar of your private browser
tab and press Enter.
56.If the link is not in your clipboard, retrieve it from the text editor where
you stored it earlier.
57.Sign-in with:
○ IAM user name: user-3
○ Password: Lab-Password3
58.In the search box to the right of Services, search for and choose EC2 to open
the EC2 console.
Theory:
1. Containerization
Containerization is OS-level virtualization that creates multiple virtual units in user space,
known as containers. Containers share the host kernel but are isolated from each other
through private namespaces and resource-control mechanisms at the OS level. Container-based
virtualization provides a different level of abstraction and isolation when compared with
hypervisors: hypervisors virtualize hardware, which incurs overhead from the virtualized
hardware and virtual device drivers.
A full operating system (e.g., Linux or Windows) runs on top of this virtualized hardware in each
virtual machine instance. In contrast, containers implement isolation of processes at the
operating-system level, avoiding that overhead. Containers run on top of the shared
operating-system kernel of the underlying host machine, and one or more processes can
run within each container.
With containers you do not have to pre-allocate any RAM; it is allocated dynamically as
containers run, whereas with VMs you must pre-allocate memory before creating the
virtual machine. Containerization offers better resource utilization than VMs and a shorter
boot-up process. It is the next evolution in virtualization.
Containers can run virtually anywhere, greatly easing development and deployment: on Linux,
Windows, and Mac operating systems; on virtual machines or bare metal; on a developer's
machine or in data centers on-premises; and of course, in the public cloud. Containers virtualize
CPU, memory, storage, and network resources at the OS level, providing developers with a
sandboxed view of the OS logically isolated from other applications.
2. Docker
Docker is the containerization platform used to package your application and all its
dependencies together in the form of containers, so that your application works
seamlessly in any environment, whether development, testing, or production. Docker is a tool
designed to make it easier to create, deploy, and run applications by using containers.
Docker is the world's leading software container platform. It was launched in 2013 by a company
called dotCloud, Inc., which was later renamed Docker, Inc. It is written in the Go language. In
the few years since Docker was launched, communities have already shifted to it from
VMs. Docker is designed to benefit both developers and system administrators, making it a part
of many DevOps toolchains. Developers can write code without worrying about the testing and
production environment.
3. Docker Architecture
Docker architecture consists of Docker client, Docker Daemon running on Docker Host, and
Docker Hub repository. Docker has client-server architecture in which the client communicates
with the Docker Daemon running on the Docker Host using a combination of REST APIs, Socket
IO, and TCP. If we must build the Docker image, then we use the client to execute the build
command to Docker Daemon. Then Docker Daemon builds an image based on given inputs and
saves it into the Docker registry. If you don’t want to create an image, then just execute the pull
command from the client and then Docker Daemon will pull the image from the Docker Hub
finally if we want to run the image then execute the run command from the client which will create
the container.
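The build/pull/run flow described above can be sketched with the Docker CLI (the image and container names here are placeholders, not part of the original lab):

```shell
# Build an image from a Dockerfile in the current directory;
# the client sends the build context to the Docker daemon
docker build -t my-app:latest .

# Alternatively, pull a prebuilt image from Docker Hub
docker pull nginx:latest

# Run the image; the daemon creates and starts a container from it
docker run -d --name my-nginx -p 8080:80 nginx:latest
```

Each command is issued by the client; the daemon does the actual building, pulling, and container creation.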
4. Configuring MySQL in Docker
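The original MySQL listing did not survive extraction. As a minimal sketch, the official `mysql` image from Docker Hub can be run like this (the container name and password are placeholders):

```shell
# Start a MySQL container in the background with a root password
docker run --name lab-mysql -e MYSQL_ROOT_PASSWORD=my-secret-pw -d mysql:8

# Open a mysql client shell inside the running container
docker exec -it lab-mysql mysql -uroot -p
```

The `-e MYSQL_ROOT_PASSWORD` environment variable is required by the official image on first startup.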
5. Configuring Python in Docker
(The Pipfile, app.py, and Dockerfile listings appeared here as images in the original document.)
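Since the original listings are not recoverable, here is a minimal sketch of how a Python app might be containerized. The file names (app.py, Dockerfile) follow the labels above, but the contents are assumptions:

```dockerfile
# Dockerfile -- hypothetical minimal image for a Python app
FROM python:3.11-slim
WORKDIR /app
COPY . .
CMD ["python", "app.py"]
```

```python
# app.py -- placeholder application; the real app's contents are unknown
print("Hello from inside a Docker container!")
```

Build and run with `docker build -t py-demo .` followed by `docker run --rm py-demo`.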
Experiment 11
What is Kubernetes?
Kubernetes automates operational tasks of container management and includes built-in commands for deploying
applications, rolling out changes to your applications, scaling your applications up and down to fit changing
needs, monitoring your applications, and more—making it easier to manage applications.
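These operational tasks map directly onto kubectl commands. A brief sketch (the deployment name and images are placeholders):

```shell
# Deploy an application
kubectl create deployment web --image=nginx:1.14.2

# Roll out a change (update the container image)
kubectl set image deployment/web nginx=nginx:1.16.1

# Scale the application up to fit changing needs
kubectl scale deployment/web --replicas=5

# Monitor the application
kubectl get pods
kubectl rollout status deployment/web
```

With `kubectl create deployment`, the container name defaults to the image name without its tag, here `nginx`, which is why `set image` refers to it by that name.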
Components of Kubernetes:
• Kubernetes Control Plane. The Kubernetes control plane is the set of tools that manages clusters and the
workloads running on them.
• Kubernetes Nodes.
• Optional Kubernetes Extensions.
• The API Server.
• The Kubernetes Scheduler.
• The Kubernetes Controller.
• Etcd.
• Kubernetes Pods.
What is Minikube?
Minikube is a tool that sets up a Kubernetes environment on a local PC or laptop. It’s technically a Kubernetes
distribution, but because it addresses a different type of use case than most other distributions (like Rancher,
OpenShift, and EKS), it’s more common to hear folks refer to it as a tool rather than a distribution.
PROCEDURE:
1. Installing Kubectl
1. Using PowerShell to compare CertUtil's output to the downloaded checksum
file:
$($(CertUtil -hashfile .\kubectl.exe SHA256)[1] -replace " ", "") -eq $(type .\kubectl.exe.sha256)
2. Installing minikube
$oldPath = [Environment]::GetEnvironmentVariable('Path',
[EnvironmentVariableTarget]::Machine)
if ($oldPath.Split(';') -inotcontains 'C:\minikube'){ `
[Environment]::SetEnvironmentVariable('Path', $('{0};C:\minikube' -f $oldPath),
[EnvironmentVariableTarget]::Machine) `
}
3. Starting our Cluster
1. Initializing minikube
minikube start
2. Checking the cluster status
minikube status
3. Listing the cluster nodes
kubectl get nodes
pods.yml
apiVersion: apps/v1
kind: Deployment
metadata:
name: nginx-deployment
labels:
app: nginx
spec:
replicas: 3
selector:
matchLabels:
app: nginx
template:
metadata:
labels:
app: nginx
spec:
containers:
- name: nginx
image: nginx:1.14.2
ports:
- containerPort: 80
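The Deployment manifest above can then be applied to the Minikube cluster:

```shell
# Create the Deployment described in pods.yml
kubectl apply -f pods.yml

# Verify that the three nginx replicas come up
kubectl get deployments
kubectl get pods -l app=nginx
```

The `-l app=nginx` label selector matches the labels declared in the manifest's pod template.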