
INTRODUCTION TO CLOUD COMPUTING

Name : Rishit Ravichandran


PRN : 121A1088
Sem : 6
Branch : Computer Engineering

Cloud computing is a model that provides ubiquitous, convenient, on-demand network
access to a shared pool of configurable computing resources. This model is
characterized by five essential characteristics, three service models, and four deployment
models.

Essential Characteristics:

1) On-demand self-service: Consumers can provision computing resources automatically
without human interaction with the service provider.

2) Broad network access: Capabilities are available over the network and
accessible through standard mechanisms, supporting diverse client platforms.

3) Resource pooling: Providers pool computing resources in a multi-tenant model,
dynamically assigning and reallocating physical and virtual resources based on
consumer demand.

4) Rapid elasticity: Capabilities can be rapidly provisioned and released, scaling
outward and inward based on demand. The illusion of unlimited and instant
availability is presented to consumers.

5) Measured service: Cloud systems automatically control and optimize resource
use through metering capabilities, typically on a pay-per-use basis. Resource
usage is monitored, controlled, audited, and reported for transparency.
Service Models:

1) Software as a Service (SaaS): Consumers use provider applications on a cloud
infrastructure, accessible via thin client interfaces like web browsers. Consumers
do not manage the underlying infrastructure.

2) Platform as a Service (PaaS): Consumers deploy consumer-created or
acquired applications using provider-supported programming languages,
libraries, services, and tools. Consumers do not manage the underlying infrastructure
but have control over deployed applications.

3) Infrastructure as a Service (IaaS): Consumers provision fundamental
computing resources and deploy arbitrary software. Consumers have control
over operating systems, storage, and deployed applications but not the
underlying cloud infrastructure.
Deployment Models:

1) Private cloud: Cloud infrastructure is exclusively provisioned for a single
organization, owned and operated by the organization or a third party, on or off
premises.

2) Community cloud: Cloud infrastructure is provisioned for exclusive use by a
specific community of consumers with shared concerns. It may be owned,
managed, and operated by one or more organizations, on or off premises.

3) Public cloud: Cloud infrastructure is provisioned for open use by the general
public, owned, managed, and operated by a business, academic, or government
organization, on the premises of the cloud provider.

4) Hybrid cloud: A composition of two or more distinct cloud infrastructures
(private, community, or public) bound together by standardized or proprietary
technology for data and application portability.
Cloud Computing Architecture :
The cloud architecture is divided into two parts:

1. Frontend

2. Backend

Architecture of Cloud Computing

The architecture of cloud computing combines SOA (Service-Oriented
Architecture) and EDA (Event-Driven Architecture). Client infrastructure,
application, service, runtime cloud, storage, infrastructure, management, and
security are the components of cloud computing architecture.
1. Frontend :
The frontend of the cloud architecture refers to the client side of the cloud computing
system. It contains all the user interfaces and applications that the client
uses to access the cloud computing services/resources, for
example, a web browser used to access the cloud platform.

● Client Infrastructure – Client Infrastructure is a part of the frontend

component. It contains the applications and user interfaces which are

required to access the cloud platform.

● In other words, it provides a GUI( Graphical User Interface ) to interact

with the cloud.

2. Backend :
The backend refers to the cloud itself, which is managed by the service provider. It
contains and manages the resources and provides security
mechanisms. It also includes large-scale storage, virtual applications,
virtual machines, traffic control mechanisms, deployment models, etc.

1. Application –
The application in the backend refers to the software or platform that the
client accesses. It provides the service in the backend as per the
client's requirements.
2. Service –
The service in the backend refers to the three major types of cloud
services: SaaS, PaaS, and IaaS. It also manages which type of service
the user accesses.
3. Runtime Cloud-
Runtime cloud in backend provides the execution and Runtime
platform/environment to the Virtual machine.
4. Storage –
Storage in the backend provides flexible and scalable storage service
and management of stored data.
5. Infrastructure –
Cloud Infrastructure in backend refers to the hardware and software
components of cloud like it includes servers, storage, network devices,
virtualization software etc.
6. Management –
Management in backend refers to management of backend
components like application, service, runtime cloud, storage,
infrastructure, and other security mechanisms etc.
7. Security –
Security in the backend refers to the implementation of different security
mechanisms that secure cloud resources, systems, files,
and infrastructure for end users.
8. Internet –
Internet connection acts as the medium or a bridge between frontend
and backend and establishes the interaction and communication
between frontend and backend.
9. Database – The database in the backend stores structured data, using
SQL and NoSQL databases. Examples of database services include
Amazon RDS, Microsoft Azure SQL Database, and Google Cloud SQL.
10. Networking – Networking in the backend refers to services that provide
networking infrastructure for applications in the cloud, such as load
balancing, DNS, and virtual private networks.
11. Analytics – Analytics in the backend refers to services that provide
analytics capabilities for data in the cloud, such as data warehousing,
business intelligence, and machine learning.

Benefits of Cloud Computing Architecture :


● Makes the overall cloud computing system simpler.

● Improves data processing capabilities.

● Helps in providing high security.

● Makes the system more modular.

● Results in better disaster recovery.

● Gives good user accessibility.

● Reduces IT operating costs.

● Provides a high level of reliability.

● Improves scalability.

Conclusion:-

Cloud architecture establishes a dynamic and secure framework, enabling


organizations to efficiently deploy, scale, and manage resources in the ever-evolving
landscape of cloud computing.
Experiment No.2
Name : Rishit Ravichandran

PRN : 121A1088

Batch : D-1

Theory:-

Virtualization is the process of creating a virtual (rather than actual) version of


something, including but not limited to a virtual machine, an operating system, a storage
device, or a network resource. This virtual version behaves as if it were a separate
physical entity, allowing multiple virtual instances to coexist and operate independently
on the same physical hardware.

VirtualBox is a popular open-source virtualization software that allows you to create and
manage virtual machines on your computer. Here are the basic steps to create a virtual
machine using VirtualBox:

Download and Install VirtualBox:


● Go to the official VirtualBox website (https://www.virtualbox.org/) and
download the appropriate installer for your operating system.
● Run the installer and follow the on-screen instructions to install VirtualBox
on your computer.
Download an Operating System Image:
● Obtain an ISO image of the operating system you want to install on your
virtual machine. You can download ISO images from the official website
of the respective operating system (e.g., Ubuntu, Windows).
Create a New Virtual Machine:
● Open VirtualBox and click on the "New" button in the toolbar.
● Enter a name for your virtual machine and select the type and version of
the operating system you'll be installing.
● Allocate memory (RAM) to your virtual machine. Ensure that you allocate
an appropriate amount based on the requirements of the guest operating
system.
● Create a virtual hard disk for your virtual machine. You can choose to
create a new virtual hard disk or use an existing one.
Configure Virtual Machine Settings:
● Select your newly created virtual machine from the VirtualBox Manager
window.
● Click on the "Settings" button in the toolbar.
● In the Settings window, you can configure various options such as
processor settings, display settings, storage settings, network
settings, etc., based on your requirements.
Attach the Operating System ISO Image:
● In the Settings window of your virtual machine, go to the "Storage" tab.
● Under the "Controller: IDE" section (or whichever storage controller you
prefer), click on the empty disk icon next to "Controller: IDE" to add a new
optical drive.
● Select "Choose a disk file" and navigate to the location where you saved
the ISO image of your operating system. Select the ISO image file.
Install the Guest Operating System:
● Start your virtual machine by clicking on the "Start" button in the
VirtualBox Manager window.
● The virtual machine will boot from the attached ISO image, and you can
follow the on-screen instructions to install the guest operating system just
like you would on a physical machine.
Install VirtualBox Guest Additions (Optional):
● After installing the guest operating system, you may want to install
VirtualBox Guest Additions to enable additional features such as
seamless mode, shared folders, and better video support. You can find the
option to install Guest Additions in the Devices menu of your virtual
machine's window.
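For reference, the same virtual machine can also be created from the command line with the VBoxManage tool that ships with VirtualBox. This is only an illustrative sketch: the VM name (UbuntuVM), memory size, disk size, and ISO path are placeholder values that you would replace with your own.

# Create and register a new VM (placeholder name "UbuntuVM")
VBoxManage createvm --name "UbuntuVM" --ostype Ubuntu_64 --register

# Allocate memory and CPUs to the VM
VBoxManage modifyvm "UbuntuVM" --memory 2048 --cpus 2

# Create a virtual hard disk and attach it along with the installer ISO
VBoxManage createmedium disk --filename ~/VMs/UbuntuVM.vdi --size 20000
VBoxManage storagectl "UbuntuVM" --name "SATA" --add sata
VBoxManage storageattach "UbuntuVM" --storagectl "SATA" --port 0 --device 0 --type hdd --medium ~/VMs/UbuntuVM.vdi
VBoxManage storageattach "UbuntuVM" --storagectl "SATA" --port 1 --device 0 --type dvddrive --medium ~/Downloads/ubuntu.iso

# Boot the VM to begin the guest OS installation
VBoxManage startvm "UbuntuVM"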

Virtual machines (VMs)


Virtual machines (VMs) are virtual environments that simulate a physical computer in
software form. They normally comprise several files containing the VM’s
configuration, the storage for the virtual hard drive, and some snapshots of the VM
that preserve its state at a particular point in time.
Screenshots:-

1. Writing a C Code in Text Editor.

2. Compiling and Executing a C Code.


Conclusion:-

The principle of virtualization was explored, leading to the creation of a virtual instance running
Ubuntu. Within this virtual environment, code execution was conducted.
Name : Rishit Ravichandran
PRN : 121A1088
Batch : D1
Experiment No. 3

Aim :- To study and implement Hosted Virtualization using Virtual Box

Theory:-

Virtualization is the process of creating a virtual (rather than actual) version of


something, including but not limited to a virtual machine, an operating system, a storage
device, or a network resource. This virtual version behaves as if it were a separate
physical entity, allowing multiple virtual instances to coexist and operate independently
on the same physical hardware.

VirtualBox is a popular open-source virtualization software that allows you to create and
manage virtual machines on your computer. Here are the basic steps to create a virtual
machine using VirtualBox:

Download and Install VirtualBox:


● Go to the official VirtualBox website (https://www.virtualbox.org/) and
download the appropriate installer for your operating system.
● Run the installer and follow the on-screen instructions to install VirtualBox
on your computer.
Download an Operating System Image:
● Obtain an ISO image of the operating system you want to install on your
virtual machine. You can download ISO images from the official website
of the respective operating system (e.g., Ubuntu, Windows).
Create a New Virtual Machine:
● Open VirtualBox and click on the "New" button in the toolbar.
● Enter a name for your virtual machine and select the type and version of
the operating system you'll be installing.
● Allocate memory (RAM) to your virtual machine. Ensure that you allocate
an appropriate amount based on the requirements of the guest operating
system.
● Create a virtual hard disk for your virtual machine. You can choose to
create a new virtual hard disk or use an existing one.
Configure Virtual Machine Settings:
● Select your newly created virtual machine from the VirtualBox Manager
window.
● Click on the "Settings" button in the toolbar.
● In the Settings window, you can configure various options such as
processor settings, display settings, storage settings, network
settings, etc., based on your requirements.
Attach the Operating System ISO Image:
● In the Settings window of your virtual machine, go to the "Storage" tab.
● Under the "Controller: IDE" section (or whichever storage controller you
prefer), click on the empty disk icon next to "Controller: IDE" to add a new
optical drive.
● Select "Choose a disk file" and navigate to the location where you saved
the ISO image of your operating system. Select the ISO image file.
Install the Guest Operating System:
● Start your virtual machine by clicking on the "Start" button in the
VirtualBox Manager window.
● The virtual machine will boot from the attached ISO image, and you can
follow the on-screen instructions to install the guest operating system just
like you would on a physical machine.
Install VirtualBox Guest Additions (Optional):
● After installing the guest operating system, you may want to install
VirtualBox Guest Additions to enable additional features such as
seamless mode, shared folders, and better video support. You can find the
option to install Guest Additions in the Devices menu of your virtual
machine's window.

Virtual machines (VMs)


Virtual machines (VMs) are virtual environments that simulate a physical computer in
software form. They normally comprise several files containing the VM’s
configuration, the storage for the virtual hard drive, and some snapshots of the VM
that preserve its state at a particular point in time.
Screenshots:-

1. Writing a C Code in Text Editor.

2. Compiling and Executing a C Code.


Conclusion:- The principle of virtualization was explored, leading to the creation of a virtual
instance running Ubuntu. Within this virtual environment, code execution was conducted.
Introduction to Amazon EC2
Name : Rishit Ravichandran

PRN : 121A1088

Batch : D-1

Lab overview and objectives

This lab provides you with a basic overview of launching, resizing, managing, and
monitoring an Amazon EC2 instance.

Amazon Elastic Compute Cloud (Amazon EC2) is a web service that provides
resizable compute capacity in the cloud. It is designed to make web-scale cloud
computing easier for developers.

Amazon EC2's simple web service interface allows you to obtain and configure capacity
with minimal friction. It provides you with complete control of your computing resources
and lets you run on Amazon's proven computing environment. Amazon EC2 reduces
the time required to obtain and boot new server instances to minutes, allowing you to
quickly scale capacity, both up and down, as your computing requirements change.

Amazon EC2 changes the economics of computing by allowing you to pay only for
capacity that you actually use. Amazon EC2 provides developers the tools to build
failure resilient applications and isolate themselves from common failure scenarios.

After completing this lab, you should be able to do the following:

● Launch a web server with termination protection enabled


● Monitor Your EC2 instance
● Modify the security group that your web server is using to allow HTTP access
● Resize your Amazon EC2 instance to scale and enable stop protection
● Explore EC2 limits
● Test stop protection
● Stop your EC2 instance
Task 1: Launch Your Amazon EC2 Instance
In this task, you will launch an Amazon EC2 instance with termination protection and stop
protection. Termination protection prevents you from accidentally terminating the EC2 instance
and stop protection prevents you from accidentally stopping the EC2 instance. You will also
specify a User Data script when you launch the instance that will deploy a simple web server.

4. In the AWS Management Console choose Services, choose Compute and then
choose EC2.
Note: Verify that your EC2 console is currently managing resources in the N. Virginia
(us-east-1) region. You can verify this by looking at the drop down menu at the top of
the screen, to the left of your username. If it does not already indicate N. Virginia,
choose the N. Virginia region from the region menu before proceeding to the next step.

5. Choose the Launch instance menu and select Launch instance.

Step 1: Name and tags


6. Give the instance the name Web Server.
The Name you give this instance will be stored as a tag. Tags enable you to
categorize your AWS resources in different ways, for example, by purpose,
owner, or environment. This is useful when you have many resources of the
same type — you can quickly identify a specific resource based on the tags you
have assigned to it. Each tag consists of a Key and a Value, both of which you
define. You can define multiple tags to associate with the instance if you want to.
In this case, the tag that will be created will consist of a key called Name with a
value of Web Server

Step 2: Application and OS Images (Amazon


Machine Image)
7. In the list of available Quick Start AMIs, keep the default Amazon Linux AMI
selected.

8. Also keep the default Amazon Linux 2023 AMI selected.


An Amazon Machine Image (AMI) provides the information required to launch
an instance, which is a virtual server in the cloud. An AMI includes:
○ A template for the root volume for the instance (for example, an operating
system or an application server with applications)
○ Launch permissions that control which AWS accounts can use the AMI to
launch instances
○ A block device mapping that specifies the volumes to attach to the
instance when it is launched
9. The Quick Start list contains the most commonly-used AMIs. You can also create
your own AMI or select an AMI from the AWS Marketplace, an online store where
you can sell or buy software that runs on AWS.

Step 3: Instance type


9. In the Instance type panel, keep the default t2.micro selected.
Amazon EC2 provides a wide selection of instance types optimized to fit
different use cases. Instance types comprise varying combinations of CPU,
memory, storage, and networking capacity and give you the flexibility to choose
the appropriate mix of resources for your applications. Each instance type
includes one or more instance sizes, allowing you to scale your resources to the
requirements of your target workload.
The t2.micro instance type has 1 virtual CPU and 1 GiB of memory.
Note: You may be restricted from using other instance types in this
lab.
Step 4: Key pair (login)
10.For Key pair name - required, choose vockey.
Amazon EC2 uses public–key cryptography to encrypt and decrypt login
information. To ensure you will be able to log in to the guest OS of the instance
you create, you identify an existing key pair or create a new key pair when
launching the instance. Amazon EC2 then installs the key on the guest OS when
the instance is launched. That way, when you attempt to login to the instance and
you provide the private key, you will be authorized to connect to the instance.
Note: In this lab you will not actually use the key pair you have specified to log
into your instance.

Step 5: Network settings


11. Next to Network settings, choose Edit.

12.For VPC, select Lab VPC.


The Lab VPC was created using an AWS CloudFormation template during the
setup process of your lab. This VPC includes two public subnets in two different
Availability Zones.
Note: Keep the default subnet PublicSubnet1. This is the subnet in which the
instance will run. Notice also that by default, the instance will be assigned a
public IP address.

13.Under Firewall (security groups), choose Create security group and


configure:
○ Security group name: Web Server security group
○ Description: Security group for my web server
A security group acts as a virtual firewall that controls the traffic for one or
more instances. When you launch an instance, you associate one or more
security groups with the instance. You add rules to each security group
that allow traffic to or from its associated instances. You can modify the
rules for a security group at any time; the new rules are automatically
applied to all instances that are associated with the security group.
○ Under Inbound security group rules, notice that one rule exists.
Remove this rule.

Step 6: Configure storage


14.In the Configure storage section, keep the default settings.
Amazon EC2 stores data on a network-attached virtual disk called Elastic Block
Store.
You will launch the Amazon EC2 instance using a default 8 GiB disk volume. This
will be your root volume (also known as a 'boot' volume).

Step 7: Advanced details


15.Expand Advanced details.

16.For Termination protection, select Enable.


When an Amazon EC2 instance is no longer required, it can be terminated,
which means that the instance is deleted and its resources are released. A
terminated instance cannot be accessed again and the data that was on it cannot
be recovered. If you want to prevent the instance from being accidentally
terminated, you can enable termination protection for the instance, which
prevents it from being terminated as long as this setting remains enabled.

15.Scroll to the bottom of the page and then copy and paste the code shown below
into the User data box:

#!/bin/bash
dnf install -y httpd
systemctl enable httpd
systemctl start httpd
echo '<html><h1>Hello From Your Web Server!</h1></html>' > /var/www/html/index.html
17. When you launch an instance, you can pass user data to the instance that can
be used to perform automated installation and configuration tasks after the
instance starts.
Your instance is running Amazon Linux 2023. The shell script you have specified
will run as the root guest OS user when the instance starts. The script will:
○ Install an Apache web server (httpd)
○ Configure the web server to automatically start on boot
○ Run the web server once it has finished installing
○ Create a simple web page
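For reference, a comparable launch could also be scripted with the AWS CLI instead of the console. This is only a sketch: the AMI ID, security group ID, and subnet ID are placeholders, and it assumes the user data script above has been saved locally as user-data.sh.

# Launch a t2.micro web server with termination protection and user data
aws ec2 run-instances \
    --image-id ami-xxxxxxxx \
    --instance-type t2.micro \
    --key-name vockey \
    --security-group-ids sg-xxxxxxxx \
    --subnet-id subnet-xxxxxxxx \
    --user-data file://user-data.sh \
    --disable-api-termination \
    --tag-specifications 'ResourceType=instance,Tags=[{Key=Name,Value=Web Server}]'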
Step 8: Launch the instance
18.At the bottom of the Summary panel choose Launch instance
You will see a Success message.
19.Choose View all instances
a. In the Instances list, select Web Server.
b. Review the information displayed in the Details tab. It includes information
about the instance type, security settings and network settings.
The instance is assigned a Public IPv4 DNS that you can use to contact
the instance from the Internet.
To view more information, drag the window divider upwards.
At first, the instance will appear in a Pending state, which means it is
being launched. It will then change to Initializing, and finally to Running.
21.Wait for your instance to display the following:
a. Instance State: Running
b. Status Checks: 2/2 checks passed

Task 2: Monitor Your Instance


Monitoring is an important part of maintaining the reliability, availability, and performance
of your Amazon Elastic Compute Cloud (Amazon EC2) instances and your AWS
solutions.

21.Choose the Status checks tab.


With instance status monitoring, you can quickly determine whether Amazon
EC2 has detected any problems that might prevent your instances from
running
applications. Amazon EC2 performs automated checks on every running EC2
instance to identify hardware and software issues.
Notice that both the System reachability and Instance reachability checks
have passed.

22.Choose the Monitoring tab.


This tab displays Amazon CloudWatch metrics for your instance. Currently, there
are not many metrics to display because the instance was recently launched.
You can choose the three dots icon in any graph and select Enlarge to see an
expanded view of the chosen metric.
Amazon EC2 sends metrics to Amazon CloudWatch for your EC2 instances.
Basic (five-minute) monitoring is enabled by default. You can also enable detailed
(one-minute) monitoring.
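If you prefer the command line, detailed monitoring can also be toggled with the AWS CLI; the instance ID below is a placeholder.

# Enable detailed (one-minute) CloudWatch monitoring for an instance
aws ec2 monitor-instances --instance-ids i-1234567890abcdef0

# Switch back to basic (five-minute) monitoring
aws ec2 unmonitor-instances --instance-ids i-1234567890abcdef0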
21.In the Actions menu towards the top of the console, select Monitor and
troubleshoot > Get system log.
The System Log displays the console output of the instance, which is a valuable
tool for problem diagnosis. It is especially useful for troubleshooting kernel
problems and service configuration issues that could cause an instance to
terminate or become unreachable before its SSH daemon can be started. If you
do not see a system log, wait a few minutes and then try again.

22.Scroll through the output and note that the httpd package was installed from the
user data that you added when you created the instance.

Task 3: Update Your Security


Group and Access the Web
Server
When you launched the EC2 instance, you provided a script that installed a web server
and created a simple web page. In this task, you will access content from the web
server.

28.Ensure Web Server is still selected. Choose the Details tab.

29.Copy the Public IPv4 address of your instance to your clipboard.

30.Open a new tab in your web browser, paste the IP address you just copied, then
press Enter.
Question: Are you able to access your web server? Why not?
You are not currently able to access your web server because the security group
is not permitting inbound traffic on port 80, which is used for HTTP web requests.
This is a demonstration of using a security group as a firewall to restrict the
network traffic that is allowed in and out of an instance.
To correct this, you will now update the security group to permit web traffic on
port 80.

31.Keep the browser tab open, but return to the EC2 Console tab.

32.In the left navigation pane, choose Security Groups.

33.Select Web Server security group.

34.Choose the Inbound rules tab.


The security group currently has no inbound rules.

35.Choose Edit inbound rules, select Add rule and then configure:
○ Type: HTTP
○ Source: Anywhere-IPv4
○ Choose Save rules
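The equivalent rule can also be added from the AWS CLI; the security group ID below is a placeholder for the ID of the Web Server security group.

# Allow inbound HTTP (port 80) from any IPv4 address
aws ec2 authorize-security-group-ingress \
    --group-id sg-xxxxxxxx \
    --protocol tcp \
    --port 80 \
    --cidr 0.0.0.0/0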

36.Return to the web server tab that you previously opened and refresh the page.
You should see the message Hello From Your Web Server!

Congratulations! You have successfully modified your security group to permit


HTTP traffic into your Amazon EC2 Instance.
Task 4: Resize Your Instance:
Instance Type and EBS Volume
As your needs change, you might find that your instance is over-utilized (too small) or
under-utilized (too large). If so, you can change the instance type. For example, if a
t2.micro instance is too small for its workload, you can change it to an m5.medium
instance. Similarly, you can change the size of a disk.

Stop Your Instance


Before you can resize an instance, you must stop it.

When you stop an instance, it is shut down. There is no runtime charge for a stopped
EC2 instance, but the storage charge for attached Amazon EBS volumes remains.

37.On the EC2 Management Console, in the left navigation pane, choose
Instances and then select the Web Server instance.

38.In the Instance state menu, select Stop instance.


39.Choose Stop
Your instance will perform a normal shutdown and then will stop running.

40.Wait for the Instance state to display: Stopped.

Change The Instance Type and enable stop


protection
41.Select the Web Server instance, then in the Actions menu, select Instance
settings > Change instance type, then configure:
○ Instance Type: t2.small
○ Choose Apply
When the instance is started again it will run as a t2.small, which has twice
as much memory as a t2.micro instance. NOTE: You may be restricted
from using other instance types in this lab.
43.Select the Web Server instance, then in the Actions menu, select Instance
settings > Change stop protection. Select Enable and then Save the change.
When you stop an instance, the instance shuts down. When you later start the
instance, it is typically migrated to a new underlying host computer and assigned
a new public IPv4 address. An instance retains its assigned private IPv4 address.
When you stop an instance, it is not deleted. Any EBS volumes and the data on
those volumes are retained.

Resize the EBS Volume


43.With the Web Server instance still selected, choose the Storage tab, select the
name of the Volume ID, then select the checkbox next to the volume that
displays.

44.In the Actions menu, select Modify volume.


The disk volume currently has a size of 8 GiB. You will now increase the size of
this disk.

45.Change the size to: 10 NOTE: You may be restricted from creating Amazon EBS
volumes larger than 10 GB in this lab.

46.Choose Modify

47.Choose Modify again to confirm and increase the size of the volume.

Start the Resized Instance


You will now start the instance again, which will now have more memory and more disk
space.

48.In left navigation pane, choose Instances.

49.Select the Web Server instance.

50.In the Instance state menu, select Start instance.


Congratulations! You have successfully resized your Amazon EC2 Instance. In
this task you changed your instance type from t2.micro to t2.small. You also
modified your root disk volume from 8 GiB to 10 GiB.
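The same resize operations can be performed with the AWS CLI once the instance is stopped; the instance ID and volume ID shown here are placeholders.

# Stop the instance, change its type, then start it again
aws ec2 stop-instances --instance-ids i-1234567890abcdef0
aws ec2 wait instance-stopped --instance-ids i-1234567890abcdef0
aws ec2 modify-instance-attribute --instance-id i-1234567890abcdef0 --instance-type "{\"Value\": \"t2.small\"}"
aws ec2 start-instances --instance-ids i-1234567890abcdef0

# Grow the root EBS volume from 8 GiB to 10 GiB
aws ec2 modify-volume --volume-id vol-0abcd1234efgh5678 --size 10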
Task 5: Explore EC2 Limits
Amazon EC2 provides different resources that you can use. These resources include
images, instances, volumes, and snapshots. When you create an AWS account, there
are default limits on these resources on a per-region basis.

51.In the AWS Management Console, in the search box next to Services, search for
and choose Service Quotas

52.Choose AWS services from the navigation menu and then in the AWS services
Find services search bar, search for ec2 and choose Amazon Elastic
Compute Cloud (Amazon EC2).

53.In the Find quotas search bar, search for running on-demand, but do not make
a selection. Instead, observe the filtered list of service quotas that match the
criteria.
Notice that there are limits on the number and types of instances that can run in
a region. For example, there is a limit on the number of Running On-Demand
Standard... instances that you can launch in this region. When launching
instances, the request must not cause your usage to exceed the instance limits
currently defined in that region.
If you are the AWS account owner, you can request an increase for many
of these limits.

Task 6: Test Stop Protection


You can stop your instance when you do not need to access but you would still like to
retain it. In this task, you will learn how to use stop protection.

54.In the AWS Management Console, in the search box next to Services, search for
and choose EC2 to return to the EC2 console.
55.In left navigation pane, choose Instances.

56.Select the Web Server instance and in the Instance state menu, select Stop
instance.

57.Then choose Stop


Note that there is a message that says: Failed to stop the instance i-1234567xxx.
The instance 'i-1234567xxx' may not be stopped. Modify its 'disableApiStop'
instance attribute and try again.
This shows that the stop protection that you enabled earlier in this lab is now
providing a safeguard to prevent the accidental stopping of an instance. If
you
really want to stop the instance, you will need to disable the stop protection.

58.In the Actions menu, select Instance settings > Change stop protection.

59.Remove the check next to Enable.

60.Choose Save
You can now stop the instance.

61.Select the Web Server instance again and in the Instance state menu, select
Stop instance.

62.Choose Stop
Congratulations! You have successfully tested stop protection and stopped
your instance.
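The stop protection attribute mentioned in the error message (disableApiStop) can also be changed from the AWS CLI; the instance ID is a placeholder.

# Disable stop protection so the instance can be stopped
aws ec2 modify-instance-attribute --instance-id i-1234567890abcdef0 --no-disable-api-stop

# Re-enable stop protection later if needed
aws ec2 modify-instance-attribute --instance-id i-1234567890abcdef0 --disable-api-stop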

Submitting your work


Name : Rishit Ravichandran
PRN :121A1088
Batch : D-1

Experiment No.5

Task 1: Access the Elastic Beanstalk environment

4. In the console, in the search box to the right of Services, search for and
choose Elastic Beanstalk.
A page titled Environments should open, and it should show a table that lists
the details for an existing Elastic Beanstalk application.
Note: If the status in the Health column is not Ok, it has not finished starting yet.
Wait a few moments, and it should change to Ok.

5. Under the Environment name column, choose the name of the environment.
The Dashboard page for your Elastic Beanstalk environment opens.

6. Notice that the page shows that the health of your application is Ok.
The Elastic Beanstalk environment is ready to host an application. However, it
does not yet have running code.

7. Test access to the environment.


○ Near the top of the page, choose the Domain link (the URL ends in
elasticbeanstalk.com).
When you choose the URL, a new browser tab opens. However, you
should see that it displays an HTTP Status 404 - Not Found message.
This behavior is expected because this application server doesn't have an
application running on it yet.


○ Return to the Elastic Beanstalk console.
In the next step, you will deploy code in your Elastic Beanstalk
environment.

Task 2: Deploy a sample application to Elastic Beanstalk


To download a sample application, choose this link:

8. https://docs.aws.amazon.com/elasticbeanstalk/latest/dg/samples/tomcat.zip

9. Back in the Elastic Beanstalk Dashboard, choose Upload and Deploy.

10.Choose Choose File, then navigate to and open the tomcat.zip file that you just
downloaded.

11. Choose Deploy.


It will take a minute or two for Elastic Beanstalk to update your environment and
deploy the application.
12.After the deployment is complete, choose the Domain URL link (or, if you still
have the browser tab that displayed the 404 status, refresh that page). The web
application that you deployed displays.

Congratulations, you have successfully deployed an application on Elastic


Beanstalk!
13.Back in the Elastic Beanstalk console, choose Configuration in the left pane.
Notice the details here.
For example, in the Instance traffic and scaling panel, it indicates the EC2
Security groups, minimum and maximum instances, and instance type details of
the Amazon Elastic Compute Cloud (Amazon EC2) instances that are hosting
your web application.

14.In the Networking, database, and tags panel, no configuration details display,
because the environment does not include a database.
15.In the Networking, database, and tags row, choose Edit.
Note that you could easily add a database to this environment if you wanted
to: you only need to set a few basic configurations and choose Apply. (However,
for the purposes of this activity, you do not need to add a database.)
○ Choose Cancel at the bottom of the screen.
17.In the left panel under Environment, choose Monitoring.
Browse through the charts to see the kinds of information that are available to
you.
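Deploying a new application version can also be done without the console. The sketch below assumes the tomcat.zip bundle has already been uploaded to an S3 bucket you own; the bucket, application, and environment names are placeholders.

# Register a new application version from a bundle stored in S3
aws elasticbeanstalk create-application-version \
    --application-name my-app \
    --version-label v1 \
    --source-bundle S3Bucket=my-bucket,S3Key=tomcat.zip

# Deploy that version to the running environment
aws elasticbeanstalk update-environment \
    --environment-name my-env \
    --version-label v1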

Task 3: Explore the AWS resources that support your


application

17.In the console, in the search box to the right of Services, search for and
choose EC2.

18.Choose Instances.
Note that two instances that support your web application are running (they both
contain samp in their names).

19.If you want to continue exploring the Amazon EC2 service resources that were
created by Elastic Beanstalk, feel free to explore them. You will find:
○ A security group with port 80 open
○ A load balancer that both instances belong to
○ An Auto Scaling group that runs from two to six instances, depending on
the network load
20.Though Elastic Beanstalk created these resources for you, you still have access
to them.

Conclusion : Thus, we have successfully implemented the given task.


Lab 4: Working with EBS
Name : Rishit Ravichandran
PRN : 121A1088
Batch : D1

Lab Overview

This lab focuses on Amazon Elastic Block Store (Amazon EBS), a key underlying
storage mechanism for Amazon EC2 instances. In this lab, you will learn how to create
an Amazon EBS volume, attach it to an instance, apply a file system to the volume, and then
take a snapshot backup.

Topics covered

By the end of this lab, you will be able to:

● Create an Amazon EBS volume


● Attach and mount your volume to an EC2 instance
● Create a snapshot of your volume
● Create a new volume from your snapshot
● Attach and mount the new volume to your EC2 instance

Duration

This lab will require approximately 30 minutes to complete.


AWS service restrictions

In this lab environment, access to AWS services and service actions might be restricted
to the ones that are needed to complete the lab instructions. You might encounter errors
if you attempt to access other services or perform actions beyond the ones that are
described in this lab.

What is Amazon Elastic Block Store?


Amazon Elastic Block Store (Amazon EBS) offers persistent storage for Amazon EC2
instances. Amazon EBS volumes are network-attached and persist independently from
the life of an instance. Amazon EBS volumes are highly available, highly reliable
volumes that can be leveraged as an Amazon EC2 instance's boot partition or attached
to a running Amazon EC2 instance as a standard block device.

When used as a boot partition, Amazon EC2 instances can be stopped and
subsequently restarted, enabling you to pay only for the storage resources used while
maintaining your instance's state. Amazon EBS volumes offer greatly improved
durability over local Amazon EC2 instance stores because Amazon EBS volumes are
automatically replicated on the backend (in a single Availability Zone).

For those wanting even more durability, Amazon EBS provides the ability to create
point-in-time consistent snapshots of your volumes that are then stored in Amazon
Simple Storage Service (Amazon S3) and automatically replicated across multiple
Availability Zones. These snapshots can be used as the starting point for new Amazon EBS
volumes and can protect your data for long-term durability. You can also easily share
these snapshots with co-workers and other AWS developers.

This lab guide explains basic concepts of Amazon EBS in a step-by-step fashion. However, it
can only give a brief overview of Amazon EBS concepts. For further information, see the
Amazon EBS documentation.

Amazon EBS Volume Features
Amazon EBS volumes deliver the following features:
● Persistent storage: Volume lifetime is independent of any particular Amazon EC2
instance.
● General purpose: Amazon EBS volumes are raw, unformatted block devices that
can be used from any operating system.
● High performance: Amazon EBS volumes are equal to or better than local
Amazon EC2 drives.
● High reliability: Amazon EBS volumes have built-in redundancy within an
Availability Zone.
● Designed for resiliency: The AFR (Annual Failure Rate) of Amazon EBS is
between 0.1% and 1%.
● Variable size: Volume sizes range from 1 GB to 16 TB.
● Easy to use: Amazon EBS volumes can be easily created, attached, backed up,
restored, and deleted.

Accessing the AWS Management Console

1. At the top of these instructions, choose Start Lab.
○ The lab session starts.
○ A timer displays at the top of the page and shows the time remaining in the
session.
Tip: To refresh the session length at any time, choose Start Lab again before the
timer reaches 0:00.
○ Before you continue, wait until the circle icon to the right of the AWS link in the
upper-left corner turns green.

2. To connect to the AWS Management Console, choose the AWS link in the
upper-left corner.
○ A new browser tab opens and connects you to the console.
Tip: If a new browser tab does not open, a banner or icon is usually at the top of
your browser with the message that your browser is preventing the site
from opening pop-up windows. Choose the banner or icon, and then
choose Allow pop-ups.
3. Arrange the AWS Management Console tab so that it displays alongside these
instructions. Ideally, you will be able to see both browser tabs at the same time, to
make it easier to follow the lab steps.

Getting Credit for your work


At the end of this lab you will be instructed to submit the lab to receive a score based on
your progress.

Tip: The script that checks your work may only award points if you name resources and set
configurations as specified. In particular, values in these instructions that appear in This
Format should be entered exactly as documented (case-sensitive).

Task 1: Create a New EBS Volume
In this task, you will create and attach an Amazon EBS volume to a new Amazon EC2 instance.

4. In the AWS Management Console, in the search box next to Services, search for and
select EC2.

5. In the left navigation pane, choose Instances.

An Amazon EC2 instance named Lab has already been launched for your lab.

6. Note the Availability Zone of the instance. It will look similar to us-east-1a.

7. In the left navigation pane, choose Volumes.

You will see an existing volume that is being used by the Amazon EC2 instance. This
volume has a size of 8 GiB, which makes it easy to distinguish from the volume
you will create next, which will be 1 GiB in size.

8. Choose Create volume then configure:
○ Volume Type: General Purpose SSD (gp2)
○ Size (GiB): 1. NOTE: You may be restricted from creating large volumes.
○ Availability Zone: Select the same availability zone as your EC2
instance.
○ Choose Add tag
○ In the Tag Editor, enter:
■ Key: Name
■ Value: My Volume

9. Choose Create Volume
Your new volume will appear in the list, and will move from the Creating state to the
Available state. You may need to choose refresh to see your new volume.
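For reference, the same volume could be created with the AWS CLI; the Availability Zone shown is an example and should match your instance's zone.

aws ec2 create-volume \
    --volume-type gp2 \
    --size 1 \
    --availability-zone us-east-1a \
    --tag-specifications 'ResourceType=volume,Tags=[{Key=Name,Value=My Volume}]'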

Task 2: Attach the Volume to an Instance
In this task you will attach the new EBS volume to the Amazon EC2 instance.

10. Select My Volume.

11. In the Actions menu, choose Attach volume.

12. Choose the Instance field, then select the Lab instance.
Note that the Device name is set to /dev/sdf. Notice also the message displayed that
"Newer Linux kernels may rename your devices to /dev/xvdf through
/dev/xvdp internally, even when the device name entered here (and shown in the details)
is /dev/sdf through /dev/sdp."

13. Choose Attach volume
The volume state is now In-use.
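The equivalent attach operation with the AWS CLI would look like the following; the volume ID and instance ID are placeholders.

aws ec2 attach-volume \
    --volume-id vol-0abcd1234efgh5678 \
    --instance-id i-1234567890abcdef0 \
    --device /dev/sdf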

Task 3: Connect to Your Amazon EC2 Instance
In this task, you will connect to the EC2 instance using EC2 Instance Connect, which provides
access to a terminal in the browser.

14. In the AWS Management Console, in the search box next to Services, search for and
select EC2.

15. Choose Instances.

16. Select the Lab instance, and then choose Connect.

17. On the EC2 Instance Connect tab, choose Connect.

An EC2 Instance Connect terminal session opens and displays a $ prompt.
Task 4: Create and Configure Your File System
In this task, you will add the new volume to a Linux instance as an ext3 file system under the
/mnt/data-store mount point.

View the storage available on your instance. Run the following command:

df -h

Filesystem      Size  Used  Avail  Use%  Mounted on
devtmpfs        4.0M     0   4.0M    0%  /dev
tmpfs           475M     0   475M    0%  /dev/shm
tmpfs           190M  2.8M   188M    2%  /run
/dev/xvda1      8.0G  1.6G   6.5G   20%  /
tmpfs           475M     0   475M    0%  /tmp
tmpfs            95M     0    95M    0%  /run/user/1000

The output shows the original 8 GB /dev/xvda1 disk volume mounted at /, which
indicates that it is the root volume. It hosts the Linux operating system of the EC2 instance.
The other 1 GB volume that you attached to the Lab instance is not listed,
because you have not yet created a file system on it or mounted the disk. Those actions
are necessary so that the Linux operating system can make use of the
new storage space. You will take those actions next.

Create an ext3 file system on the new volume:

sudo mkfs -t ext3 /dev/sdf

The output should indicate that a new file system was created on the attached volume.
Create a directory for mounting the new storage volume:

sudo mkdir /mnt/data-store

Mount the new volume:

sudo mount /dev/sdf /mnt/data-store

To configure the Linux instance to mount this volume whenever the instance is started,
you will need to add a line to /etc/fstab. Run the command below to accomplish that:

echo "/dev/sdf /mnt/data-store ext3 defaults,noatime 1 2" | sudo tee -a /etc/fstab

View the configuration file to see the setting on the last line:

cat /etc/fstab

View the available storage again:

df -h

The output will look similar to what is shown below.

Filesystem      Size  Used  Avail  Use%  Mounted on
devtmpfs        484M     0   484M    0%  /dev
tmpfs           492M     0   492M    0%  /dev/shm
tmpfs           492M  460K   491M    1%  /run
tmpfs           492M     0   492M    0%  /sys/fs/cgroup
/dev/xvda1      8.0G  1.5G   6.6G   19%  /
tmpfs            99M     0    99M    0%  /run/user/0
tmpfs            99M     0    99M    0%  /run/user/1000
/dev/xvdf       976M  1.3M   924M    1%  /mnt/data-store

Notice the last line. The output now lists /dev/xvdf, which is the new mounted volume.

On your mounted volume, create a file and add some text to it.

sudo sh -c "echo some text has been written > /mnt/data-store/file.txt"

Verify that the text has been written to your volume.

cat /mnt/data-store/file.txt

Leave the EC2 Instance Connect session running. You will return to it later
in this lab.

Task 5: Create an Amazon EBS Snapshot
In this task, you will create a snapshot of your EBS volume.

You can create any number of point-in-time, consistent snapshots from Amazon EBS volumes
at any time. Amazon EBS snapshots are stored in Amazon S3 with high
durability. New Amazon EBS volumes can be created out of snapshots for cloning or restoring
backups. Amazon EBS snapshots can also be easily shared among AWS users or
copied over AWS regions.

26. In the EC2 Console, choose Volumes and select My Volume.

27. In the Actions menu, select Create snapshot.

28. Choose Add tag then configure:
○ Key: Name
○ Value: My Snapshot
○ Choose Create snapshot
29. In the left navigation pane, choose Snapshots.
Your snapshot is displayed. The status will first have a state of Pending, which means
that the snapshot is being created. It will then change to a state of Completed.
Note: Only used storage blocks are copied to snapshots, so empty blocks do not occupy
any snapshot storage space.

30. In your EC2 Instance Connect session, delete the file that you created on your
volume.
31. sudo rm /mnt/data-store/file.txt

33. Verify that the file has been deleted.
34. ls /mnt/data-store/
35. Your file has been deleted.
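A snapshot can also be created from the command line; the volume ID below is a placeholder.

aws ec2 create-snapshot \
    --volume-id vol-0abcd1234efgh5678 \
    --description "Backup of My Volume" \
    --tag-specifications 'ResourceType=snapshot,Tags=[{Key=Name,Value=My Snapshot}]'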

Task 6: Restore the Amazon EBS Snapshot
If you ever wish to retrieve data stored in a snapshot, you can restore the snapshot to a new
EBS volume.

Create a Volume Using Your Snapshot
32. In the EC2 console, select My Snapshot.

33. In the Actions menu, select Create volume from snapshot.

34. For Availability Zone, select the same availability zone that you used earlier.

35. Choose Add tag then configure:
○ Key: Name
○ Value: Restored Volume
○ Choose Create volume
36. Note: When restoring a snapshot to a new volume, you can also modify the
configuration, such as changing the volume type, size or Availability Zone.

Attach the Restored Volume to Your EC2 Instance
36. In the left navigation pane, choose Volumes.

37. Select Restored Volume.

38. In the Actions menu, select Attach volume.

39. Choose the Instance field, then select the Lab instance that appears.
Note that the Device field is set to /dev/sdg. You will use this device identifier in a later
task.

40. Choose Attach volume
The volume state is now In-use.
Mount the Restored Volume

41. Create a directory for mounting the new storage volume:

42. sudo mkdir /mnt/data-store2

43. Mount the new volume:

44. sudo mount /dev/sdg /mnt/data-store2

45. Verify that the volume you mounted has the file that you created earlier.

46. ls /mnt/data-store2/

47. You should see file.txt.
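Restoring a snapshot to a new volume can likewise be scripted; the snapshot ID and Availability Zone below are placeholders.

aws ec2 create-volume \
    --snapshot-id snap-0abcd1234efgh5678 \
    --availability-zone us-east-1a \
    --tag-specifications 'ResourceType=volume,Tags=[{Key=Name,Value=Restored Volume}]'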
Conclusion:- We were successfully able to:

● Create an Amazon EBS volume


● Attach and mount your volume to an EC2 instance
● Create a snapshot of your volume
● Create a new volume from your snapshot
● Attach and mount the new volume to your EC2 instance
EXPERIMENT NO.7
Name : Rishit Ravichandran
PRN : 121A1088
BATCH : D1

AIM:-To study and Implement Database as a Service on SQL/NOSQL databases like AWS
RDS, AZURE SQL/ MongoDB Lab/ Firebase.

Lab 5: Build Your DB Server and Interact With Your DB Using an App

Lab Overview and objectives

This lab is designed to reinforce the concept of leveraging an AWS-managed database instance
for solving relational database needs.

Amazon Relational Database Service (Amazon RDS) makes it easy to set up, operate, and scale
a relational database in the cloud. It provides cost-efficient and resizable capacity while
managing time-consuming database administration tasks, which allows you to focus on your
applications and business. Amazon RDS provides you with six familiar database engines to
choose from: Amazon Aurora, Oracle, Microsoft SQL Server, PostgreSQL, MySQL and
MariaDB.

By the end of this lab, you will be able to:

● Launch an Amazon RDS DB instance with high availability.


● Configure the DB instance to permit connections from your web server.
● Open a web application and interact with your database.
Duration

This lab takes approximately 30 minutes.

AWS service restrictions

In this lab environment, access to AWS services and service actions might be restricted to the
ones that are needed to complete the lab instructions. You might encounter errors if you attempt
to access other services or perform actions beyond the ones that are described in this lab.

Accessing the AWS Management Console

1. At the top of these instructions, choose Start Lab.


○ The lab session starts.
○ A timer displays at the top of the page and shows the time remaining in the
session.
Tip: To refresh the session length at any time, choose Start Lab again before the
timer reaches 0:00.
○ Before you continue, wait until the circle icon to the right of the AWS link in the
upper-left corner turns green.

2. To connect to the AWS Management Console, choose the AWS link in the upper-left
corner.
○ A new browser tab opens and connects you to the console.
Tip: If a new browser tab does not open, a banner or icon is usually at the top of
your browser with the message that your browser is preventing the site from
opening pop-up windows. Choose the banner or icon, and then choose Allow
pop-ups.

3. Arrange the AWS Management Console tab so that it displays alongside these
instructions. Ideally, you will be able to see both browser tabs at the same time, to make it
easier to follow the lab steps.

Getting Credit for your work


At the end of this lab you will be instructed to submit the lab to receive a score based on your
progress.

Tip: The script that checks your work may only award points if you name resources and set
configurations as specified. In particular, values in these instructions that appear in This Format
should be entered exactly as documented (case-sensitive).

Task 1: Create a Security Group for the RDS DB Instance

In this task, you will create a security group to allow your web server to access your RDS DB
instance. The security group will be used when you launch the database instance.

4. In the AWS Management Console, in the search box next to Services , search for and
select VPC.

5. In the left navigation pane, choose Security groups.


7. Choose Create security group and then configure:


○ Security group name: DB Security Group
○ Description: Permit access from Web Security Group
○ VPC: Lab VPC
Tip: Choose the X next to VPC that is already selected, then choose Lab VPC
from the menu.
8. In the Inbound rules pane, choose Add rule
The security group currently has no rules. You will add a rule to permit access from the
Web Security Group.

9. Configure the following settings:


○ Type: MySQL/Aurora (3306)
○ Source: Place your cursor in the field to the right of Custom, type sg, and then
select Web Security Group.
10. This configures the Database security group to permit inbound traffic on port 3306 from
any EC2 instance that is associated with the Web Security Group.

11. Choose Create security group


You will use this security group when launching an Amazon RDS database in this lab.

Task 2: Create a DB Subnet Group

In this task, you will create a DB subnet group that is used to tell RDS which subnets can be used
for the database. Each DB subnet group requires subnets in at least two Availability Zones.

10. In the AWS Management Console, in the search box next to Services , search for and
select RDS.

11. In the left navigation pane, choose Subnet groups.


If the navigation pane is not visible, choose the menu icon in the top-left corner.

12. Choose Create DB Subnet Group then configure:


○ Name: DB-Subnet-Group
○ Description: DB Subnet Group
○ VPC: Lab VPC

13. Scroll down to the Add subnets section.

14. Expand the list of values under Availability Zones and select the first two zones:
us-east-1a and us-east-1b.

15. Expand the list of values under Subnets and select the subnets associated with the CIDR
ranges 10.0.1.0/24 and 10.0.3.0/24.
These subnets should now be shown in the Subnets selected table.
16. Choose Create
You will use this DB subnet group when creating the database in the next task.
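The same DB subnet group could also be created with the AWS CLI; the subnet IDs here are placeholders for the two subnets you selected.

aws rds create-db-subnet-group \
    --db-subnet-group-name DB-Subnet-Group \
    --db-subnet-group-description "DB Subnet Group" \
    --subnet-ids subnet-aaaaaaaa subnet-bbbbbbbb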

Task 3: Create an Amazon RDS DB Instance


In this task, you will configure and launch a Multi-AZ Amazon RDS deployment of a MySQL
database instance.

Amazon RDS Multi-AZ deployments provide enhanced availability and durability for Database
(DB) instances, making them a natural fit for production database workloads. When you
provision a Multi-AZ DB instance, Amazon RDS automatically creates a primary DB instance
and synchronously replicates the data to a standby instance in a different Availability Zone (AZ).

17. In the left navigation pane, choose Databases.

18. Choose Create database


If you see Switch to the new database creation flow at the top of the screen, please
choose it.

19. Select MySQL under Engine Options.

20. Under Templates choose Dev/Test.

21. Under Availability and durability choose Multi-AZ DB instance.


22. Under Settings, configure:
○ DB instance identifier: lab-db
○ Master username: main
○ Master password: lab-password
○ Confirm password: lab-password

23. Under DB instance class, configure:


○ Select Burstable classes (includes t classes).
○ Select db.t3.micro

24. Under Storage, configure:


○ Storage type: General Purpose (SSD)
○ Allocated storage: 20

25. Under Connectivity, configure:


○ Virtual Private Cloud (VPC): Lab VPC

26. Under Existing VPC security groups, from the dropdown list:
○ Choose DB Security Group.
○ Deselect default.
27. Under Monitoring expand Additional configuration.
○ Uncheck Enable Enhanced monitoring.

28. Under Additional configuration, configure:


○ Initial database name: lab
○ Uncheck Enable automatic backups.
○ Uncheck Enable encryption
29. This will turn off backups, which is not normally recommended, but will make the
database deploy faster for this lab.
30. Choose Create database
Your database will now be launched.
If you receive an error that mentions "not authorized to perform: iam:CreateRole", make
sure you unchecked Enable Enhanced monitoring in the previous step.
31. Choose lab-db (choose the link itself).
You will now need to wait approximately 4 minutes for the database to be available.
The deployment process is deploying a database in two different Availability zones.
While you are waiting, you might want to review the Amazon RDS FAQs or grab a cup
of coffee.

32. Wait until Info changes to Modifying or Available.

33. Scroll down to the Connectivity & security section and copy the Endpoint field.
It will look similar to: lab-db.xxxx.us-east-1.rds.amazonaws.com.

34. Paste the Endpoint value into a text editor. You will use it later in the lab.

Endpoint: lab-db.cremiigiuezw.us-east-1.rds.amazonaws.com
Port: 3306
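For comparison, a similar Multi-AZ MySQL instance could be launched with the AWS CLI. This is only a sketch: the security group ID is a placeholder, and in practice you should avoid putting passwords directly on the command line.

aws rds create-db-instance \
    --db-instance-identifier lab-db \
    --engine mysql \
    --db-instance-class db.t3.micro \
    --allocated-storage 20 \
    --storage-type gp2 \
    --multi-az \
    --master-username main \
    --master-user-password lab-password \
    --db-name lab \
    --vpc-security-group-ids sg-xxxxxxxx \
    --db-subnet-group-name DB-Subnet-Group \
    --backup-retention-period 0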

Task 4: Interact with Your Database

In this task, you will open a web application running on a web server that has been created for
you. You will configure it to use the database that you just created.
35. To discover the WebServer IP address, choose the AWS Details drop-down menu
above these instructions. Copy the IP address value.

36. Open a new web browser tab, paste the WebServer IP address and press Enter.
The web application will be displayed, showing information about the EC2 instance.

37. Choose the RDS link at the top of the page.


You will now configure the application to connect to your database.

38. Configure the following settings:


○ Endpoint: Paste the Endpoint you copied to a text editor earlier
○ Database: lab
○ Username: main
○ Password: lab-password
○ Choose Submit

39. A message will appear explaining that the application is running a command to copy
information to the database. After a few seconds the application will display an Address
Book.
The Address Book application is using the RDS database to store information.

40. Test the web application by adding, editing and removing contacts.
The data is being persisted to the database and is automatically replicating to the second
Availability Zone.
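You can also verify connectivity directly with a MySQL client from the web server (or any host permitted by the DB security group); the endpoint below is the placeholder value you copied earlier.

# Connect to the RDS endpoint and list the tables in the lab database
mysql -h lab-db.xxxx.us-east-1.rds.amazonaws.com -P 3306 -u main -p lab
mysql> SHOW TABLES;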
Submitting your work

39. To record your progress, choose Submit at the top of these instructions.

40. When prompted, choose Yes.


After a couple of minutes, the grades panel appears and shows you how many points you
earned for each task. If the results don't display after a couple of minutes, choose Grades
at the top of these instructions.
Tip: You can submit your work multiple times. After you change your work, choose
Submit again. Your last submission is recorded for this lab.

41. To find detailed feedback about your work, choose Submission Report.
Tip: For any checks where you did not receive full points, there are sometimes helpful
details provided in the submission report.

Lab Complete
Conclusion: Thus, we have successfully built a database server and interacted with it
using an application.
Name : Rishit Ravichandran
PRN : 121A1088
Batch : D1

Experiment No.9

Lab 1: Introduction to AWS IAM


AWS Identity and Access Management (IAM) is a web service that enables Amazon
Web Services (AWS) customers to manage users and user permissions in AWS. With
IAM, you can centrally manage users, security credentials such as access keys, and
permissions that control which AWS resources users can access.

Lab overview and objectives


This lab will demonstrate:

● Exploring pre-created IAM Users and Groups


● Inspecting IAM policies as applied to the pre-created groups
● Following a real-world scenario, adding users to groups with specific
capabilities enabled
● Locating and using the IAM sign-in URL
● Experimenting with the effects of policies on service access

AWS service restrictions


In this lab environment, access to AWS services and service actions might be restricted
to the ones that are needed to complete the lab instructions. You might encounter errors
if you attempt to access other services or perform actions beyond the ones that are
described in this lab.
AWS Identity and Access Management
AWS Identity and Access Management (IAM) can be used to:

● Manage IAM Users and their access: You can create Users and assign them
individual security credentials (access keys, passwords, and multi-factor
authentication devices). You can manage permissions to control which
operations a User can perform.
● Manage IAM Roles and their permissions: An IAM Role is similar to a User, in
that it is an AWS identity with permission policies that determine what the identity
can and cannot do in AWS. However, instead of being uniquely associated with
one person, a Role is intended to be assumable by anyone who needs it.
● Manage federated users and their permissions: You can enable identity
federation to allow existing users in your enterprise to access the AWS
Management Console, to call AWS APIs and to access resources, without the
need to create an IAM User for each identity.

IAM (Identity and Access Management) in AWS is a crucial service that allows you to
manage access to AWS resources securely. It enables you to control who can access
your AWS resources (authentication) and what actions they can perform (authorization).
Here's an overview along with some key points and best practices:

Identity:
● IAM allows you to create and manage users, groups, and roles to
represent the people, services, and applications that interact with your
AWS resources.
● Users: Represent individual people and can be assigned security
credentials such as passwords or access keys.
● Groups: Collections of IAM users. Permissions can be assigned to groups
rather than individual users, which simplifies permissions management.
● Roles: AWS IAM roles are a way to delegate permissions to entities that
you trust. For example, you can create roles for applications running on EC2
instances, AWS Lambda functions, or for cross-account access.
Access Management:
● IAM provides fine-grained access control using policies. Policies are JSON
documents that define permissions.
● Policies can be attached to users, groups, or roles, and they specify the
actions allowed or denied on AWS resources.
● IAM policies follow the principle of least privilege, meaning users should
have only the permissions necessary to perform their tasks.
Security Features:
● Multi-Factor Authentication (MFA): Adds an extra layer of security to user
sign-ins and API calls. Requires users to present two or more forms of
identification (factors).
● Access Keys: IAM users can have access keys associated with their
account for programmatic access to AWS services.
● Password Policies: IAM allows you to enforce password policies such as
minimum length, complexity requirements, and rotation policies.
Best Practices:
● Use IAM Roles for EC2 Instances: Instead of storing access keys directly
on EC2 instances, use IAM roles to securely grant permissions to EC2
instances.
● Regularly review and update permissions: Regularly review IAM policies
and permissions to ensure they align with the principle of least privilege.
● Enable MFA: Enable MFA for IAM users to add an extra layer of security to
account logins.
● Use IAM Conditions: Use IAM conditions to further refine access control
based on various factors such as IP address, time of day, etc.
Audit and Monitoring:
● IAM provides logging capabilities that allow you to monitor actions
performed with IAM entities.
● Use AWS CloudTrail to capture all API calls made by or on behalf of an
AWS account. This can help in audit and compliance efforts.
Integration with Other AWS Services:
● IAM integrates with many other AWS services such as Amazon S3, EC2,
RDS, etc., allowing you to control access to these services using IAM
policies.
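
Where scripted administration is preferred over the console, the same kind of inspection can be done with the AWS CLI. The sketch below is for reference only: the group name is a hypothetical placeholder, and the restricted lab account may deny these calls:

# List the managed policies attached to a group (group name is hypothetical)
aws iam list-attached-group-policies --group-name example-group

# Retrieve details of an AWS managed policy by its ARN
aws iam get-policy --policy-arn arn:aws:iam::aws:policy/AmazonS3ReadOnlyAccess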

Task 1: Explore the Users and Groups

In this task, you will explore the Users and Groups that have already been created for
you in IAM.

4. In the search box to the right of Services, search for and choose IAM to open
the IAM console.
5. In the navigation pane on the left, choose Users.
The following IAM Users have been created for you:
○ user-1
○ user-2
○ user-3

6. Choose the user-1 link.


This will bring you to a summary page for user-1. The Permissions tab will be
displayed.

7. Notice that user-1 does not have any permissions.

8. Choose the Groups tab.


user-1 also is not a member of any groups.

9. Choose the Security credentials tab.


user-1 is assigned a Console password.
10.In the navigation pane on the left, choose User groups.
The following groups have already been created for you:
○ EC2-Admin
○ EC2-Support
○ S3-Support

11. Choose the EC2-Support group link.


This will bring you to the summary page for the EC2-Support group.

12.Choose the Permissions tab.


This group has a Managed Policy associated with it, called
AmazonEC2ReadOnlyAccess. Managed Policies are pre-built policies (built
either by AWS or by your administrators) that can be attached to IAM Users and
Groups. When the policy is updated, the changes are immediately applied to all
Users and Groups that have the policy attached.

13.Choose the plus (+) icon next to the AmazonEC2ReadOnlyAccess policy to view
the policy details.
Note: A policy defines what actions are allowed or denied for specific AWS
resources.
This policy is granting permission to List and Describe information about EC2,
Elastic Load Balancing, CloudWatch and Auto Scaling. This ability to view
resources, but not modify them, is ideal for assigning to a Support role.
The basic structure of the statements in an IAM Policy is:
○ Effect says whether to Allow or Deny the permissions.
○ Action specifies the API calls that can be made against an AWS Service
(eg cloudwatch:ListMetrics).
○ Resource defines the scope of entities covered by the policy rule (eg a
specific Amazon S3 bucket or Amazon EC2 instance, or * which means
any resource).
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": "ec2:Describe*",
            "Resource": "*"
        },
        {
            "Effect": "Allow",
            "Action": "elasticloadbalancing:Describe*",
            "Resource": "*"
        },
        {
            "Effect": "Allow",
            "Action": [
                "cloudwatch:ListMetrics",
                "cloudwatch:GetMetricStatistics",
                "cloudwatch:Describe*"
            ],
            "Resource": "*"
        },
        {
            "Effect": "Allow",
            "Action": "autoscaling:Describe*",
            "Resource": "*"
        }
    ]
}

14.Choose the minus icon (-) to hide the policy details.
15.In the navigation pane on the left, choose User groups.

16.Choose the S3-Support group link and then choose the Permissions tab.
The S3-Support group has the AmazonS3ReadOnlyAccess policy attached.

17.Choose the plus (+) icon to view the policy details.


This policy grants permissions to Get and List resources in Amazon S3.
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": [
                "s3:Get*",
                "s3:List*",
                "s3:Describe*",
                "s3-object-lambda:Get*",
                "s3-object-lambda:List*"
            ],
            "Resource": "*"
        }
    ]
}
18.Choose the minus icon (-) to hide the policy details.

19.In the navigation pane on the left, choose User groups.


20.Choose the EC2-Admin group link and then choose the Permissions tab. This
Group is slightly different from the other two. Instead of a Managed Policy, it has
an Inline Policy, which is a policy assigned to just one User or Group. Inline
Policies are typically used to apply permissions for one-off situations.

21.Choose the plus (+) icon to view the policy details.


This policy grants permission to view (Describe) information about Amazon EC2
and also the ability to Start and Stop instances.
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Action": [
                "ec2:Describe*",
                "ec2:StartInstances",
                "ec2:StopInstances"
            ],
            "Resource": [
                "*"
            ],
            "Effect": "Allow"
        }
    ]
}

22.Choose the minus icon (-) to hide the policy details.

Business Scenario
For the remainder of this lab, you will work with these Users and Groups to enable
permissions supporting the following business scenario:

Your company is growing its use of Amazon Web Services, and is using many Amazon
EC2 instances and a great deal of Amazon S3 storage. You wish to give access to new
staff depending upon their job function:
User In Group Permissions

user-1 S3-Support Read-Only access to Amazon S3

user-2 EC2-Support Read-Only access to Amazon EC2

user-3 EC2-Admin View, Start and Stop Amazon EC2 instances

Task 2: Add Users to Groups


You have recently hired user-1 into a role where they will provide support for Amazon
S3. You will add them to the S3-Support group so that they inherit the necessary
permissions via the attached AmazonS3ReadOnlyAccess policy.

You can ignore any "not authorized" errors that appear during this task. They are
caused by your lab account having limited permissions and will not impact your ability to
complete the lab.

Add user-1 to the S3-Support Group


23.In the left navigation pane, choose User groups.

24.Choose the S3-Support group link.


25.Choose the Users tab.

26.In the Users tab, choose Add users.

27.In the Add Users to S3-Support window, configure the following:


○ Select user-1.
○ At the bottom of the screen, choose Add users.
In the Users tab you will see that user-1 has been added to the group.
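
For reference, the same change could be made programmatically with the AWS CLI. This is only a sketch of the equivalent API actions; the restricted lab account may not allow these calls:

# Add user-1 to the S3-Support group, then confirm the group membership
aws iam add-user-to-group --user-name user-1 --group-name S3-Support
aws iam get-group --group-name S3-Support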

Add user-2 to the EC2-Support Group

You have hired user-2 into a role where they will provide support for Amazon EC2.

28.Using similar steps to the ones above, add user-2 to the EC2-Support group.
user-2 should now be part of the EC2-Support group.

Add user-3 to the EC2-Admin Group


You have hired user-3 as your Amazon EC2 administrator, who will manage your EC2
instances.

29.Using similar steps to the ones above, add user-3 to the EC2-Admin group.
user-3 should now be part of the EC2-Admin group.
30.In the navigation pane on the left, choose User groups.
Each Group should now have a 1 in the Users column, indicating the number of
Users in each Group.
If you do not have a 1 beside each group, revisit the instructions above to
ensure that each user is assigned to a User group, as shown in the table in the
Business Scenario section.
Task 3: Sign-In and Test Users
In this task, you will test the permissions of each IAM User.

31.In the navigation pane on the left, choose Dashboard.


A Sign-in URL for IAM users in this account link is displayed on the right. It
will look similar to: https://123456789012.signin.aws.amazon.com/console
This link can be used to sign-in to the AWS Account you are currently using.

32.Copy the Sign-in URL for IAM users in this account to a text editor.

https://058264095403.signin.aws.amazon.com/console

33.Open a private (Incognito) window.

Mozilla Firefox
○ Choose the menu bars at the top-right of the screen
○ Select New private window
Google Chrome
○ Choose the ellipsis at the top-right of the screen
○ Select New Incognito Window
Microsoft Edge
○ Choose the ellipsis at the top-right of the screen
○ Choose New InPrivate window
Microsoft Internet Explorer
○ Choose the Tools menu option
○ Choose InPrivate Browsing

37.Paste the IAM users sign-in link into the address bar of your private browser
session and press Enter.
Next, you will sign-in as user-1, who has been hired as your Amazon S3 storage
support staff.

38.Sign-in with:
○ IAM user name: user-1
○ Password: Lab-Password1

39.In the search box to the right of Services, search for and choose S3 to open the
S3 console.

40.Choose the name of the bucket that exists in the account and browse the
contents.
Since your user is part of the S3-Support Group in IAM, they have permission to
view a list of Amazon S3 buckets and the contents.
Note: The bucket does not contain any objects.
Now, test whether they have access to Amazon EC2.
41.In the search box to the right of Services, search for and choose EC2 to open the EC2
console.

42.In the left navigation pane, choose Instances.


You cannot see any instances. Instead, you see a message that states You are
not authorized to perform this operation. This is because this user has not been
granted any permissions to access Amazon EC2.

You will now sign-in as user-2, who has been hired as your Amazon EC2 support
person.

43.Sign user-1 out of the AWS Management Console by completing the following
actions:
○ At the top of the screen, choose user-1
○ Choose Sign Out
45.Paste the IAM users sign-in link into your private browser tab's address bar and
press Enter.
Note: This link should be in your text editor.

46.Sign-in with:
○ IAM user name: user-2
○ Password: Lab-Password2
47.In the search box to the right of Services, search for and choose EC2 to open
the EC2 console.

48.In the navigation pane on the left, choose Instances.


You are now able to see an Amazon EC2 instance because you have Read Only
permissions. However, you will not be able to make any changes to Amazon EC2
resources.
If you cannot see an Amazon EC2 instance, then your Region may be incorrect.
In the top-right of the screen, pull-down the Region menu and select the region
that you noted at the start of the lab (for example, N. Virginia).

○ Select the instance named LabHost.

49.In the Instance state menu above, select Stop instance.


50.In the Stop Instance window, select Stop.
You will receive an error stating You are not authorized to perform this operation.
This demonstrates that the policy only allows you to view information, without
making changes.

51.Choose the X to close the Failed to stop the instance message.


Next, check if user-2 can access Amazon S3.

52.In the search box to the right of Services, search for and choose S3 to open the
S3 console.
You will see the message You don't have permissions to list buckets because
user-2 does not have permission to access Amazon S3.
You will now sign-in as user-3, who has been hired as your Amazon EC2
administrator.

53.Sign user-2 out of the AWS Management Console by completing the following
actions:
○ At the top of the screen, choose user-2
○ Choose Sign Out
54.Paste the IAM users sign-in link into the address bar of your private browser tab
again and press Enter.
If it is not in your clipboard, retrieve it from the text editor where you stored it
earlier.

57.Sign-in with:
○ IAM user name: user-3
○ Password: Lab-Password3

58.In the search box to the right of Services, search for and choose EC2 to open
the EC2 console.

59.In the navigation pane on the left, choose Instances.


As an EC2 Administrator, you should now have permissions to Stop the Amazon
EC2 instance.
○ Select the instance named LabHost.
If you cannot see an Amazon EC2 instance, then your Region may be incorrect.
In the top-right of the screen, pull-down the Region menu and select the region
that you noted at the start of the lab (for example, N. Virginia).

60.In the Instance state menu, choose Stop instance.


61.In the Stop instance window, choose Stop.
The instance will enter the stopping state and will shut down.

62.Close your private browser window.


Name : Rishit Ravichandran
PRN : 121A1088
BATCH: D-1
EXPERIMENT 10

Aim: To study and Implement Containerization using Docker.

Theory:

1. Containerization

Containerization is OS-based virtualization that creates multiple virtual units in the user space,
known as Containers. Containers share the same host kernel but are isolated from each other
through private namespaces and resource control mechanisms at the OS level. Container-based
virtualization provides a different level of abstraction and isolation when compared with
hypervisors. Hypervisors virtualize the underlying hardware, which introduces overhead from
emulated hardware and virtual device drivers.

A full operating system (e.g., Linux or Windows) runs on top of this virtualized hardware in each
virtual machine instance. But in contrast, containers implement isolation of processes at the
operating system level, thus avoiding such overhead. These containers run on top of the same
shared operating system kernel of the underlying host machine and one or more processes can be
run within each container.
With containers, you do not have to pre-allocate RAM; memory is allocated dynamically as the
containers run. With VMs, you must pre-allocate memory before creating the virtual machine.
Containerization therefore offers better resource utilization than VMs and a much shorter
boot-up process, and is often described as the next evolution in virtualization.

Containers can run virtually anywhere, which greatly eases development and deployment: on Linux,
Windows, and Mac operating systems; on virtual machines or bare metal, on a developer’s
machine or in data centers on-premises; and of course, in the public cloud. Containers virtualize
CPU, memory, storage, and network resources at the OS level, providing developers with a
sandboxed view of the OS logically isolated from other applications.

2. Containerization using Docker

Docker is a containerization platform used to package an application and all of its
dependencies together in the form of containers, so that the application works seamlessly in
any environment, whether development, testing, or production. Docker is a tool designed to make
it easier to create, deploy, and run applications by using containers.
Docker is the world's leading software container platform. It was launched in 2013 by a company
called dotCloud, Inc., which was later renamed Docker, Inc. It is written in the Go language. In
the relatively short time since its launch, many teams have already shifted to it from VMs.
Docker is designed to benefit both developers and system administrators by making it a part of
many DevOps toolchains. Developers can write code without worrying about the testing and
production environments.

3. Docker Architecture

Docker architecture consists of the Docker client, the Docker daemon running on the Docker host,
and the Docker Hub registry. Docker has a client-server architecture in which the client
communicates with the Docker daemon on the Docker host through a REST API, over UNIX sockets or
a network interface. To build a Docker image, the client sends the build command to the Docker
daemon, which builds the image from the given inputs and saves it to the registry. To use an
existing image instead, the client issues the pull command, and the daemon pulls the image from
Docker Hub. Finally, to run an image, the client issues the run command, and the daemon creates
a container from that image.
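
The commands below illustrate this build/pull/run flow from the client side. This is only a sketch: the image tag myapp:1.0 and the container name web are placeholder names, and the build command assumes a Dockerfile exists in the current directory.

# Build an image from the Dockerfile in the current directory and tag it
docker build -t myapp:1.0 .

# Pull an existing image from Docker Hub instead of building one
docker pull nginx:latest

# Run a container from an image, mapping host port 8080 to container port 80
docker run -d --name web -p 8080:80 nginx:latest

# List running containers
docker ps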
4. Configuring MySQL in Docker
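
The configuration screenshots for this step are not reproduced here. As a rough sketch, MySQL can be run in a container using the official mysql image; the container name, password, and database name below are illustrative placeholders rather than the exact values used in the lab:

# Start a MySQL container; MYSQL_ROOT_PASSWORD and MYSQL_DATABASE are
# environment variables supported by the official mysql image
docker run -d --name lab-mysql \
  -e MYSQL_ROOT_PASSWORD=lab-password \
  -e MYSQL_DATABASE=lab \
  -p 3306:3306 mysql:8.0

# Open a MySQL shell inside the running container
docker exec -it lab-mysql mysql -u root -p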
5. Configuring Python in Docker

Pipfile
Python app.py file

Dockerfile

Hello World output


Python Container
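
The files captured in the screenshots above (Pipfile, app.py, Dockerfile) are built into an image and run with commands along these lines; the image tag hello-python is a placeholder, and the actual Dockerfile contents are those shown in the screenshots:

# Build the image from the Dockerfile in the project directory
docker build -t hello-python .

# Run the container; it prints the Hello World output shown above and exits
docker run --rm hello-python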

6. Connecting Python and MYSQL in Docker


Docker-compose file

Docker file
Python file

Running in command prompt


Container created and connection established
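
The exact docker-compose.yml, Dockerfile, and Python source are those shown in the screenshots above. Assuming a docker-compose.yml that defines the Python application service and a MySQL service, the stack would typically be brought up and inspected with Compose commands such as:

# Build the images and start both containers defined in docker-compose.yml
docker compose up --build -d

# Check that both services are running and view the application logs
docker compose ps
docker compose logs

# Shut the stack down when finished
docker compose down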

Conclusion: Thus, we have successfully implemented containerization using Docker.


Name : Rishit Ravichandran
PRN : 121A1088
Batch : D-1

Experiment 11

Aim: To study and implement container orchestration using kubernetes.

THEORY : WHAT IS CONTAINER ORCHESTRATION?


Container orchestration automatically provisions, deploys, scales, and manages containerized applications,
so that developers do not have to worry about the underlying infrastructure. Container orchestration can be
implemented anywhere containers run, allowing teams to automate the life-cycle management of containers.

What is kubernetes?
Kubernetes automates operational tasks of container management and includes built-in commands for deploying
applications, rolling out changes to your applications, scaling your applications up and down to fit changing
needs, monitoring your applications, and more—making it easier to manage applications.
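
As a sketch of those built-in commands (the deployment name my-app, the manifest file name, and the image tag are hypothetical placeholders, not objects created in this lab):

# Deploy or update an application from a manifest
kubectl apply -f deployment.yml

# Scale the application up or down to match demand
kubectl scale deployment/my-app --replicas=5

# Roll out a new image version and watch the rollout progress
kubectl set image deployment/my-app my-app=my-image:2.0
kubectl rollout status deployment/my-app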

Components of kubernetes?
• Kubernetes Control Plane. The Kubernetes control plane is the set of tools that manages clusters and the
workloads running on them.
• Kubernetes Nodes.
• Optional Kubernetes Extensions.
• The API Server.
• The Kubernetes Scheduler.
• The Kubernetes Controller.
• Etcd.
• Kubernetes Pods.

What is minikube ?
Minikube is a tool that sets up a Kubernetes environment on a local PC or laptop. It’s technically a Kubernetes
distribution, but because it addresses a different type of use case than most other distributions (like Rancher,
OpenShift, and EKS), it’s more common to hear folks refer to it as a tool rather than a distribution.
PROCEDURE:
1. Installing Kubectl

1. Install kubectl binary with curl on Windows

curl -LO "https://dl.k8s.io/release/v1.23.0/bin/windows/amd64/kubectl.exe"

2. Validate the binary (optional)

Download the kubectl checksum file:

curl -LO "https://dl.k8s.io/v1.23.0/bin/windows/amd64/kubectl.exe.sha256"

Validate the kubectl binary against the checksum file:

1. Using Command Prompt to manually compare CertUtil's output to the checksum file
downloaded:

CertUtil -hashfile kubectl.exe SHA256


type kubectl.exe.sha256
2. Using PowerShell to automate the verification using the -eq operator to get a True or False
result:

$($(CertUtil -hashfile .\kubectl.exe SHA256)[1] -replace " ", "") -eq $(type .\kubectl.exe.sha256)

2. Installing minikube

1. Download the latest release.

Or if using PowerShell, use this command:

New-Item -Path 'c:\' -Name 'minikube' -ItemType Directory -Force


Invoke-WebRequest -OutFile 'c:\minikube\minikube.exe' -Uri
'https://github.com/kubernetes/minikube/releases/latest/download/minikube-windows-
amd64.exe' -UseBasicParsing

2. Add the binary in to your PATH.

Make sure to run PowerShell as Administrator.

$oldPath = [Environment]::GetEnvironmentVariable('Path',
[EnvironmentVariableTarget]::Machine)
if ($oldPath.Split(';') -inotcontains 'C:\minikube'){ `
[Environment]::SetEnvironmentVariable('Path', $('{0};C:\minikube' -f $oldPath),
[EnvironmentVariableTarget]::Machine) `
}
3. Starting our Cluster

1. Initializing minikube

minikube start

2. Checking if it is correctly initialized

minikube status

3. Checking the cluster node

kubectl get nodes

3. Deploy Applications using Minikube

1. Create a sample deployment and expose it on port 8080:

kubectl create deployment hello-minikube --image=k8s.gcr.io/echoserver:1.4

kubectl expose deployment hello-minikube --type=NodePort --port=8080

kubectl get deployment


2. Deployment will soon show up when you run:

kubectl get services hello-minikube

kubectl get pods
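
To actually reach the service exposed above, minikube can print a URL for the NodePort service, which can then be opened in a browser or queried with curl; the URL and port will differ per machine, so the one below is only a placeholder:

# Print a reachable URL for the hello-minikube NodePort service
minikube service hello-minikube --url

# Send a test request to the echo server (replace the URL with the one printed above)
curl http://127.0.0.1:PORT/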

4. Creating different pods

kubectl run nginx --image nginx

kubectl get pods


kubectl run siesgst --image mysql

kubectl get pods

5. Another method to deploy pods using yaml

The following is an example of a Deployment. It creates a ReplicaSet to bring up three
nginx Pods:

yamlpods.yml

apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
  labels:
    app: nginx
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:1.14.2
        ports:
        - containerPort: 80

kubectl create -f yamlpods.yml

kubectl get pods

Detailed description of pods


kubectl describe deployment
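
Optionally, the deployment created from the manifest can be scaled and the lab resources cleaned up afterwards; the sketch below references the objects created in the steps above:

# Scale the nginx deployment from 3 replicas to 5 and verify
kubectl scale deployment nginx-deployment --replicas=5
kubectl get pods

# Clean up the objects created in this lab and stop the local cluster
kubectl delete -f yamlpods.yml
kubectl delete service hello-minikube
kubectl delete deployment hello-minikube
kubectl delete pod nginx siesgst
minikube stop

Conclusion: Thus, we have successfully implemented container orchestration using Kubernetes.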
