Cloud Computing


ARTIFICIAL INTELLIGENCE AND DATA SCIENCE

CCS335 – CLOUD COMPUTING LABORATORY


ACADEMIC YEAR 2023 – 2024 (ODD SEMESTER)
PANIMALAR INSTITUTE OF TECHNOLOGY

CHENNAI - 600123

REGISTER NO.

Certified that this is the bonafide record of work done by

in the III Year / V Semester B.Tech ARTIFICIAL

INTELLIGENCE AND DATA SCIENCE degree course, CCS335-CLOUD

COMPUTING LABORATORY during the academic year 2023-2024.

Signature of the HOD Signature of the Staff-In-Charge

Date:

Submitted for the Anna University Practical Examination held on


at Panimalar Institute of Technology, Chennai – 600123.

INTERNAL EXAMINER EXTERNAL EXAMINER


TABLE OF CONTENTS

EX.NO.  DATE  EXPERIMENT NAME  PAGE NO.  MARKS  STAFF SIGN.

1  Install Virtualbox/VMware/Equivalent open source cloud workstation with different flavours of Linux or Windows OS on top of Windows 8 and above
2  Install a C compiler in the virtual machine created using a virtual box and execute simple programs
3  Install Google App Engine. Create a hello world app and other simple web applications using Python/Java
4  Use the GAE launcher to launch the web applications
5  Simulate a cloud scenario using CloudSim and run a scheduling algorithm that is not present in CloudSim
6  Find a procedure to transfer the files from one virtual machine to another virtual machine
7  Install Hadoop single node cluster and run simple applications like wordcount
8  Creating and executing your first container using Docker
9  Run a container from Docker Hub

CONTENT BEYOND SYLLABUS: ADDITIONAL EXPERIMENTS

1  Procedure to configure Eucalyptus
2  Study and implementation of Storage as a Service
3  Study of Amazon Web Services


INTRODUCTION TO CLOUD COMPUTING

Cloud computing is a technology that uses the internet and central remote servers to
maintain data and applications. It is a model for enabling ubiquitous, on-demand access to a
shared pool of configurable computing resources (e.g., computer networks, servers, storage,
applications and services) that can be rapidly provisioned and released with minimal
management effort. This cloud model promotes availability and is composed of five essential
characteristics, three service models and four deployment models.

Figure: 1 Cloud Architecture

A large-scale distributed computing paradigm that is driven by economies of
scale, in which a pool of abstracted, virtualized, dynamically scalable, managed computing
power, storage, platforms and services are delivered on demand to external customers over the
Internet.

DEPLOYMENT MODELS

Private cloud

Private cloud is cloud infrastructure operated solely for a single organization. It may be
managed by the organization or by a third-party and may exist on premise or off premise.

Public cloud

The cloud infrastructure is made available to the general public or a large industry group and
is owned by an organization selling cloud services.

Hybrid cloud

The cloud infrastructure is a composition of two or more clouds (private, community, public)
that remain unique entities but are bound together by standardized or proprietary technology
that enables data and application portability (e.g., cloud bursting for load balancing between
clouds).

Community cloud

The cloud infrastructure is shared by several organizations and supports a specific community
that has shared concerns (security, compliance, jurisdiction, etc.). It may be managed by the
organization or by a third-party and may exist on premise or off premise.

SERVICE MODELS

Software as a Service (SaaS)

The capability provided to the consumer is to use the provider's applications running
on a cloud infrastructure. The applications are accessible from various client devices through
either a thin client interface, such as a web browser (e.g., web-based email), or a program
interface. The consumer does not manage or control the underlying cloud infrastructure
including network, servers, operating systems, storage, or even individual application
capabilities, with the possible exception of limited user-specific application configuration
settings.

Platform as a Service (PaaS)

The capability provided to the consumer is to deploy onto the cloud infrastructure
consumer-created or acquired applications created using programming languages, libraries,
services, and tools supported by the provider. The consumer does not manage or control the
underlying cloud infrastructure including network, servers, operating systems, or storage, but
has control over the deployed applications and possibly configuration settings for the
application-hosting environment.

Figure: 2 Cloud Service Models

Infrastructure as a Service (IaaS)


The capability provided to the consumer is to provision processing, storage, networks,
and other fundamental computing resources where the consumer is able to deploy and run
arbitrary software, which can include operating systems and applications. The consumer does
not manage or control the underlying cloud infrastructure but has control over operating
systems, storage, and deployed applications; and possibly limited control of select networking
components (e.g., host firewalls).
ESSENTIAL CHARACTERISTICS

On-demand self-service

A consumer can unilaterally provision computing capabilities, such as server time


and network storage, as needed automatically without requiring human interaction with each
service provider.

Broad network access

Capabilities are available over the network and accessed through standard
mechanisms that promote use by heterogeneous thin or thick client platforms (e.g., mobile
phones, tablets, laptops and workstations).

Resource pooling

The provider's computing resources are pooled to serve multiple consumers using a
multi-tenant model, with different physical and virtual resources dynamically assigned and

reassigned according to consumer demand. There is a sense of location independence in that
the customer generally has no control or knowledge over the exact location of the provided
resources but may be able to specify location at a higher level of abstraction (e.g., country,
state or datacenter). Examples of resources include storage, processing, memory and network
bandwidth.

Rapid elasticity

Capabilities can be elastically provisioned and released, in some cases automatically,


to scale rapidly outward and inward commensurate with demand. To the consumer, the
capabilities available for provisioning often appear to be unlimited and can be appropriated in
any quantity at any time.

Measured service

Cloud systems automatically control and optimize resource use by leveraging a


metering capability at some level of abstraction appropriate to the type of service (e.g.
storage, processing, bandwidth and active user accounts). Resource usage can be monitored,
controlled and reported, providing transparency for the provider and consumer.
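As a rough illustration of what a metering layer computes, the sketch below totals hourly storage samples into a bill. The class name, sample values and per-GB-hour rate are all hypothetical, not any real provider's pricing:

```java
import java.util.Locale;

public class MeteringDemo {
    public static void main(String[] args) {
        // Hypothetical hourly storage-usage samples (in GB) reported by the meter.
        double[] gbPerHour = {10.0, 12.5, 12.5, 8.0};
        double ratePerGbHour = 0.002; // illustrative rate only

        // One sample per hour, so summing the samples gives GB-hours of usage.
        double gbHours = 0;
        for (double gb : gbPerHour) {
            gbHours += gb;
        }
        System.out.printf(Locale.ROOT, "Usage: %.1f GB-hours, bill: $%.4f%n",
                gbHours, gbHours * ratePerGbHour);
    }
}
```

The same pay-per-use idea applies to processing, bandwidth and active user accounts; only the metered unit changes.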

COMMON CHARACTERISTICS (VIRTUALIZATION)

Virtualization describes a technology in which an application, guest operating system


or data storage is abstracted away from the true underlying hardware or
software. Virtualization has three characteristics that make it ideal for cloud computing:

Partitioning

In virtualization, many applications and operating systems (OSes) are supported in a


single physical system by partitioning (separating) the available resources.

Isolation

Each virtual machine is isolated from its host physical system and other virtualized
machines. Because of this isolation, if one virtual instance crashes, it doesn't affect the other
virtual machines. In addition, data isn't shared between one virtual container and another.

Encapsulation

A virtual machine can be represented (and even stored) as a single file, so you can
identify it easily based on the service it provides. In essence, the encapsulated process could

be a business service. This encapsulated virtual machine can be presented to an application as
a complete entity. Therefore, encapsulation can protect each application so that it doesn't
interfere with another application.

Applications of virtualization

Virtualization can be applied broadly to just about everything that you could imagine:

 Memory
 Networks
 Storage
 Hardware
 Operating systems
 Applications

Forms of virtualization

 Virtual memory – a memory management technique that is implemented using both
hardware and software.
 Software – a part of a computer system that consists of data or computer
instructions.

ADVANTAGES OF CLOUD COMPUTING

 Reliability
 Manageability
 Strategic Edge

INTRODUCTION TO OPENSTACK

OpenStack is a software package that provides a cloud platform for Public and Private cloud
covering various use cases including Enterprise and Telecom. The main focus is on
Infrastructure as a Service (IaaS) cloud and additional services built upon IaaS. The services
developed by the community are available as tarballs to install from source, and are also
picked up and packaged for different Linux distributions or as part of OpenStack
distributions.

Map of OpenStack projects

Community

OpenStack is a community working towards one mission:

To produce a ubiquitous Open Source Cloud Computing platform that is easy to use, simple
to implement, interoperable between deployments, works well at all scales, and meets the
needs of users and operators of both public and private clouds.

OpenStack provides an ecosystem for collaboration. It has infrastructure for

 Code review
 Testing
 CI
 Version control
 Documentation
 A set of collaboration tools, like a wiki, IRC channels, Etherpad and Ethercalc.

The four opens

The basic principles of the OpenStack community are the four opens.

Open source
Open design
Open development
Open Community

EX NO: 1  Install Virtualbox/VMware/Equivalent open source cloud
workstation with different flavours of Linux or Windows OS on top
of Windows 8 and above
DATE:

AIM:

To write the procedure to install VirtualBox with different flavours


of Linux/Windows.

PROCEDURE:

Step 1: Run the VirtualBox setup and click the "Next" option.

Step 2: Keep clicking "Next" and finally click the "Install" option.

Step 3: To install Ubuntu as a virtual machine in Oracle VM VirtualBox, open the
Oracle VM VirtualBox Manager.

Step 4: Create new virtual machine

Step 5: Select the memory size

Step 6: Create a virtual hard drive

Step 7: Select the type of hardware file

Step 8: Select the type of storage on physical hard drive

Step 9: Select the size of virtual hard drive

Step 10: Select the new virtual OS created and click on “Settings”

Step 11: Select “Storage” from the left panel of the window

Step 12: Click on the first icon “Add CD/DVD device” in Controller:IDE

Step 13: Select “Choose Disk” and Choose the virtual machine to be used and click “Open”

Step 14: Click “OK” and select “Start” to run the virtual machine

Step 15: To install ubuntu, Click 'Install Ubuntu' button

Step 16: Ensure the 'Erase disk and install Ubuntu' option is selected and click the
'Install Now' button

Step 17: Click 'Continue' button for upcoming dialogue box

Step 18: Enter the name, username and password. Click 'Continue' button

Step 19: After completion of installation process, click on 'Restart Now' button.

Step 20: In the same way, we can install a Windows OS.

OUTPUT:

(i) Ubuntu Operating System in Virtual Machine

(ii) Windows 7 Operating System in Virtual Machine

RESULT:

Thus VirtualBox with different flavours of Linux/Windows has been installed
successfully.

EX NO: 2  INSTALLATION OF A C COMPILER IN THE VIRTUAL MACHINE
AND EXECUTING A SIMPLE PROGRAM
DATE:

AIM:

To install a C compiler in the virtual machine and execute a simple program.

PROCEDURE:

STEP:1 Before installing a C compiler in a virtual machine, we have to create a virtual machine
by opening the Kernel Virtual Machine (KVM) manager. Open the installed virtual manager, and
install different Ubuntu and Windows OS images in that virtual machine under different names.

STEP:2 Open the Ubuntu OS in our Virtual Machine.

STEP:3 Open a text editor in the Ubuntu OS, type a C program and save it on the desktop to
execute.

STEP:4 To install the C compiler in Ubuntu OS, open the terminal and type the command.

$sudo apt-get install gcc

STEP:5 Then compile the C program and execute it (assuming the file was saved as hello.c):

$ gcc hello.c -o hello
$ ./hello

OUTPUT:

RESULT:

Thus the C compiler was installed in the virtual machine and the program executed
successfully.

EX NO: 3  INSTALLATION OF GOOGLE APP ENGINE AND CREATION OF A
HELLO WORLD APP AND OTHER SIMPLE WEB APPLICATIONS
DATE:

AIM:

To install Google App Engine and create a hello world app using Java.

PROCEDURE:

 Create a project

Projects bundle code, VMs, and other resources together for easier development and
monitoring.

 Build and run your "Hello, world!" app

You will learn how to run your app using Cloud Shell, right in your browser. At the
end, you'll deploy your app to the web using the App Engine Maven plugin.

GCP (Google Cloud Platform) organizes resources into projects, which collect all of the
related resources for a single application in one place.

Begin by creating a new project or selecting an existing project for this tutorial.

Step1:

Select a project, or create a new one

Step2:

Using Cloud Shell

Cloud Shell is a built-in command-line tool for the console. You're going to use Cloud Shell
to deploy your app.

Open Cloud Shell

Open Cloud Shell by clicking the

Activate Cloud Shell button in the navigation bar in the upper-right corner of the console

Clone the sample code

Use Cloud Shell to clone and navigate to the "Hello World" code. The sample code is
cloned from your project repository to the Cloud Shell.

Note: If the directory already exists, remove the previous files before cloning:

rm -rf appengine-try-java

git clone \
https://github.com/GoogleCloudPlatform/appengine-try-java

cd appengine-try-java

Step3:

Configuring your deployment

You are now in the main directory for the sample code. You'll look at the files that
configure your application.

Exploring the application

Enter the following command to view your application code:

cat \

src/main/java/myapp/DemoServlet.java

This servlet responds to any request by sending a response containing the message Hello,
world!.
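The DemoServlet.java source itself is not reproduced in this record. As a stand-alone sketch of the same behaviour (every request gets a "Hello, world!" response), the following uses the JDK's built-in com.sun.net.httpserver rather than the servlet API that App Engine uses; the class name and port handling are illustrative:

```java
import com.sun.net.httpserver.HttpServer;
import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.io.OutputStream;
import java.net.InetSocketAddress;
import java.net.URL;
import java.nio.charset.StandardCharsets;

public class HelloServer {
    public static void main(String[] args) throws Exception {
        // Answer every request with "Hello, world!", like the sample servlet does.
        HttpServer server = HttpServer.create(new InetSocketAddress(0), 0);
        server.createContext("/", exchange -> {
            byte[] body = "Hello, world!".getBytes(StandardCharsets.UTF_8);
            exchange.sendResponseHeaders(200, body.length);
            try (OutputStream os = exchange.getResponseBody()) {
                os.write(body);
            }
        });
        server.start();

        // Fetch the page once to show the response, then shut the server down.
        int port = server.getAddress().getPort();
        URL url = new URL("http://localhost:" + port + "/");
        try (BufferedReader in = new BufferedReader(
                new InputStreamReader(url.openStream(), StandardCharsets.UTF_8))) {
            System.out.println(in.readLine());
        }
        server.stop(0);
    }
}
```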

Exploring your configuration

For Java, App Engine uses XML files to specify a deployment's configuration.

Enter the following command to view your configuration file:

cat pom.xml

The helloworld app uses Maven, which means you must specify a Project Object Model, or
POM, which contains information about the project and configuration details used by Maven
to build the project.

Step4:

Testing your app

Test your app on Cloud Shell

Cloud Shell lets you test your app before deploying to make sure it's running as intended, just
like debugging on your local machine.

To test your app enter the following:

mvn appengine:run

Preview your app with "Web preview"

Your app is now running on Cloud Shell. You can access the app by clicking the
Web preview

button at the top of the Cloud Shell pane and choosing Preview on port 8080.

Terminating the preview instance

Terminate the instance of the application by pressing Ctrl+C in the Cloud Shell

Step5:

Deploying to App Engine

Create an application

To deploy your app, you need to create an app in a region:

gcloud app create

Note: If you already created an app, you can skip this step.

Deploying with Cloud Shell

Now you can use Cloud Shell to deploy your app.

First, set which project to use:

gcloud config set project \


<YOUR-PROJECT>
Then deploy your app:
mvn appengine:deploy

Visit your app

Congratulations! Your app has been deployed.

The default URL of your app is a subdomain on appspot.com that starts with your
project's ID: <your-project>.appspot.com.

Try visiting your deployed application.

View your app's status

You can check in on your app by monitoring its status on the App Engine dashboard.

Open the Navigation menu in the upper-left corner of the console.

Then, select the App Engine section

RESULT:

Thus Google App Engine was installed and a hello world app was created using Java
successfully.

EX NO: 4  LAUNCH THE WEB APPLICATIONS USING GAE LAUNCHER
DATE:

AIM:
To launch the web applications using the GAE launcher.

PROCEDURE:
Before you can host your website on Google App Engine:
1. Create a new Cloud Console project or retrieve the project ID of an existing project
to use:
Go to the Projects page (https://console.cloud.google.com/project)
Tip: You can retrieve a list of your existing project IDs with the gcloud command line
tool (#before_you_begin).
2. Install and then initialize the Google Cloud SDK:
Download the SDK (/sdk/docs)
Creating a website to host on Google App Engine
Basic structure for the project
This guide uses the following structure for the project:
app.yaml: Configure the settings of your App Engine application.
www/: Directory to store all of your static files, such as HTML, CSS, images, and
JavaScript.
css/: Directory to store stylesheets.
style.css: Basic stylesheet that formats the look and feel of your site.
images/: Optional directory to store images.
index.html: An HTML file that displays content for your website.
js/: Optional directory to store JavaScript files.
Other asset directories.

Creating the app.yaml file


The app.yaml file is a configuration file that tells App Engine how to map URLs to your
static files. In the following steps, you will add handlers that will load www/index.html when
someone visits your website, and all static files will be stored in and called from the
www directory.
Create the app.yaml file in your application's root directory:
1. Create a directory that has the same name as your project ID. You can find
your project ID in the Console (https://console.cloud.google.com/).
2. In the directory that you just created, create a file named app.yaml.
3. Edit the app.yaml file and add the following code to the file:

runtime: python27
api_version: 1
threadsafe: true
handlers:
- url: /
static_files: www/index.html
upload: www/index.html
- url: /(.*)
static_files: www/\1
upload: www/(.*)
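To see what these two handlers do, the sketch below replays the mapping with a Java regular expression. This is only an illustration of the routing rules, not App Engine's actual implementation; the class and method names are made up:

```java
import java.util.regex.Matcher;
import java.util.regex.Pattern;

public class HandlerDemo {
    // Mimics the two app.yaml handlers: url "/" serves www/index.html,
    // and url "/(.*)" serves the matching file under www/.
    static String resolve(String url) {
        if (url.equals("/")) {
            return "www/index.html";             // first handler
        }
        Matcher m = Pattern.compile("/(.*)").matcher(url);
        return m.matches() ? "www/" + m.group(1) // second handler: \1 is group(1)
                           : null;
    }

    public static void main(String[] args) {
        System.out.println(resolve("/"));
        System.out.println(resolve("/css/style.css"));
    }
}
```

A request for / is served from www/index.html, while /css/style.css maps to www/css/style.css.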
More reference information about the app.yaml file can be found in the app.yaml reference
documentation (/appengine/docs/standard/python/config/appref).

Creating the index.html file

Create an HTML file that will be served when someone navigates to the root page of
your website. Store this file in your www directory. For example:

<html>
<head>
<title>Hello, world!</title>
<link rel="stylesheet" type="text/css" href="/css/style.css">
</head>
<body>
<h1>Hello, world!</h1>
<p>
This is a simple static HTML file that will be served from Google App
Engine.
</p>
</body>
</html>

Deploying your application to App Engine

When you deploy your application files, your website will be uploaded to App Engine.
To deploy your app, run the following command from within the root directory of your
application where the app.yaml file is located:

gcloud app deploy

Optional flags:

Include the --project flag to specify an alternate Cloud Console project ID to what you
initialized as the default in the gcloud tool. Example: --project [YOUR_PROJECT_ID]

Include the -v flag to specify a version ID, otherwise one is generated for you.
Example: -v [YOUR_VERSION_ID]

To learn more about deploying your app from the command line, see Deploying a Python
2 App (/appengine/docs/python/tools/uploadinganapp).

Viewing your application

To launch your browser and view the app at https://PROJECT_ID.REGION_ID.r.appspot.com
(#appengine-urls), run the following command:

gcloud app browse

RESULT:
Thus the web applications were launched using the GAE launcher successfully.

EX NO: 5  SIMULATE A CLOUD SCENARIO USING CLOUDSIM AND RUN A
SCHEDULING ALGORITHM THAT IS NOT PRESENT IN CLOUDSIM
DATE:

What is Cloudsim?
CloudSim is a simulation toolkit that supports the modelling and simulation of the core
functionality of a cloud, like job/task queues, processing of events, creation of cloud entities
(datacentre, datacentre brokers, etc.), communication between different entities,
implementation of broker policies, etc. This toolkit allows you to:

 Test application services in a repeatable and controllable environment.


 Tune the system bottlenecks before deploying apps in an actual cloud.
 Experiment with different workload mix and resource performance scenarios on
simulated infrastructure for developing and testing adaptive application provisioning
techniques

Core features of CloudSim are:

 Support for modelling and simulation of large-scale computing environments, such as
federated cloud data centres and virtualized server hosts, with customizable policies for
provisioning host resources to virtual machines, and energy-aware computational
resources.
 It is a self-contained platform for modelling a cloud's service brokers, provisioning,
and allocation policies.
 It supports the simulation of network connections among simulated system elements.
 Support for simulation of federated cloud environment, that inter-networks resources
from both private and public domains.
 Availability of a virtualization engine that aids in the creation and management of
multiple independent and co-hosted virtual services on a data centre node.
 Flexibility to switch between space shared and time-shared allocation of processing
cores to virtualized services.

PROGRAM:

import java.text.DecimalFormat;
import java.util.Calendar;
import java.util.List;

import org.cloudbus.cloudsim.Cloudlet;
import org.cloudbus.cloudsim.Datacenter;
import org.cloudbus.cloudsim.Log;
import org.cloudbus.cloudsim.Vm;
import org.cloudbus.cloudsim.core.CloudSim;

/**
* FCFS Task scheduling
* @author Linda J
*/
public class FCFS {

/** The cloudlet list. */


private static List<Cloudlet> cloudletList;

/** The vmlist. */


private static List<Vm> vmlist;

private static int reqTasks = 5;


private static int reqVms = 2;

/**
* Creates main() to run this example
*/
public static void main(String[] args) {

Log.printLine("Starting FCFS...");

try {
// First step: Initialize the CloudSim package. It should be
// called before creating any entities.
int num_user = 1; // number of cloud users
Calendar calendar = Calendar.getInstance();
boolean trace_flag = false; // means trace events

// Initialize the CloudSim library


CloudSim.init(num_user, calendar, trace_flag);

// Second step: Create Datacenters
// Datacenters are the resource providers in CloudSim. We need at least one
// of them to run a CloudSim simulation
@SuppressWarnings("unused")
Datacenter datacenter0 = createDatacenter("Datacenter_0");

//Third step: Create Broker


FcfsBroker broker = createBroker();
int brokerId = broker.getId();

//Fourth step: Create one virtual machine


vmlist = new VmsCreator().createRequiredVms(reqVms, brokerId);

//submit vm list to the broker


broker.submitVmList(vmlist);

//Fifth step: Create two Cloudlets


cloudletList = new CloudletCreator().createUserCloudlet(reqTasks, brokerId);

//submit cloudlet list to the broker


broker.submitCloudletList(cloudletList);
//call the scheduling function via the broker
broker.scheduleTaskstoVms();

CloudSim.startSimulation();

// Final step: Print results when simulation is over
List<Cloudlet> newList = broker.getCloudletReceivedList();

CloudSim.stopSimulation();

printCloudletList(newList);

Log.printLine("FCFS finished!");
}
catch (Exception e) {
e.printStackTrace();
Log.printLine("The simulation has been terminated due to an unexpected error");
}
}

private static Datacenter createDatacenter(String name){

Datacenter datacenter = new DataCenterCreator().createUserDatacenter(name, reqVms);

return datacenter;
}

// We strongly encourage users to develop their own broker policies, to submit vms
// and cloudlets according to the specific rules of the simulated scenario
private static FcfsBroker createBroker(){

FcfsBroker broker = null;


try {
broker = new FcfsBroker("Broker");
} catch (Exception e) {
e.printStackTrace();
return null;
}
return broker;
}

/**
* Prints the Cloudlet objects
* @param list list of Cloudlets
*/

private static void printCloudletList(List<Cloudlet> list) {
int size = list.size();
Cloudlet cloudlet;

String indent = " ";


Log.printLine();
Log.printLine("========== OUTPUT ==========");
Log.printLine("Cloudlet ID" + indent + "STATUS" + indent +
"Data center ID" + indent + "VM ID" + indent + "Time" + indent + "Start Time"
+ indent + "Finish Time");

DecimalFormat dft = new DecimalFormat("###.##");


for (int i = 0; i < size; i++) {
cloudlet = list.get(i);
Log.print(indent + cloudlet.getCloudletId() + indent + indent);

if (cloudlet.getCloudletStatus() == Cloudlet.SUCCESS){
Log.print("SUCCESS");

Log.printLine( indent + indent + cloudlet.getResourceId() + indent + indent +


indent + cloudlet.getVmId() +
indent + indent + dft.format(cloudlet.getActualCPUTime()) + indent + indent
+ dft.format(cloudlet.getExecStartTime())+
indent + indent + dft.format(cloudlet.getFinishTime()));
}
}

}
}
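The helper classes used above (FcfsBroker, CloudletCreator, VmsCreator, DataCenterCreator) are not reproduced in this record. The stand-alone sketch below illustrates the first-come-first-served policy that a broker method such as scheduleTaskstoVms() would apply: cloudlets are bound to VMs strictly in arrival order, cycling through the available VMs. All names here are hypothetical, and CloudSim is not required to run it:

```java
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;

public class FcfsSketch {
    // Bind each task to a VM in arrival order, cycling through the VM list.
    static Map<Integer, Integer> schedule(List<Integer> taskIds, List<Integer> vmIds) {
        Map<Integer, Integer> binding = new LinkedHashMap<>();
        int next = 0;
        for (int task : taskIds) {
            binding.put(task, vmIds.get(next)); // first come, first served
            next = (next + 1) % vmIds.size();   // move on to the next VM
        }
        return binding;
    }

    public static void main(String[] args) {
        List<Integer> tasks = List.of(0, 1, 2, 3, 4); // reqTasks = 5 in the program above
        List<Integer> vms = List.of(0, 1);            // reqVms = 2 in the program above
        schedule(tasks, vms).forEach((t, v) ->
                System.out.println("Cloudlet " + t + " -> VM " + v));
    }
}
```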

CONCLUSION:
Thus a cloud scenario was simulated using CloudSim and the scheduling algorithm was run successfully.

EX NO: 6  PROCEDURE TO TRANSFER THE FILES FROM ONE VIRTUAL
MACHINE TO ANOTHER VIRTUAL MACHINE
DATE:

AIM:
To write a procedure to transfer the files from one Virtual Machine to another
Virtual Machine.

PROCEDURE:

STEP:1 Select the VM and click File->Export Appliance

STEP:2 Select the VM to be exported and click NEXT.

STEP:3 Note the file path and click “Next”

STEP:4 Click “Export” ,The Virtual machine is being exported.

STEP:5 Install “ssh” to access the neighbour's VM.

STEP:6 Go to File->Computer:/home/sam/Documents/

STEP:7 Type the neighbour's URL: sftp://[email protected]._/

STEP:8 Give the password and get connected.

STEP:9 Select the VM and copy it in desktop.

STEP:10 Open VirtualBox and select File->Import Appliance->Browse

STEP:11 Select the VM to be imported and click “Open”.

STEP:12 Click “Next”

STEP:13 Click “Import”.

STEP:14 VM is being imported.

STEP:15 VM is imported.

RESULT:

Thus the files were transferred from one Virtual Machine to another Virtual Machine
successfully.

EX NO: 7  PROCEDURE TO INSTALL HADOOP SINGLE NODE CLUSTER AND
RUN SIMPLE APPLICATIONS LIKE WORD COUNT
DATE:

AIM:

To find the procedure to set up a single-node Hadoop cluster.

PROCEDURE:

sam@sysc40:~$ sudo apt-get update

sam@sysc40:~$ sudo apt-get install default-jdk

sam@sysc40:~$ java -version

openjdk version "1.8.0_131"

OpenJDK Runtime Environment (build 1.8.0_131-8u131-b11-0ubuntu1.16.04.2-b11)

OpenJDK 64-Bit Server VM (build 25.131-b11, mixed mode)

sam@sysc40:~$ sudo addgroup hadoop

Adding group `hadoop' (GID 1002) ...

Done.

sam@sysc40:~$ sudo adduser --ingroup hadoop hduser

Adding user `hduser' ...

Adding new user `hduser' (1002) with group `hadoop' ...

Creating home directory `/home/hduser' ...

Copying files from `/etc/skel' ...

Enter new UNIX password: \\Note: Enter any password and remember it; this is only
for UNIX (applicable for hduser)

Retype new UNIX password:

passwd: password updated successfully

Changing the user information for hduser

Enter the new value, or press ENTER for the default \\Note: Just enter your name and then

click enter button for remaining

Full Name []:

Room Number []:

Work Phone []:

Home Phone []:

Other []:

Is the information correct? [Y/n] y

sam@sysc40:~$ groups hduser

hduser : hadoop

sam@sysc40:~$ sudo apt-get install ssh

Reading package lists... Done

Building dependency tree

Reading state information... Done

The following NEW packages will be installed: ssh

0 upgraded, 1 newly installed, 0 to remove and 139 not upgraded.

Need to get 7,076 B of archives.

After this operation, 99.3 kB of additional disk space will be used.

Get:1 http://in.archive.ubuntu.com/ubuntu xenial-updates/main amd64 ssh all 1:7.2p2-4ubuntu2.2 [7,076 B]

Fetched 7,076 B in 0s (16.2 kB/s)

Selecting previously unselected package ssh.

(Reading database ... 233704 files and directories currently installed.)

Preparing to unpack .../ssh_1%3a7.2p2-4ubuntu2.2_all.deb ...

Unpacking ssh (1:7.2p2-4ubuntu2.2) ...

Setting up ssh (1:7.2p2-4ubuntu2.2) ...

sam@sysc40:~$ which ssh

/usr/bin/ssh

sam@sysc40:~$ which sshd

/usr/sbin/sshd

sam@sysc40:~$ su hduser

Password: \\Note: Enter the password that we have given above for hduser

hduser@sysc40:/home/sam$

hduser@sysc40:/home/sam$ cd

hduser@sysc40:~$ ssh-keygen -t rsa -P ""

Generating public/private rsa key pair.

Enter file in which to save the key (/home/hduser/.ssh/id_rsa): \\Note: Just click Enter button

Your identification has been saved in /home/hduser/.ssh/id_rsa.

Your public key has been saved in /home/hduser/.ssh/id_rsa.pub.

The key fingerprint is:

SHA256:QWYjqMI0g/ElhpXhVvgVITSn4O4HWS98MDqCX7Gsf/g hduser@sysc40

The key's randomart image is:

+---[RSA 2048]----+

|o+*=*.=o= |

|oOo=.=.= . |

|o Bo*. . |

|o+.*.* . |

|o.* * o S |

|+=o |

| + .. |

| o. . |

| .oE |

+----[SHA256]-----+

hduser@sysc40:~$ cat $HOME/.ssh/id_rsa.pub >> $HOME/.ssh/authorized_keys

hduser@sysc40:~$ ssh localhost

The authenticity of host 'localhost (127.0.0.1)' can't be established.

ECDSA key fingerprint is SHA256:+kILEX2sGtgsoPfCQ+Vw2cWHbbWGJt0qTEMu9tEvaX8.

Are you sure you want to continue connecting (yes/no)? yes

Warning: Permanently added 'localhost' (ECDSA) to the list of known hosts.

Welcome to Ubuntu 16.04.2 LTS (GNU/Linux 4.8.0-58-generic x86_64)

* Documentation: https://help.ubuntu.com

* Management: https://landscape.canonical.com

* Support:https://ubuntu.com/advantage

143 packages can be updated.

15 updates are security updates.

The programs included with the Ubuntu system are free software;

the exact distribution terms for each program are described in the

individual files in /usr/share/doc/*/copyright.

Ubuntu comes with ABSOLUTELY NO WARRANTY, to the extent permitted by

applicable law.

hduser@sysc40:~$ wget http://mirrors.sonic.net/apache/hadoop/common/hadoop-2.6.5/hadoop-2.6.5.tar.gz

--2017-07-21 11:17:53-- http://mirrors.sonic.net/apache/hadoop/common/hadoop-2.6.5/hadoop-2.6.5.tar.gz

Resolving mirrors.sonic.net (mirrors.sonic.net)... 69.12.162.27

Connecting to mirrors.sonic.net (mirrors.sonic.net)|69.12.162.27|:80... connected.
HTTP request sent, awaiting response... 200 OK
Length: 199635269 (190M) [application/x-gzip]

Saving to: 'hadoop-2.6.5.tar.gz.2'

hadoop-2.6.5.tar.gz 100%[===================>] 190.39M 180KB/s in 17m 4s

2017-07-21 11:34:58 (190 KB/s) - 'hadoop-2.6.5.tar.gz.2' saved [199635269/199635269]

hduser@sysc40:~$ tar xvzf hadoop-2.6.5.tar.gz

hadoop-2.6.5/

hadoop-2.6.5/include/

hadoop-2.6.5/include/hdfs.h

hadoop-2.6.5/include/Pipes.hh

hduser@sysc40:~/hadoop-2.6.5$ su sam

Password: sam123

sam@sysc40:/home/hduser/hadoop-2.6.5$ sudo adduser hduser sudo

[sudo] password for sam:

Adding user `hduser' to group `sudo' ...

Adding user hduser to group sudo

Done.

sam@sysc40:/home/hduser/hadoop-2.6.5$ su hduser

Password: \\Note: Enter the password that we have given above for hduser

hduser@sysc40:~/hadoop-2.6.5$ sudo mkdir /usr/local/hadoop

hduser@sysc40:~/hadoop-2.6.5$ sudo mv * /usr/local/hadoop

hduser@sysc40:~/hadoop-2.6.5$ sudo chown -R hduser:hadoop /usr/local/hadoop

hduser@sysc40:~/hadoop-2.6.5$ cd

hduser@sysc40:~$ update-alternatives --config java

There is only one alternative in link group java (providing /usr/bin/java): /usr/lib/jvm/java-8-
openjdk-amd64/jre/bin/java

Nothing to configure.

hduser@sysc40:~$ nano ~/.bashrc

Add the below content at the end of the file and save it

#HADOOP VARIABLES START

export JAVA_HOME=/usr/lib/jvm/java-8-openjdk-amd64
export HADOOP_INSTALL=/usr/local/hadoop

export PATH=$PATH:$HADOOP_INSTALL/bin

export PATH=$PATH:$HADOOP_INSTALL/sbin

export HADOOP_MAPRED_HOME=$HADOOP_INSTALL

export HADOOP_COMMON_HOME=$HADOOP_INSTALL

export HADOOP_HDFS_HOME=$HADOOP_INSTALL

export YARN_HOME=$HADOOP_INSTALL

export HADOOP_COMMON_LIB_NATIVE_DIR=$HADOOP_INSTALL/lib/native

export HADOOP_OPTS="-Djava.library.path=$HADOOP_INSTALL/lib"
#HADOOP VARIABLES END

hduser@sysc40:~$ source ~/.bashrc

hduser@sysc40:~$ javac -version

javac 1.8.0_131

hduser@sysc40:~$ which javac

/usr/bin/javac

hduser@sysc40:~$ readlink -f /usr/bin/javac

/usr/lib/jvm/java-8-openjdk-amd64/bin/javac

hduser@sysc40:~$ nano /usr/local/hadoop/etc/hadoop/hadoop-env.sh

Add the below line at the end of the file:

export JAVA_HOME=/usr/lib/jvm/java-8-openjdk-amd64

hduser@sysc40:~$ sudo mkdir -p /app/hadoop/tmp

hduser@sysc40:~$ sudo chown hduser:hadoop /app/hadoop/tmp

hduser@sysc40:~$ nano /usr/local/hadoop/etc/hadoop/core-site.xml

Add the below line inside the <configuration></configuration> tag.

<configuration>

<property>

<name>hadoop.tmp.dir</name>

<value>/app/hadoop/tmp</value>

<description>A base for other temporary directories.</description>

</property>

<property>

<name>fs.default.name</name>

<value>hdfs://localhost:54310</value>

<description>The name of the default file system. A URI whose

scheme and authority determine the FileSystem implementation. The

uri's scheme determines the config property (fs.SCHEME.impl) naming

the FileSystem implementation class. The uri's authority is used to

determine the host, port, etc. for a filesystem.</description>

</property>

</configuration>

hduser@sysc40:~$ cp /usr/local/hadoop/etc/hadoop/mapred-site.xml.template /usr/local/hadoop/etc/hadoop/mapred-site.xml

hduser@sysc40:~$ nano /usr/local/hadoop/etc/hadoop/mapred-site.xml

Add the below line inside the <configuration></configuration> tag.

<configuration>

<property>

<name>mapred.job.tracker</name>

<value>localhost:54311</value>

<description>The host and port that the MapReduce job tracker

runs at. If "local", then jobs are run in-process as a single map and

reduce task.

</description>

</property>

</configuration>

hduser@sysc40:~$ sudo mkdir -p /usr/local/hadoop_store/hdfs/namenode

hduser@sysc40:~$ sudo mkdir -p /usr/local/hadoop_store/hdfs/datanode

hduser@sysc40:~$ sudo chown -R hduser:hadoop /usr/local/hadoop_store

hduser@sysc40:~$ nano /usr/local/hadoop/etc/hadoop/hdfs-site.xml

<configuration>

<property>

<name>dfs.replication</name>

<value>1</value>

<description>Default block replication.

The actual number of replications can be specified when the file is created.

The default is used if replication is not specified in create time.

</description>

</property>

<property>

<name>dfs.namenode.name.dir</name>

<value>file:/usr/local/hadoop_store/hdfs/namenode</value>

</property>

<property>

<name>dfs.datanode.data.dir</name>

<value>file:/usr/local/hadoop_store/hdfs/datanode</value>

</property>

</configuration>

hduser@sysc40:~$ hadoop namenode -format

DEPRECATED: Use of this script to execute hdfs command is deprecated.

Instead use the hdfs command for it.

16/11/10 13:07:15 INFO namenode.NameNode: STARTUP_MSG:

/************************************************************

STARTUP_MSG: Starting NameNode
STARTUP_MSG: host = laptop/127.0.1.1

STARTUP_MSG: args = [-format]

STARTUP_MSG: version = 2.6.5

...

...

...

16/11/10 13:07:23 INFO util.ExitUtil: Exiting with status 0

16/11/10 13:07:23 INFO namenode.NameNode: SHUTDOWN_MSG:

/************************************************************

SHUTDOWN_MSG: Shutting down NameNode at laptop/127.0.1.1

************************************************************/

Starting Hadoop

Now it's time to start the newly installed single node cluster.

We can use start-all.sh or (start-dfs.sh and start-yarn.sh)

hduser@sysc40:~$ su sam

Password: sam123

sam@sysc40:/home/hduser$ cd

sam@sysc40:~$ cd /usr/local/hadoop/sbin

sam@sysc40:/usr/local/hadoop/sbin$ ls

distribute-exclude.sh start-all.cmd stop-balancer.sh

hadoop-daemon.sh start-all.sh stop-dfs.cmd

hadoop-daemons.sh start-balancer.sh stop-dfs.sh

hdfs-config.cmd start-dfs.cmd stop-secure-dns.sh

hdfs-config.sh start-dfs.sh stop-yarn.cmd

httpfs.sh start-secure-dns.sh stop-yarn.sh

kms.sh start-yarn.cmd yarn-daemon.sh

mr-jobhistory-daemon.sh start-yarn.sh yarn-daemons.sh

refresh-namenodes.sh stop-all.cmd

slaves.sh stop-all.sh

sam@sysc40:/usr/local/hadoop/sbin$ sudo su hduser

[sudo] password for sam: sam123

Start NameNode daemon and DataNode daemon:

hduser@sysc40:/usr/local/hadoop/sbin$ start-dfs.sh

16/11/10 14:51:44 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable

Starting namenodes on [localhost]

localhost: starting namenode, logging to /usr/local/hadoop/logs/hadoop-hduser-namenode-laptop.out
localhost: starting datanode, logging to /usr/local/hadoop/logs/hadoop-hduser-datanode-laptop.out

Starting secondary namenodes [0.0.0.0]


The authenticity of host '0.0.0.0 (0.0.0.0)' can't be established.

ECDSA key fingerprint is SHA256:e9SM2INFNu8NhXKzdX9bOyKIKbMoUSK4dXKonloN7JY.

Are you sure you want to continue connecting (yes/no)? yes

0.0.0.0: Warning: Permanently added '0.0.0.0' (ECDSA) to the list of known hosts.

0.0.0.0: starting secondarynamenode, logging to /usr/local/hadoop/logs/hadoop-hduser-secondarynamenode-laptop.out

16/11/10 14:52:24 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable

hduser@sysc40:/usr/local/hadoop/sbin$ start-yarn.sh

starting yarn daemons

starting resourcemanager, logging to /usr/local/hadoop/logs/yarn-hduser-resourcemanager-laptop.out
localhost: starting nodemanager, logging to /usr/local/hadoop/logs/yarn-hduser-nodemanager-laptop.out

hduser@sysc70:/usr/local/hadoop/sbin$ jps

14306 DataNode
14660 ResourceManager

14505 SecondaryNameNode

14205 NameNode

14765 NodeManager

15166 Jps

hduser@laptop:/usr/local/hadoop/sbin$ stop-dfs.sh

16/11/10 15:23:20 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable

Stopping namenodes on [localhost]

localhost: stopping namenode


localhost: stopping datanode

Stopping secondary namenodes [0.0.0.0]

0.0.0.0: stopping secondarynamenode

16/11/10 15:23:52 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable

hduser@laptop:/usr/local/hadoop/sbin$ stop-yarn.sh

stopping yarn daemons

stopping resourcemanager

localhost: stopping nodemanager

no proxyserver to stop

Hadoop Web Interfaces

hduser@laptop:/usr/local/hadoop/sbin$ start-dfs.sh

hduser@laptop:/usr/local/hadoop/sbin$ start-yarn.sh

Type http://localhost:50070/ into your browser to see the web UI of the NameNode daemon. In the Overview tab, you can see the Overview, Summary, NameNode Journal Status and NameNode Storage information.

Type http://localhost:50090/status.jsp as the URL to get the SecondaryNameNode status page:

The default port number to access all the applications of the cluster is 8088. Use the following URL to visit the Resource Manager: http://localhost:8088/

Click the Nodes option in the left Cluster panel; it will show the node that we have created.
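With the daemons running, the experiment calls for executing a simple wordcount job. On the cluster this is typically done with the bundled examples jar, e.g. hadoop fs -mkdir /input, hadoop fs -put somefile.txt /input, then hadoop jar /usr/local/hadoop/share/hadoop/mapreduce/hadoop-mapreduce-examples-2.6.5.jar wordcount /input /output (the exact jar path may differ on your install). The map/reduce logic that the wordcount example performs can be sketched in plain Python as follows; this is an illustration of the algorithm, not the Hadoop code itself:

```python
# Sketch of the wordcount MapReduce logic: the map phase emits
# (word, 1) pairs, the shuffle groups pairs by key, and the reduce
# phase sums the counts for each distinct word.
from collections import defaultdict

def map_phase(line):
    """Emit a (word, 1) pair for every whitespace-separated token."""
    return [(word, 1) for word in line.split()]

def reduce_phase(pairs):
    """Group pairs by word and sum their counts (shuffle + reduce)."""
    counts = defaultdict(int)
    for word, n in pairs:
        counts[word] += n
    return dict(counts)

if __name__ == "__main__":
    lines = ["hello hadoop", "hello cloud"]
    pairs = [p for line in lines for p in map_phase(line)]
    print(reduce_phase(pairs))  # {'hello': 2, 'hadoop': 1, 'cloud': 1}
```

On a real cluster the map tasks run in parallel on HDFS blocks and the framework performs the shuffle, but the per-record logic is exactly this simple.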

RESULT:

Thus a Hadoop single-node cluster has been created.

EX NO: 8. CREATING AND EXECUTING YOUR FIRST CONTAINER
USING DOCKER.
DATE:

AIM:
To create and execute your first container using Docker.

PROCEDURE

Follow the installation instructions specific to your operating system. Here are the general steps for the most popular operating systems:

Windows:

a. Visit the Docker website (https://www.docker.com/) and navigate to the "Get Docker" section.

b. Click on the "Download for Windows" button to download the Docker Desktop installer.

c. Run the installer and follow the prompts. During the installation, Docker may require you to enable
Hyper-V and Containers features, so make sure to enable them if prompted.

d. Once the installation is complete, Docker Desktop will be installed on your Windows machine. You
can access it from the Start menu or the system tray.

Mac:

a. Visit the Docker website (https://www.docker.com/) and navigate to the "Get Docker" section.

b. Click on the "Download for Mac" button to download the Docker Desktop installer.

c. Run the installer and drag the Docker icon to the Applications folder to install Docker Desktop.

d. Launch Docker Desktop from the Applications folder or the Launchpad. It will appear in the status
bar at the top of your screen.

Linux:

Docker supports various Linux distributions. The exact installation steps may vary based on your
distribution. Here's a general outline:

a. Visit the Docker website (https://www.docker.com/) and navigate to the "Get Docker" section.

b. Click on the "Download for Linux" button.

c. Docker provides installation instructions for various Linux distributions such as Ubuntu, CentOS,
Debian, Fedora, and more. Follow the instructions specific to your distribution.

d. Once Docker is installed, start the Docker service using the appropriate command for your Linux
distribution.

After completing the installation, you can open a terminal or command prompt and run the docker --version command to verify that Docker is installed correctly. It should display the version of Docker installed on your system.

That's it! You now have Docker installed on your machine and can start using it to manage containers.

Install Docker: First, you need to install Docker on your machine. Docker provides platform-specific
installation instructions on their website for different operating systems. Follow the instructions to install
Docker for your particular OS.

Docker Image: Docker containers are created based on Docker images. An image is a lightweight,
standalone, and executable package that includes everything needed to run a piece of software, including
the code, runtime, libraries, and system tools. Docker Hub (hub.docker.com) is a popular online
repository of Docker images. You can search for existing images on Docker Hub or create your own. For
this example, we'll use an existing image.

Pull an Image: Open your terminal or command prompt and execute the following command to pull an
existing Docker image from Docker Hub. We'll use the official hello-world image as an example:

docker pull hello-world

Docker will download the image from the Docker Hub repository.

Run a Container: Once you have the Docker image, you can create and run a container based on that
image. Execute the following command to run the hello-world container:

docker run hello-world

Docker will create a container from the image and execute it. The container will print a "Hello from Docker!" message along with some information about your Docker installation.

Note: If you haven't pulled the hello-world image in the previous step, Docker will automatically
download it before running the container.

Congratulations! You've created and executed your first Docker container. Docker will handle the
container lifecycle, including starting, stopping, and managing resources for you. This simple example
demonstrates the basic concept of running a container using Docker.

You can explore further by trying out different Docker images and running more complex applications
within containers.

Step 1: Verify Docker Installation

Before you begin, make sure Docker is installed and running correctly. Open a terminal (or PowerShell on Windows) and run the following command:

docker --version

You should see the Docker version information if it's installed correctly.

Step 2: Pull a Docker Image

Docker containers are created from Docker images. You can think of an image as a blueprint for a container. To get started, let's pull a simple image. Open your terminal and run the following command:

docker pull hello-world

Step 3: Run the Container

Create and run a container from the pulled image:

docker run hello-world

The output should contain a message that indicates the container is running, and it will explain how Docker works.

Step 4: List Running Containers

To check which containers are currently running, you can use the following command:

docker ps

Add the -a flag (docker ps -a) to list all containers, including stopped ones.
RESULT:
Thus the first container using Docker was created and executed successfully.

EX NO: 9 RUN A CONTAINER FROM DOCKER HUB
DATE:

AIM:
To run a container from docker hub

PROCEDURE:

Run a Container from Docker Hub


To run a container from Docker Hub, you need to follow these steps:

Search for an Image: Visit the Docker Hub website (https://hub.docker.com/) and use the search bar to
find the image you want to run. You can search for popular images like nginx, mysql, redis, etc., or
specific images based on your requirements.

Pull the Image: Once you've found the desired image, open a terminal or command prompt and execute
the following command to pull the image from Docker Hub:

docker pull <image_name>
Replace <image_name> with the name of the image you want to pull. For example, if you want to pull
the nginx image, you would use:

docker pull nginx


Docker will download the image and store it on your local machine.

Run the Container: After pulling the image, you can create and run a container based on that image. Use
the following command:

docker run <image_name>
Replace <image_name> with the name of the image you pulled. For example:

docker run nginx
Docker will create a container from the image and start it. The container will run the default command
specified in the image, such as starting a web server, database, or any other application.

Note: By default, Docker will allocate a random port on your host machine and map it to the container's
exposed ports. If you want to specify a specific port mapping, you can use the -p option. For example:

docker run -p 8080:80 nginx
This will map port 8080 on your host machine to port 80 inside the container.

Interact with the Container: Once the container is running, you can interact with it as needed. For
example, if you ran an nginx container, you can access it in your web browser by visiting http://localhost
or http://<your_host_ip> (if you specified a port mapping).

To stop the container, you can use the docker stop command followed by the container ID or name:

docker stop <container_id_or_name>
You can list the running containers using the docker ps command and stop them as necessary.

(or)

Flow-1: Pull Docker Image from Docker Hub and Run it

Step-1: Verify Docker version and also login to Docker Hub

docker version
docker login


Step-2: Pull Image from Docker Hub

docker pull stacksimplify/dockerintro-springboot-helloworld-rest-api:1.0.0-RELEASE

Step-3: Run the downloaded Docker Image & Access the Application

Copy the docker image name from Docker Hub

docker run --name app1 -p 80:8080 -d stacksimplify/dockerintro-springboot-helloworld-rest-api:1.0.0-RELEASE

http://localhost/hello

Step-4: List Running Containers

docker ps
docker ps -a
docker ps -a -q


Step-5: Connect to Container Terminal

docker exec -it <container-name> /bin/sh

Step-6: Container Stop, Start

docker stop <container-name>
docker start <container-name>

OUTPUT:

RESULT:
Thus a container was run from Docker Hub successfully.

ADDITIONAL EXERCISES

EX NO: 1 CLIENT SERVER COMMUNICATION BETWEEN TWO


VIRTUAL MACHINE INSTANCES, EXECUTION OF CHAT
DATE: APPLICATION

AIM:
To create communication between two virtual machines in a virtual environment.

PROCEDURE:

Step 1: Install two guest operating systems on a single VirtualBox installation

Step 2: Then set up internal networking between them with the following steps:

OS -> Settings -> Network -> Internal Network -> intnet

Step 3: Connect the two machines internally

Step 4: Run the virtual machines

Step 5: Open terminal in one VM, give ifconfig command

Step 6: Then ping the Ip of one machine in the other terminal

ping 10.0.2.10

Step 7: Then run the communication between the terminals
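Once the two VMs can ping each other, the chat itself can be run over a TCP socket. The following is a minimal Python sketch of such a chat exchange; the message text is illustrative, and for testability it binds to 127.0.0.1 on an OS-assigned port, whereas on real VMs you would run the server half on one machine, agree on a fixed port, and have the client connect to the server VM's address (e.g. 10.0.2.10):

```python
import socket
import threading

HOST = "127.0.0.1"  # on real VMs, the client uses the server VM's IP, e.g. 10.0.2.10

def run_server(ready, port_box):
    """Accept one client, receive a message, and echo back a reply."""
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as srv:
        srv.bind((HOST, 0))                     # 0 = let the OS pick a free port;
        port_box.append(srv.getsockname()[1])   # on real VMs use a fixed, agreed port
        srv.listen(1)
        ready.set()                             # signal that the server is listening
        conn, _ = srv.accept()
        with conn:
            msg = conn.recv(1024).decode()
            conn.sendall(("got: " + msg).encode())

def chat_once(message):
    """Start the server in a thread, send one message as the client, return the reply."""
    ready, port_box = threading.Event(), []
    t = threading.Thread(target=run_server, args=(ready, port_box))
    t.start()
    ready.wait()
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as cli:
        cli.connect((HOST, port_box[0]))
        cli.sendall(message.encode())
        reply = cli.recv(1024).decode()
    t.join()
    return reply

if __name__ == "__main__":
    print(chat_once("hello from VM2"))  # got: hello from VM2
```

A full chat application would loop on send/receive in both directions, but this round trip is the core of the client-server communication the experiment demonstrates.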

RESULT:
Thus the communication between two virtual machines in a virtual environment was created successfully.

EX NO: 2 STUDY AND IMPLEMENTATION OF STORAGE AS A
DATE: SERVICE

AIM:
To study and implementation of Storage as a Service.

PROCEDURE:

Step 1: Sign into the Google Drive website with your Google account.
If you don't have a Google account, you can create one for free. Google Drive will allow you to store your files in the cloud, as well as create documents and forms through the Google Drive web interface.

Step 2: Add files to your drive.


There are two ways to add files to your drive. You can create Google Drive documents, or
you can upload files from your computer. To create a new file, click the CREATE button. To
upload a file, click the “Up Arrow” button next to the CREATE button.

Step 3: Change the way your files are displayed.
You can choose to display files by large icons (Grid) or as a list (List). The List mode will
show you at a glance the owner of the document and when it was last modified. The Grid
mode will show each file as a preview of its first page. You can change the mode by clicking
the buttons next to the gear icon in the upper right corner of the page. (List Mode)

Step 4: Use the navigation bar on the left side to browse your files.
“My Drive” is where all of your uploaded files and folders are stored. “Shared with Me” are
documents and files that have been shared with you by other Drive users. “Starred” files are

files that you have marked as important, and “Recent” files are the ones you have most
recently edited.
•You can drag and drop files and folders around your Drive to organize them as you see fit.
•Click the Folder icon with a “+” sign to create a new folder in your Drive. You can create
folders inside of other folders to organize your files.

Step 5: Search for files.


You can search through your Google Drive documents and folders using the search bar at the
top of your page. Google Drive will search through titles, content, and owners. If a file is
found with the exact term in the title, it will appear under the search bar as you type so that
you can quickly select it.

Step 1: Click the NEW button.
A menu will appear that allows you to choose what type of document you want to create.

Step 2: Create a new file.


Once you've selected your document type, you will be taken to your blank document. If you chose Google Docs/Sheets/Slides, you will be greeted by a wizard that will help you configure the feel of your document.

You have several options by default, and more can be added by clicking the "More" link at the bottom of the menu:

Step 3: Name the file.


At the top of the page, click the italic gray text that says “Untitled <file type>”. When you
click it, the “Rename document” window will appear, allowing you to change the name of
your file.

Step 4: Edit your document.
Begin writing your document as you would in its commercial equivalent. You will most likely find that Google Drive has most of the basic features, but advanced features you may be used to are not available.
1. Your document saves automatically as you work on it.

Step 5: Export and convert the file.


If you want to make your file compatible with similar programs, click File and place your
cursor over “Download As”. A menu will appear with the available formats. Choose the
format that best suits your needs. You will be asked to name the file and select a download
location. When the file is downloaded, it will be in the format you chose.

Step 6: Share your document.
Click File and select Share, or click the blue Share button in the upper right corner to open
the Sharing settings. You can specify who can see the file as well as who can edit it.

Other Capabilities:
1. Edit photos
2. Listen to music
3. Make drawings
4. Merge PDFs

CONCLUSION:
Google Docs provides an efficient way to store data. It fits well in Storage as a Service (SaaS). It has varied options to create documents, presentations and spreadsheets. It saves documents automatically every few seconds, and they can be shared anywhere on the Internet at the click of a button.

EX NO: 3 STUDY OF AMAZON WEB SERVICES
DATE:

AIM:

To Study of Amazon Web Services

PROCEDURE:

Security using MFA(Multi Factor Authentication) device code:


Step 1: Go to aws.amazon.com and click on "My Account"
Step 2: Select "AWS management console" and click on it.

Step 3: Give an Email id in the required field. If you are registering for the first time, select the "I am a new user" radio button and click on the "Sign in using our secure server" button.
Step 4: Again go to "My Account", select "AWS management console" and click on it. Sign in again by entering the user name and valid password (check the "I am a returning user and my password is" radio button).
Step 5: All AWS projects can be viewed by you, but you can't make any changes in them or create new things, as you are not paying any charges to Amazon.
Step 6: To create a user under the root user, follow the steps mentioned below:
1) Click on "Identity and Access Management" in the Security and Identity section
2) Click on "Users" from the dashboard. It will take you to "Create New Users"; click on the create new user button, enter the "User Name" and click on the "Create" button at the right bottom
3) Once the user is created, click on it
4) Go to the security credentials tab
5) Click on "Create Access Key"; it will create an access key for the user.
6) Click on "Manage MFA device"; it will display a QR code on the screen. You need to scan that QR code with your mobile phone using a barcode scanner (install one on the phone). You also need to install "Google Authenticator" on your mobile phone to generate the MFA code

7) Google Authenticator will keep generating a new MFA code every 30 seconds; you will have to enter that code while logging in as a user. Hence, security is maintained by the MFA device code:
one cannot use your AWS account even with your user name and password, because the MFA code is on your MFA device (mobile phone in this case) and it changes every 30 seconds.
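The rotating code that Google Authenticator displays is a standard TOTP value (RFC 6238): an HMAC-SHA1 over a time-step counter, derived from the shared secret encoded in the QR code. A minimal Python sketch of the algorithm follows; the secret used in the demo is the RFC 6238 test key, not a real AWS MFA seed:

```python
import base64
import hashlib
import hmac
import struct
import time

def totp(secret_b32, at_time=None, step=30, digits=6):
    """Compute an RFC 6238 TOTP code (SHA-1 variant, as used by Google Authenticator)."""
    key = base64.b32decode(secret_b32, casefold=True)
    counter = int((time.time() if at_time is None else at_time) // step)
    msg = struct.pack(">Q", counter)              # 8-byte big-endian time-step counter
    digest = hmac.new(key, msg, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                    # dynamic truncation (RFC 4226)
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

if __name__ == "__main__":
    # RFC 6238 test secret "12345678901234567890" (base32-encoded), at time = 59 s
    secret = base64.b32encode(b"12345678901234567890").decode()
    print(totp(secret, at_time=59, digits=8))  # 94287082
```

Because server and phone compute the same function over the same secret and clock, the code verifies the user without ever transmitting the secret, which is why the MFA device blocks an attacker who only knows the password.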
Step 7: Permissions in user account:
After creating the user by following the above-mentioned steps, you can give certain permissions to a specific user:
1) Click on the created user
2) Go to the "Permissions" tab
3) Click on the "Attach Policy" button
4) Click on apply.

SAMPLE OUTPUT:

Click on "My Account". Select "AWS management console" and click on it. Give an Email id in the required field.

Addition of security features

Sign in to an AWS account

Creation of users

Adding users to group

Creating Access key

Setting permissions to users

CONCLUSION:

We have studied how to secure the cloud and its data. Amazon AWS provides strong security with its extended facilities and services like the MFA device. It also gives you the ability to add your own permissions and policies to keep data more secure.

