Unit-I CC


UNIT-1

Understanding Abstraction and Virtualization
Contents
 Using Virtualization Technologies
 Load Balancing and Virtualization
 Understanding Hypervisors
 Understanding Machine Imaging
Introduction to Cloud Computing

 Cloud Computing is the delivery of computing services such as servers, storage, databases, networking, software, analytics, intelligence, and more, over the Cloud (Internet).
 Cloud Computing provides an alternative to the on-
premises datacentre.
 With an on-premises datacentre, we have to manage tasks such as purchasing and installing hardware, virtualization, installing the operating system and any other required applications, setting up the network, configuring the firewall, and setting up storage for data.
 With cloud computing, the cloud vendor is responsible for hardware purchase and maintenance.
 The vendor also provides a wide variety of software and platform services.
 We can take any required services on rent. The cloud
computing services will be charged based on usage.
The cloud environment provides an easily accessible online portal that makes it easy for users to manage compute, storage, network, and application resources.
Some cloud service providers are shown in the following figure.
Advantages of cloud computing
 Cost: It reduces the huge capital costs of buying hardware and
software.
 Speed: Resources can be accessed in minutes, typically within a few
clicks.
 Scalability: We can increase or decrease resources according to business requirements.
 Productivity: Cloud computing requires less operational effort: we do not need to apply patches or maintain hardware and software. In this way, the IT team can be more productive and focus on achieving business goals.
 Reliability: Backup and recovery of data are less expensive and very
fast for business continuity.
 Security: Many cloud vendors offer a broad set of policies, technologies, and controls that strengthen our data security.
Types of Cloud Computing
 Public Cloud: Cloud resources that are owned and operated by a third-party cloud service provider are termed public clouds. A public cloud delivers computing resources such as servers, software, and storage over the internet.
Examples:
• Amazon Web Services
• Microsoft Azure
• IBM Cloud
• Google Cloud Platform
• Oracle Cloud
Private Cloud: Cloud computing resources that are used exclusively by a single business or organization are termed a private cloud. A private cloud may be physically located in the company's on-site datacentre or hosted by a third-party service provider.
Examples:
•HPE (Hewlett Packard Enterprise)
•VMware
•Dell EMC
•Oracle
•IBM
Hybrid Cloud: A hybrid cloud is the combination of public and private clouds, bound together by technology that allows data and applications to be shared between them. A hybrid cloud gives a business more flexibility and more deployment options.
Types of Cloud Services
Three types of cloud services:
IaaS (Infrastructure as a Service)
PaaS (Platform as a Service)
SaaS (Software as a Service)
• Infrastructure as a Service (IaaS): In IaaS, we rent IT infrastructure such as servers, virtual machines (VMs), storage, networks, and operating systems from a cloud service vendor. We can create a VM running Windows or Linux and install anything we want on it. Using IaaS, we do not need to care about the hardware or the virtualization software, but we do have to manage everything else. IaaS gives us maximum flexibility, but we still need to put more effort into maintenance.

• Platform as a Service (PaaS): This service provides an on-demand environment for developing, testing, delivering, and managing software applications. The developer is responsible for the application, and the PaaS vendor provides the ability to deploy and run it. Using PaaS, flexibility is reduced, but the management of the environment is taken care of by the cloud vendor.

• Software as a Service (SaaS): SaaS provides centrally hosted and managed software services to end users. It delivers software over the internet, on demand, and typically on a subscription basis, e.g., Microsoft OneDrive, Dropbox, WordPress, Office 365, and Amazon Kindle. SaaS is used to minimize operational costs.
Understanding Abstraction and Virtualization

 Abstraction makes it possible to encapsulate the physical implementation so that the technical details may be concealed from the customers.

 Virtualization makes it possible to create a virtual representation of anything, including computer resources, a virtual computer hardware platform, or storage devices.
Virtualization
 Virtualization is a technology that allows creating an abstraction (a
virtual version) of computer resources, such as hardware
architecture, operating system, storage, network, etc. With this
abstraction, for example, a single machine can act like many
machines working independently.

 The usual goal of virtualization is to centralize administrative tasks while improving scalability and workloads.

 Virtualization is not a new concept or technology in computer science. The virtual machine concept has existed since the 1960s, when IBM first developed it to provide concurrent, interactive access to a mainframe computer.
• The Virtual Machine Monitor (VMM) is the primary software behind virtualization environments and implementations. When installed on a host machine, the VMM facilitates the creation of VMs, each with a separate operating system (OS) and applications. The VMM manages the backend operation of these VMs by allocating the necessary computing, memory, storage, and other input/output (I/O) resources.
• The VMM also provides a centralized interface for managing the overall operation, status, and availability of VMs installed on a single host or spread across different, interconnected hosts.
About API, ABI, and ISA
Application Binary Interface (ABI):
•Application Binary Interface works as an interface between the operating
system and application programs in the context of object/binary code.
•The ABI handles the following:
•Calling conventions
•Data types
•How function arguments are passed
•How function return values are retrieved
•Program libraries
•The binary format of object files
•Exception propagation
•Byte ordering
•Register use
Application Program Interface(API):
Application Program Interface works as an interface between
the operating system and application programs in the context of
source code.
Instruction Set Architecture (ISA):
•ISA stands for Instruction Set Architecture.
•The ISA is the visible part of the processor; programmers can see the ISA because it works as the boundary between hardware and software.
•The ISA works as an intermediate interface between computer software and computer hardware.
VIRTUALIZATION SCENARIOS
a) Server Consolidation: To consolidate workloads of multiple under-utilized
machines to fewer machines to save on hardware, management, and
administration of the infrastructure.

b) Application consolidation: A legacy application might require newer hardware and/or operating systems. The needs of such legacy applications can be served well by virtualizing the newer hardware and providing access to it.

c) Sandboxing: Virtual machines are useful for providing secure, isolated environments (sandboxes) for running foreign or less-trusted applications. Virtualization technology can thus help build secure computing platforms.

d) Multiple execution environments: Virtualization can be used to create multiple execution environments (in all possible ways) and can increase QoS by guaranteeing a specified amount of resources.
e) Virtual hardware: Virtualization can provide hardware one never had, e.g. virtual SCSI drives, virtual Ethernet adapters, virtual Ethernet switches and hubs, and so on.

f) Multiple simultaneous OSs: Virtualization can provide the facility of running multiple simultaneous operating systems, which can in turn run many different kinds of applications.

g) Debugging: Virtualization can help debug complicated software such as an operating system or a device driver by letting the user execute it on an emulated PC with full software control.

h) Software Migration: Virtualization eases the migration of software and thus helps mobility.
TRADITIONAL, HYBRID, AND HOSTED VMS

[Figure: (a) traditional, (b) hybrid, and (c) hosted VM architectures]

MORE VIRTUALIZATION TECHNIQUES

Virtualization techniques can be applied at different layers of the computer stack: the hardware layer (including resources such as the computer architecture, storage, network, etc.), the operating system layer, and the application layer. Examples of virtualization types are:
1)Emulation (EM)
2)Native Virtualization (NV)
3)Para virtualization (PV)
4)Operating System Level Virtualization (OSLV)
5)Resource Virtualization (RV)
6)Application Virtualization (AV)
EMULATION (EM)
1)A typical computer consists of processors, memory chips, buses, hard
drives, disk controllers, timers, multiple I/O devices, and so on.
2)An emulator tries to execute instructions issued by the guest machine (the
machine that is being emulated) by translating them to a set of native
instructions and then executing them on the available hardware.
3)A program can be run on different platforms, regardless of the processor
architecture or operating system (OS). EM provides flexibility in that the
guest OS may not have to be modified to run on what would otherwise be an
incompatible architecture.
4)The performance penalty involved in EM is significant because each instruction issued by the guest system must be translated into instructions for the host system.
NATIVE VIRTUALIZATION (NV):
1)In NV, a virtual machine is used to simulate a complete hardware environment so that an unmodified operating system for the same type of CPU can execute in complete isolation within the Virtual Machine Monitor (VMM, or hypervisor).
2)An important issue with this approach is that some CPU instructions require
additional privileges and may not be executed in user space thus requiring
the VMM to analyze executed code and make it safe on-the-fly.
3)NV can be seen as a middle ground between full emulation and paravirtualization, and requires no modification of the guest OS to enhance virtualization capabilities.
PARAVIRTUALIZATION (PV)
1)In this technique, a modified guest OS is able to speak directly to the VMM.
2)A successful paravirtualized platform may allow the VMM to be simpler (by relocating execution of critical tasks from the virtual domain to the host domain) and/or reduce the overall performance degradation of machine execution inside the virtual guest.
3)Paravirtualization requires the guest operating system to be explicitly ported for the paravirtualization API.
4)A conventional OS distribution that is not paravirtualization-aware cannot be run on top of a paravirtualizing VMM.
OPERATING SYSTEM LEVEL VIRTUALIZATION (OSLV)

1)A server virtualization method where the kernel of an operating system allows for multiple isolated user-space instances, instead of just one.
2)It provides the ability for user-space applications (that would run normally on the host OS) to run in isolation from other software.
3)Most implementations of this method can define resource
management for the isolated instances.
RESOURCE VIRTUALIZATION (RV)
A method in which specific resources of a host system are used by the guest OS. These may be software-based resources such as domain names, certificates, etc., or hardware-based, for example storage and network virtualization.
1)Storage Virtualization (SV): SV provides a single logical disk built from many different systems that may be connected by a network. This virtual disk can then be made available to host or guest OSs. Storage systems can provide either block-accessed storage or file-accessed storage.
2)Network Virtualization (NV): The process of combining hardware and software network resources and network functionality into a single, software-based administrative entity: a virtual network.
APPLICATION VIRTUALIZATION (AV)

1)Refers to software technologies that improve the portability, manageability, and compatibility of applications by encapsulating them from the underlying operating system on which they are executed.
2)The Java Virtual Machine (JVM), Microsoft .NET CLR are
examples of this type of virtualization.
Key Enablers of virtualization
Virtualization in the cloud is the key enabler of the first four of five key attributes of cloud computing:
1) Service-based: A service-based architecture is one where clients are abstracted from service providers through service interfaces.
2) Scalable and elastic: Services can be altered to affect capacity and performance on demand.
3) Shared services: Resources are pooled in order to create greater efficiencies.
4) Metered usage: Services are billed on a usage basis.
5) Internet delivery: The services provided by cloud computing are based on Internet protocols and formats.
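The metered-usage attribute above can be illustrated with a small billing sketch; the resource names and per-unit rates below are hypothetical, not any vendor's actual pricing:

```python
# Hypothetical per-unit rates for three metered resources.
RATES = {"cpu_hours": 0.05, "gb_storage": 0.02, "gb_transfer": 0.09}

def monthly_bill(usage):
    """Sum usage * rate over each metered resource, rounded to cents."""
    return round(sum(RATES[res] * qty for res, qty in usage.items()), 2)

bill = monthly_bill({"cpu_hours": 200, "gb_storage": 50, "gb_transfer": 10})
print(bill)  # 200*0.05 + 50*0.02 + 10*0.09 = 11.9
```

A tenant is charged only for what it consumed; idle capacity costs nothing, which is the essence of usage-based billing.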
Load Balancing

Load balancing means the ability to distribute a workload across multiple computing resources for an overall performance increase.
It represents the ability to transfer any portion of the processing for a system request to another independent system that will handle it concurrently, e.g. a web or database server.
Cloud computing provides services with the help of the internet. No matter where you access a service, you are directed to the available resources.
The technology used to distribute service requests to resources is referred to as load balancing.
Load balancing can be implemented in hardware or in software. With load balancing, reliability is increased by using multiple components instead of a single component.
Load Balancing and Virtualization

1) Optimization technique
2) Increased resource utilization
3) Lower latency
4) Reduced response time
5) Avoidance of system overload
6) Maximized throughput
7) Increased reliability
The different network resources that can be load balanced are
as follows:
1.Storage resources
2.Connections through intelligent switches
3.Processing through computer system assignment
4.Access to application instances
5.Network interfaces and services such as DNS, FTP, and HTTP
In load balancing, scheduling algorithms are used to assign resources. The scheduling algorithms in use include round robin and weighted round robin, fastest response time, least connections and weighted least connections, and custom assignments.
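Two of the scheduling algorithms named above, round robin and weighted round robin, can be sketched in a few lines; the server names and weights are invented for illustration:

```python
from itertools import cycle

def round_robin(servers):
    """Round robin: hand out servers in strict rotation."""
    return cycle(servers)

def weighted_round_robin(weights):
    """Weighted round robin: repeat each server in proportion to its
    weight within one rotation, so bigger servers get more requests."""
    expanded = [server for server, w in weights.items() for _ in range(w)]
    return cycle(expanded)

rr = round_robin(["web1", "web2"])
print([next(rr) for _ in range(4)])   # ['web1', 'web2', 'web1', 'web2']

wrr = weighted_round_robin({"big": 2, "small": 1})
print([next(wrr) for _ in range(6)])  # ['big', 'big', 'small', 'big', 'big', 'small']
```

Algorithms such as least connections need live server state (current connection counts) rather than a fixed rotation, which is why they are usually implemented inside the balancer itself.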
It is the responsibility of the load balancer to listen for service requests.
When a service request arrives, the load balancer uses a scheduling algorithm to assign a resource to that particular request.
A load balancer acts as a workload manager.
The load balancer generates a session ticket for a particular client so that subsequent requests from the same client can be routed to the same resource.
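One simple way a balancer can honor a session ticket is to hash the ticket to a fixed backend, so every repeat request from that client lands on the same resource; the backend pool here is hypothetical, a minimal sketch rather than a production design:

```python
import hashlib

SERVERS = ["app1", "app2", "app3"]  # hypothetical backend pool

def route(session_ticket):
    """Map a session ticket deterministically onto one backend, so
    repeat requests carrying the same ticket hit the same server."""
    digest = hashlib.sha256(session_ticket.encode()).hexdigest()
    return SERVERS[int(digest, 16) % len(SERVERS)]

first = route("client-42-session")
# Any later request with the same ticket is routed identically.
assert route("client-42-session") == first
```

Real balancers often keep an explicit session table instead, since pure hashing reshuffles clients when the pool size changes.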
Understanding Machine Imaging
A machine image is a Compute Engine resource that stores all the
configuration, metadata, permissions, and data from multiple disks of a
virtual machine (VM) instance. You can use a machine image in many system
maintenance, backup and recovery, and instance cloning scenarios.
Machine imaging is a process used to provide system portability and to provision and deploy systems in the cloud by capturing the state of systems in a system image.
A system image makes a copy or clone of the entire computer system inside a single file. The image is made by using a system imaging program and can be used later to restore the system.
For example, the Amazon Machine Image (AMI) is a system image used in cloud computing.
Amazon Web Services uses AMIs to store copies of a virtual machine. An AMI is a file system image that contains an operating system, all device drivers, and any applications and state information that the working virtual machine would have.

The AMI files are encrypted and compressed for security purposes and stored in Amazon S3 (Simple Storage Service) buckets as a set of 10 MB chunks. Because machine images mostly run on virtualization platforms, they are also called virtual appliances, and running virtual machines are called instances.
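The chunked storage just described can be illustrated by splitting an image file into fixed-size pieces. This is only a sketch of the idea, not the actual AMI bundling format, and the image data below is fake:

```python
CHUNK_SIZE = 10 * 1024 * 1024  # 10 MB, matching the chunk size in the text

def split_into_chunks(data, chunk_size=CHUNK_SIZE):
    """Return a list of byte chunks; all but the last are exactly chunk_size."""
    return [data[i:i + chunk_size] for i in range(0, len(data), chunk_size)]

image = b"\x00" * (25 * 1024 * 1024)  # fake 25 MB image
chunks = split_into_chunks(image)
print(len(chunks), len(chunks[-1]))   # 3 chunks; the last one is 5 MB
```

Splitting a large image into fixed-size objects lets chunks be uploaded in parallel and retried individually on failure.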
Section-2(UNIT-1)
Topics:
Capacity Planning
Defining Baseline and metrics
Network capacity
Capacity Planning
Capacity planning seeks to match available resources to demand.
It determines whether the systems are working properly, measures their performance, determines usage patterns, and predicts future demand for cloud capacity.
It also supports planning for improvements and optimized performance.
The goal of capacity planning is to accommodate the workload, not to improve efficiency. Performance tuning and work optimization are not the major targets of capacity planners.
Capacity planning measures the maximum amount of work that a system can perform.
The capacity planning for cloud technology offers the systems
with more enhanced capabilities including some new challenges
over a purely physical system.
Goals of capacity planners
• Capacity planners try to find solutions that meet future demands on a system by providing additional capacity to fulfill those demands.
• Capacity planning and system optimization are two different concepts, and you must not mix them up. Performance and capacity are two different attributes of a system.
• 'Capacity' measures how much workload a system can hold, whereas 'performance' deals with the rate at which tasks get performed.
Capacity planning steps
1) Determine the characteristics of the present system.
2) Determine the working load for different resources in the system, such as CPU, RAM, network, etc.
3) Load the system until it is overloaded, and record what is required to maintain acceptable performance.
4) Predict the future based on older statistical reports and other factors.
5) Deploy resources to meet the predictions and calculations.
6) Repeat steps 1 through 5 as a loop.
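The prediction step in the loop above can be sketched as a simple linear extrapolation over past observations; the monthly peak-load figures are invented, and real planners would use richer models and seasonality:

```python
def linear_forecast(history, periods_ahead):
    """Fit a straight-line trend from the first to the last observation
    and extrapolate it forward by periods_ahead."""
    n = len(history)
    slope = (history[-1] - history[0]) / (n - 1)  # average change per period
    return history[-1] + slope * periods_ahead

peaks = [100, 110, 120, 130]          # peak requests/sec over 4 months
print(linear_forecast(peaks, 3))      # trend of +10/month -> 160.0
```

The forecast then feeds step 5: provision enough resources to cover the predicted peak plus a safety margin.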
Defining Baseline and Metrics
• In business, the current system capacity or workload should be determined as a measurable quantity over time.
• Many developers create cloud-based applications and websites based on a LAMP solution stack:
Linux: the operating system
Apache HTTP Server: the web server
MySQL: the database server
PHP (Hypertext Preprocessor): the scripting language
All four technologies are open-source software.
Baseline Measurements
There are two important overall workload metrics in this LAMP system:
•Page views or hits on the Web site, as measured in hits per second
•Transactions completed on the database server, as measured in transactions per second or perhaps queries per second
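Both baseline metrics are simple rates: a raw counter divided by the measurement window. The counts below are hypothetical, just to show the arithmetic:

```python
def rate_per_second(count, window_seconds):
    """Convert a raw event count over a window into a per-second rate."""
    return count / window_seconds

page_views = 540_000   # hits counted over the window (made-up figure)
transactions = 90_000  # DB transactions over the same window (made-up)
window = 3_600         # one hour, in seconds

print(rate_per_second(page_views, window))    # 150.0 hits/sec
print(rate_per_second(transactions, window))  # 25.0 transactions/sec
```

Recording these rates at regular intervals over weeks gives the baseline against which future load and capacity are compared.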
Load Testing
• Server administrators measure system metrics on servers under load to give capacity planners enough information to do meaningful capacity planning. Capacity planners should know how the load on the system will increase. Load testing needs to answer the following questions:
• What is the maximum load that the system can support?
• What bottleneck limits the current system's performance?
• Can the server's configuration be altered to make better use of capacity?
• How will the server perform compared with other servers having different characteristics?
Network Capacity Planning
• Network capacity planning is the process of
planning a network for utilization, bandwidth,
operations, availability and other network
capacity constraints.
• It is a type of network or IT management
process that assists network administrators in
planning for network infrastructure and
operations in line with current and future
operations.
Network capacity planning is generally done to identify
shortcomings or parameters that can affect the
network’s performance or availability within a
predictable future time, usually in years.
Typically, network capacity planning requires
information about:
•Current network traffic volumes
•Network utilization
•Type of traffic
•Capacity of current infrastructure
This analysis helps network administrators understand the
maximum capability of current resources and the amount of
new resources needed to cater to future requirements.
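A minimal sketch of this analysis: compute link utilization from current traffic and capacity, and flag links running hotter than a planning threshold. The traffic figures and the 80% threshold are illustrative assumptions:

```python
def utilization(traffic_mbps, capacity_mbps):
    """Fraction of link capacity consumed by current traffic."""
    return traffic_mbps / capacity_mbps

def needs_upgrade(traffic_mbps, capacity_mbps, threshold=0.8):
    """Flag links whose utilization exceeds the planning threshold."""
    return utilization(traffic_mbps, capacity_mbps) > threshold

print(utilization(600, 1000))    # 0.6 -> 60% of a 1 Gbps link
print(needs_upgrade(850, 1000))  # True: above the 80% threshold
```

Planners typically act well below 100% utilization because latency degrades sharply as links approach saturation; hence the headroom threshold.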
In addition to technical network infrastructure, network capacity
planning may also include planning for human resources that
will manage and/or monitor the network.
