CC Unit-4


UNIT-IV

• Now we will look at a real example of how to manage the life cycle of, provision, and migrate a virtual machine with the help of ConVirt, an open-source framework for managing open-source virtualization platforms such as Xen and KVM.
• In the cloud context, we shall discuss systems that provide virtual machine provisioning and migration services.
• Amazon EC2 is a widely known example of a vendor providing public cloud services.
• Eucalyptus and OpenNebula are two complementary, enabling open-source cloud technologies for building private, public, and hybrid cloud architectures.
• Eucalyptus is a system for implementing on-premise private and hybrid clouds using the hardware and software infrastructure that is already in place, without modification.
• The current interface to Eucalyptus is compatible with Amazon’s EC2, S3, and EBS interfaces, but the infrastructure is designed to support multiple client-side interfaces.
• Eucalyptus is implemented using commonly available Linux tools and basic Web services technologies.
• Eucalyptus adds capabilities such as end-user customization, self-service provisioning, and legacy application support to data center virtualization features, making IT customer service easier.
• OpenNebula is a virtual infrastructure manager that orchestrates storage, network, and virtualization technologies to enable the dynamic placement of multi-tier services on distributed infrastructures, combining both data center resources and remote cloud resources according to allocation policies.
• OpenNebula provides internal cloud administration and user interfaces for full management of the cloud platform.
 Amazon EC2 is a Web service that allows users to provision new machines into Amazon’s virtualized infrastructure in a matter of minutes; it reduces the time required to obtain and boot a new server.
 An EC2 instance is typically a virtual machine with a certain amount of RAM, CPU, and storage capacity.
 Once you create your AWS (Amazon Web Services) account, you can use the online AWS console, or simply download the offline command-line tools.
 Amazon EC2 provides its customers with three flexible purchasing models to make cost optimization easy: On-Demand instances, Reserved instances, and Spot instances.
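The trade-off between the three purchasing models can be sketched with a small cost comparison. The hourly rates and the reservation fee below are made-up illustrative numbers, not actual AWS prices, which vary by instance type, region, and over time:

```python
# Hypothetical hourly rates for one instance type; real prices vary by
# region and over time -- these numbers are illustrative only.
ON_DEMAND_RATE = 0.10     # $/hour, pay as you go
RESERVED_UPFRONT = 300.0  # one-time fee for a 1-year reservation
RESERVED_RATE = 0.04      # discounted $/hour with the reservation
SPOT_RATE = 0.03          # $/hour, interruptible spare capacity

def yearly_cost(hours_used):
    """Return (on_demand, reserved, spot) cost for the given usage."""
    on_demand = hours_used * ON_DEMAND_RATE
    reserved = RESERVED_UPFRONT + hours_used * RESERVED_RATE
    spot = hours_used * SPOT_RATE
    return on_demand, reserved, spot
```

With these sample rates, a lightly used instance (say 500 hours/year) is cheaper On-Demand than Reserved, while a continuously running one (8760 hours/year) amortizes the reservation fee and favors the Reserved model; Spot is cheapest of all but can be interrupted.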
Eucalyptus, OpenNebula, and Aneka.
Some of the Eucalyptus (Elastic Utility Computing Architecture for Linking Your Programs To Useful Systems) features are:
Interface compatibility with EC2 and S3 (both Web service and Query/REST interfaces).
Simple installation and deployment.
Support for most Linux distributions (source and binary packages).
Support for running VMs atop the Xen hypervisor or KVM.
Support for other kinds of VMs, such as VMware, is targeted for future releases.
Secure internal communication using SOAP with WS-Security.
A cloud administrator’s tool for system management and user accounting.
The ability to configure multiple clusters, each with private internal network addresses, into a single cloud.
 Eucalyptus aims at fostering research in models for service provisioning, scheduling, SLA formulation, and hypervisor portability.
• The Eucalyptus architecture constitutes each high-level system component as a stand-alone Web service, with the following high-level components: the Node Controller (NC), the Cluster Controller (CC), the Storage Controller (Walrus), and the Cloud Controller (CLC).
• OpenNebula is an open and flexible tool that fits into existing data center environments to build any type of cloud deployment.
• OpenNebula can be used primarily as a virtualization tool to manage your virtual infrastructure, which is usually referred to as a private cloud.
• OpenNebula supports a hybrid cloud to combine local infrastructure with public cloud-based infrastructure, enabling highly scalable hosting environments.
• OpenNebula also supports public clouds by providing cloud interfaces that expose its functionality for virtual machine, storage, and network management.
• OpenNebula is an open-source alternative to commercial tools for the dynamic management of VMs on distributed resources. It supports several research lines in advance reservation of capacity, probabilistic admission control, placement optimization, resource models for the efficient management of groups of virtual machines, elasticity support, and so on.
• Haizea is an open-source virtual machine-based lease management architecture developed by Sotomayor et al.
• It can be used as a scheduling backend for OpenNebula. Haizea uses
leases as a fundamental resource provisioning abstraction and implements
those leases as virtual machines, taking into account the overhead of
using virtual machines when scheduling leases.
• Haizea also provides advanced functionality such as:
 Advance reservation of capacity.
 Best-effort scheduling with backfilling.
 Resource preemption (using VM suspend/resume/migrate).
 Policy engine, allowing developers to write pluggable scheduling policies
in Python.
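The lease abstraction above can be sketched with a toy scheduler: advance-reservation leases are pinned to their requested start slot, while best-effort leases are backfilled into the earliest free gap. This is a minimal single-host, unit-capacity sketch in the spirit of Haizea; the class and method names are illustrative, not Haizea's actual API:

```python
# Minimal lease-based scheduling sketch (single host, unit capacity,
# integer time slots). Advance-reservation (AR) leases are pinned to
# their requested start; best-effort (BE) leases are backfilled into
# the earliest free gap, as in backfilling batch schedulers.

class LeaseScheduler:
    def __init__(self, horizon):
        self.busy = [None] * horizon  # slot index -> lease id or None

    def reserve(self, lease_id, start, duration):
        """Advance reservation: fails if any requested slot is taken."""
        slots = range(start, start + duration)
        if any(self.busy[t] is not None for t in slots):
            return False
        for t in slots:
            self.busy[t] = lease_id
        return True

    def backfill(self, lease_id, duration):
        """Best effort: place the lease in the earliest free gap.
        Returns the chosen start slot, or None if no gap fits."""
        run = 0
        for t, owner in enumerate(self.busy):
            run = run + 1 if owner is None else 0
            if run == duration:
                start = t - duration + 1
                for s in range(start, t + 1):
                    self.busy[s] = lease_id
                return start
        return None
```

For example, after reserving slots 4-6 for an AR lease, a 4-slot best-effort lease is backfilled into the gap at slot 0, and a 2-slot one lands after the reservation at slot 7. Haizea additionally accounts for VM boot/suspend overheads and supports preemption, which this sketch omits.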
Virtual machine provisioning and migration services are active research topics, and here is a list of potential candidate areas for research:
• Self-* (self-adaptive) and dynamic data centers: Data centers exist on the premises of any hosting provider or ISP that hosts different Web sites and applications. These sites are accessed in different timing patterns (morning hours, afternoon, etc.), so the workloads against them vary dynamically over time and need to be tracked. Sizing the host machines (the number of virtual machines that host these applications) represents a challenge, and there is a potential research area here: studying the performance impact and overhead of this dynamic creation of virtual machines in self-adaptive data centers, in order to manage Web sites properly.
• Studying performance in this dynamic environment will also tackle the balance that should exist between the rapid response time of individual applications, the overall performance of the data center, and the high availability of the applications and their services.
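The dynamic sizing decision described above can be illustrated with a simple threshold-based loop: add a VM when average utilization is high, remove one when it is low. The thresholds and limits below are invented values for the sketch, not recommendations:

```python
# Illustrative threshold-based sizing rule for a self-adaptive data
# center. Thresholds, bounds, and the one-VM-per-step policy are
# made-up values chosen only to demonstrate the idea.

def resize(current_vms, avg_utilization, high=0.75, low=0.25,
           min_vms=1, max_vms=16):
    """Return the new VM count for the observed average utilization."""
    if avg_utilization > high and current_vms < max_vms:
        return current_vms + 1   # scale out under heavy load
    if avg_utilization < low and current_vms > min_vms:
        return current_vms - 1   # scale in when mostly idle
    return current_vms           # within the comfort band, no change
```

The research questions in the bullet above then become: what does each `resize` step cost (VM creation overhead), and how do the thresholds trade response time against over-provisioning?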
• Performance evaluation and workload characterization of virtual workloads: In any virtualized infrastructure it is invaluable to have a notion of
• the workload in each VM,
• the performance impact due to the hypervisor layer, and
• the overhead due to consolidated workloads for such systems.
A single-workload benchmark is useful for quantifying the virtualization overhead within a single VM, but not for a whole virtualized environment with multiple isolated VMs running varying workloads. So there is a big need for a common workload model and methodology for virtualized systems.
• One of the potential areas is the development of fundamental tools and techniques that facilitate the integration and provisioning of distributed and hybrid clouds in a federated way.
• High-performance data scaling in private and public cloud environments.
• Organizations and enterprises that adopt cloud computing architectures can face a number of challenges related to:
• (a) the elastic provisioning of compute clouds on their existing data center infrastructure, and
• (b) the inability of the data layer to scale at the same rate as the compute layer.
So there is a persistent need to implement systems that can scale data at the same pace as the infrastructure scales, or to integrate current elastic infrastructure provisioning systems with existing systems designed to scale out the application and data layers.
• Performance and high availability in clustered VMs through live migration:
• Clusters are very common in research centers, enterprises, and accordingly in the cloud. Two aspects are of great importance:
• high availability, and
• high-performance service.
These can be achieved through clusters of virtual machines, in which highly available applications are realized through live migration of a virtual machine to different locations in the cluster or in the cloud.
So the need exists to (a) study the performance of these virtual machine migrations, (b) study the opportunities for improving that performance, and (c) decide to which location a machine should be migrated.
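Live migration performance is dominated by the iterative pre-copy phase: all memory pages are copied while the VM keeps running, then pages dirtied in the meantime are re-copied each round, until the remaining dirty set is small enough for a brief stop-and-copy. The rough simulation below uses an invented constant dirty-rate model purely to illustrate the convergence behavior:

```python
# Rough simulation of pre-copy live migration. Each round re-sends the
# pages the guest dirtied while the previous round was being copied;
# migration stops (brief VM pause, "stop-and-copy") once the dirty set
# falls below a threshold. The fixed dirty_fraction model is invented
# for illustration; real dirtying rates vary with the workload.

def precopy_rounds(total_pages, dirty_fraction, stop_threshold,
                   max_rounds=30):
    """Return (rounds_run, pages_left_for_final_stop_and_copy)."""
    to_copy = total_pages
    rounds = 0
    while to_copy > stop_threshold and rounds < max_rounds:
        rounds += 1
        # While this round's pages were in flight, the guest dirtied a
        # fraction of them; those must be sent again next round.
        to_copy = int(to_copy * dirty_fraction)
    return rounds, to_copy
```

For example, `precopy_rounds(100000, 0.2, 100)` converges in 5 rounds with 32 pages left for the final pause. If `dirty_fraction` approaches 1 (a write-heavy workload), the loop only terminates via `max_rounds`, which is exactly why research questions (a) and (b) above matter.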

• VM scheduling algorithms.
• Accelerating VMs live migration time.
• Cloud-wide VM migration and memory de-duplication.
• Normal VM migration is done within the same physical site (campus, data center, lab, etc.). However, migrating virtual machines between different locations is an invaluable feature to add to any virtualization management tool.
• For more details on memory status, storage relocation, and so on, check the patent-pending technology on this topic. Such a setup can enable faster and longer-distance VM migrations, cross-site load balancing, power management, and de-duplicating memory throughout multiple sites. It is a rich area for research.
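Memory de-duplication rests on content-based page sharing: pages with identical contents (across VMs, or even across sites) can be backed by a single physical copy, as in KSM-style sharing. A minimal sketch, using hashing to count shareable pages over synthetic page data:

```python
# Content-based page sharing sketch: hash each memory page and count
# how many pages could be shared because their contents are identical
# (the idea behind KSM-style memory de-duplication). Page contents
# here are synthetic examples, not real guest memory.

import hashlib
from collections import Counter

def dedup_stats(vm_pages):
    """vm_pages: list (one entry per VM) of lists of page contents
    (bytes). Returns (total_pages, unique_pages, shareable_pages)."""
    digests = [hashlib.sha256(p).digest()
               for pages in vm_pages for p in pages]
    counts = Counter(digests)
    total = len(digests)
    unique = len(counts)
    shareable = total - unique  # copies that could map to one frame
    return total, unique, shareable

# Two VMs that both hold a zero-filled page: of 4 pages total,
# only 3 are unique, so 1 page frame can be shared.
vm_a = [b"zero" * 1024, b"code" * 1024]
vm_b = [b"zero" * 1024, b"data" * 1024]
```

A production implementation would compare page contents byte-by-byte after a hash match to rule out collisions, and would scan incrementally; the sketch only shows the accounting.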
Live migration security:
Live migration security is a very important area of research, because several security vulnerabilities exist; see reference 38 for an empirical exploitation of live migration.
• Extend migration algorithms to allow for priorities.
• Cisco’s UCS (Unified Computing System) initiative and its role in dynamic just-in-time provisioning of virtual machines and increasing business agility.
