Cloud Computing (All Questions)
What is Cloud?
The term Cloud refers to a network or the Internet. In other words, we can say that the cloud is something that is present at a remote location. The cloud can provide services over public and private networks, i.e., WAN, LAN, or VPN.
Applications such as e-mail, web conferencing, and customer relationship management (CRM) execute on the cloud.
Cloud Computing refers to manipulating, configuring, and accessing hardware and software resources remotely. It offers online data storage, infrastructure, and applications.
Cloud computing offers platform independence, as the software is not required to be installed locally on the PC. Hence, cloud computing makes our business applications mobile and collaborative.
Q.2. Roots of Cloud Computing
Internet Technologies:
The first root is Internet Technologies which contains service-oriented architecture (SOA), web 2.0, and
web services.
Internet technologies are widely accessible to the public. People can access the content and run
applications that depend on the network connection.
• Cloud computing relies on the network, centralized storage, and bandwidth. The internet, however, is not just a simple network; it is a complex system, and the services built on it require centralized management.
• Service-Oriented Architecture (SOA) structures an application as self-contained modules, each specially designed around a business functionality.
• Web services deliver functionality over the web using common mechanisms such as XML and HTTP. Because these mechanisms are shared, web services work as a universal concept all over the world.
• Web 2.0 services are more convenient for users, as they do not have to learn coding concepts to work with them.
Distributed Computing:
The second root is Distributed Computing which contains grids, utility computing, and cluster.
• This means users can access files at a specific location after processing, and they can also send those files back to the server.
• This is why it is known as the distributed computing root of the cloud: it is distributed in such a manner that people can access it anywhere in the world.
• With the help of this root, all the related resources like memory space, processor speed, and
hard drive space are utilized in the best possible manner.
Hardware:
The third root is Hardware from the roots of cloud computing which contains multi-core
chips and virtualization.
When we talk about hardware for cloud computing, it is usually virtual, so people do not need to buy it.
• In cloud computing virtualization allows users to use resources from multiple virtual machines.
It makes it easier and cheaper for customers to use cloud services.
• Moreover, in the Service Level Agreement (SLA) based cloud computing model, each customer gets their own isolated virtual network, called a Virtual Private Cloud (VPC), in which their virtual machines run.
• In short, a single cloud computing platform provides all the requirements of hardware, software, and
operating system.
System Management:
The fourth root of cloud computing (System Management) contains data center automation and
autonomic computing.
System management root handles the operations to improve the productivity and efficiency of the
system.
• To achieve this, system management ensures that all employees have easy access to all the necessary information.
• In an autonomic system, admin work becomes easier as the system is autonomic or self-managing.
Additionally, data analysis and monitoring are handled by the sensors.
• Hence, at this root, human involvement is less and the computing system handles most of the
operations.
• Infrastructure as a Service (IaaS) is a cloud computing service model that provides virtualized
computing resources over the internet, including servers, storage, networking, and other infrastructure
components. With IaaS, organizations can quickly and easily scale their IT infrastructure to meet
changing business needs, without the need for costly on-premises infrastructure.
• In summary, IaaS is a cloud computing service model that provides virtualized computing resources
over the internet, including servers, storage, networking, and other infrastructure components. IaaS
provides several benefits, including scalability, cost savings, flexibility, reliability, and security.
Examples of IaaS providers include AWS, Azure, and Google Cloud Platform.
1. Scalability: IaaS allows organizations to quickly and easily scale their IT infrastructure up or
down to meet changing business needs, without the need for costly on-premises infrastructure.
2. Cost savings: IaaS eliminates the need for organizations to invest in and maintain their own IT
infrastructure, which can result in significant cost savings.
3. Flexibility: IaaS provides a flexible computing environment that can be customized to meet
specific business needs, and can support a wide range of applications and workloads.
4. Reliability: IaaS providers typically offer robust service level agreements (SLAs) that ensure high
availability and reliability.
5. Security: IaaS providers implement robust security measures to protect customer data and
infrastructure, including encryption, access controls, and threat detection.
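The availability percentages quoted in such SLAs translate directly into allowed downtime. The sketch below is generic arithmetic for illustration, not any provider's actual SLA terms:

```python
# Illustrative only: convert an SLA availability figure into the downtime
# it permits over a 30-day billing month.

def allowed_downtime_minutes(availability, days=30):
    """Minutes of downtime permitted per period at a given availability."""
    total_minutes = days * 24 * 60
    return round(total_minutes * (1 - availability), 1)

print(allowed_downtime_minutes(0.999))    # "three nines": 43.2 minutes/month
print(allowed_downtime_minutes(0.9999))   # "four nines": about 4.3 minutes/month
```

This is why each extra "nine" in an SLA is significant: it cuts the permitted downtime by a factor of ten.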
• Amazon Web Services (AWS): A cloud computing platform that provides a wide range of IaaS
services, including EC2 (virtual machines), S3 (storage), and VPC (networking).
• Microsoft Azure: A cloud computing platform that provides a range of IaaS services, including
VMs, storage, and networking.
• Google Cloud Platform: A cloud computing platform that provides a range of IaaS services,
including Compute Engine (virtual machines), Cloud Storage, and Cloud Networking.
Platform as a Service (PaaS) provides a runtime environment. It allows programmers to easily create, test,
run, and deploy web applications.
• Platform as a Service (PaaS) is a cloud computing service model that provides a platform for
developing, testing, and deploying applications over the internet. PaaS providers offer a range of
services and tools to help developers build and deploy applications quickly and easily, without the
need for underlying infrastructure management.
• PaaS providers typically offer a range of services, such as programming languages, libraries,
frameworks, databases, and other development tools that can be accessed and managed through
a web-based interface or API. Customers can choose the resources they need, and pay only for
what they use, on a pay-as-you-go or subscription basis.
• In summary, PaaS is a cloud computing service model that provides a platform for developing,
testing, and deploying applications over the internet. PaaS provides several benefits, including rapid
application development, reduced costs, scalability, flexibility, and collaboration. Examples of PaaS
providers include Heroku, Google App Engine, and Microsoft Azure App Service.
1. Rapid application development: PaaS provides a platform for developers to quickly and easily
develop and deploy applications, without the need for underlying infrastructure management.
2. Reduced costs: PaaS eliminates the need for organizations to invest in and maintain their own
development infrastructure, which can result in significant cost savings.
3. Scalability: PaaS allows applications to be easily scaled up or down to meet changing business
needs, without the need for additional infrastructure investment.
4. Flexibility: PaaS provides a flexible development environment that can be customized to meet
specific business needs, and can support a wide range of applications and workloads.
5. Collaboration: PaaS provides a collaborative development environment that enables teams to
work together on applications, regardless of their location.
• Heroku: A cloud application platform that provides a range of services and tools for developing,
deploying, and managing applications.
• Google App Engine: A platform for developing and deploying web applications that supports
multiple programming languages and frameworks.
• Microsoft Azure App Service: A cloud-based platform for building, deploying, and scaling web
applications and APIs.
SaaS is also known as "On-Demand Software". It is a software distribution model in which services are
hosted by a cloud service provider. Software as a Service (SaaS) is a cloud computing model that allows
users to access and use software applications over the internet, without the need to install and run the
software on their own computers or devices. In the SaaS model, the software is hosted and maintained by
a third-party provider, who is responsible for managing the underlying infrastructure, security, and software
updates.
• These services are available to end-users over the internet so, the end-users do not need to install
any software on their devices to access these services.
• The SaaS layer must be web-based and hence accessible from everywhere and preferably on any
device. The key is to understand that it makes no sense to ask whether a service is cloud or SaaS,
as SaaS is a layer in the cloud stack.
• On the other hand, it is important to understand that the cloud is much more than SaaS, due to the other layers that, bundled together, make up the whole cloud stack.
Public cloud:
Public cloud is open to all to store and access information via the Internet using the pay-per-usage method.
In public cloud, computing resources are managed and operated by the Cloud Service Provider (CSP).
• Public cloud can be adopted at a lower cost than private and hybrid clouds.
• Public cloud is maintained by the cloud service provider, so consumers do not need to worry about maintenance.
• Public cloud is easier to integrate with; hence it offers a more flexible approach to consumers.
• Public cloud is location independent because its services are delivered through the internet.
• Public cloud is highly scalable as per the requirement of computing resources.
• It is accessible by the general public, so there is no limit to the number of users.
Private cloud:
Private cloud is also known as an internal cloud or corporate cloud. It is used by organizations to build and manage their own data centers, either internally or through a third party. It can be deployed using open-source tools such as OpenStack and Eucalyptus.
• Private cloud provides a high level of security and privacy to the users.
• Private cloud offers better performance with improved speed and space capacity.
• It allows the IT team to quickly allocate and deliver on-demand IT resources.
• The organization has full control over the cloud because it is managed by the organization itself, so there is no need for the organization to depend on anybody else.
Hybrid cloud:
Hybrid Cloud is a combination of the public cloud and the private cloud. We can say:
• Hybrid cloud is partially secure because the services which are running on the public cloud can be
accessed by anyone, while the services which are running on a private cloud can be accessed only
by the organization's users.
• Example: Google Application Suite (Gmail, Google Apps, and Google Drive), Office 365 (MS Office on the Web and OneDrive), Amazon Web Services.
• Hybrid cloud is suitable for organizations that require more security than the public cloud.
• Hybrid cloud helps you to deliver new products and services more quickly.
• Hybrid cloud provides an excellent way to reduce the risk.
• Hybrid cloud offers flexible resources because of the public cloud and secure resources because of
the private cloud
Community Cloud:
Community cloud allows systems and services to be accessible by a group of several organizations to share
the information between the organization and a specific community. It is owned, managed, and operated
by one or more organizations in the community, a third party, or a combination of them.
• Community cloud is cost-effective because the whole cloud is being shared by several organizations
or communities.
• Community cloud is suitable for organizations that want to have a collaborative cloud with more
security features than the public cloud.
• It provides better security than the public cloud.
• It provides a collaborative and distributed environment.
• Community cloud allows us to share cloud resources, infrastructure, and other capabilities among
various organizations.
1. Resource Pooling
Resource pooling is one of the essential features of cloud computing. It means that a cloud service provider can share resources among multiple clients, providing each with a different set of services according to their needs. It is a multi-tenant strategy that can be applied to data storage, processing, and bandwidth-delivered services.
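As a minimal sketch of this multi-tenant idea, the class below (illustrative only; the names and capacities are made up) shows several clients drawing different-sized slices from one shared pool of storage:

```python
# Illustrative sketch: a shared pool of storage capacity that several
# clients allocate from, as in multi-tenant resource pooling.

class ResourcePool:
    def __init__(self, capacity_gb):
        self.capacity_gb = capacity_gb      # total capacity shared by all clients
        self.allocations = {}               # client name -> allocated GB

    def allocate(self, client, amount_gb):
        """Grant a slice of the pool if enough free capacity remains."""
        if amount_gb > self.free():
            return False                    # pool exhausted: request denied
        self.allocations[client] = self.allocations.get(client, 0) + amount_gb
        return True

    def release(self, client):
        """Return a client's slice to the shared pool."""
        return self.allocations.pop(client, 0)

    def free(self):
        return self.capacity_gb - sum(self.allocations.values())

pool = ResourcePool(capacity_gb=100)
pool.allocate("client-a", 40)               # each client gets a different share
pool.allocate("client-b", 50)
print(pool.free())                          # 10 GB left in the shared pool
```

The provider's economics come from exactly this sharing: unused capacity from one client is immediately available to another.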
2. On-Demand Self-Service
It is one of the important and essential features of cloud computing. This enables the client to continuously
monitor server uptime, capabilities and allocated network storage. This is a fundamental feature of cloud
computing, and a customer can also control the computing capabilities according to their needs.
3. Easy Maintenance
This is one of the best cloud features. Servers are easily maintained, and downtime is minimal or sometimes zero. Cloud-powered resources often undergo several updates to optimize their capabilities and potential. The updated versions are more compatible with devices and perform faster than previous versions.
4. Rapid Scalability
A key feature and advantage of cloud computing is its rapid scalability. This cloud feature enables cost-effective handling of workloads that require a large number of servers, but only for a short period. Many customers have workloads that can be run very cost-effectively thanks to the rapid scalability of cloud computing.
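The scaling decision itself can be as simple as the rule sketched below. This is a generic illustration, not any provider's actual auto-scaling policy; the capacities and bounds are assumed values:

```python
import math

# Illustrative auto-scaling rule: run just enough servers to keep the
# per-server load under a target capacity, within fixed bounds.

def servers_needed(requests_per_sec, per_server_capacity, min_servers=1, max_servers=20):
    """Scale the server count up or down with demand, clamped to [min, max]."""
    needed = math.ceil(requests_per_sec / per_server_capacity)
    return max(min_servers, min(needed, max_servers))

print(servers_needed(950, per_server_capacity=100))   # traffic spike -> 10 servers
print(servers_needed(40, per_server_capacity=100))    # quiet period -> back to 1
```

Because the extra servers exist only while the spike lasts, the customer pays for ten servers for an hour rather than owning ten servers year-round.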
5. Economical
This cloud feature helps in reducing the IT expenditure of organizations. In cloud computing, clients only need to pay the administration for the space used by them. There are no hidden or additional charges to be paid. The administration is economical, and more often than not, some space is allocated for free.
6. Measuring and Reporting Service
The measuring and reporting service is one of the many cloud features that make it the best choice for organizations. It is helpful for both cloud providers and their customers, as it enables both sides to monitor and report which services have been used and for what purposes. It helps in monitoring billing and ensuring optimum utilization of resources.
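A metering pipeline of this kind boils down to aggregating raw usage records and pricing them per unit. The sketch below is illustrative; the customer names and the per-GB-hour rate are assumptions, not real pricing:

```python
from collections import defaultdict

# Sketch of a measured-service report: sum raw usage records per customer
# and price them at a hypothetical per-unit rate.

RATE_PER_GB_HOUR = 0.05   # assumed rate, for illustration only

def billing_report(usage_records):
    """usage_records: list of (customer, gb_hours) tuples -> cost per customer."""
    totals = defaultdict(float)
    for customer, gb_hours in usage_records:
        totals[customer] += gb_hours
    return {c: round(t * RATE_PER_GB_HOUR, 2) for c, t in totals.items()}

records = [("acme", 100), ("acme", 50), ("globex", 200)]
print(billing_report(records))   # {'acme': 7.5, 'globex': 10.0}
```

The same aggregation serves both parties: the provider uses it to bill, the customer uses it to spot which workloads drive the cost.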
7. Security
Data security is one of the best features of cloud computing. Cloud services make a copy of the stored data
to prevent any kind of data loss. If one server loses data by any chance, the copied version is restored from
the other server. This feature comes in handy when multiple users are working on a particular file in real-
time, and one file suddenly gets corrupted.
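The replication behaviour described above can be sketched in a few lines. This is a toy model (two in-memory dictionaries standing in for two servers), not a real storage system:

```python
# Illustrative sketch: keep a replica of every stored object so data
# survives the loss of one server, as the redundancy feature describes.

class ReplicatedStore:
    def __init__(self):
        self.primary = {}
        self.replica = {}

    def put(self, key, value):
        self.primary[key] = value
        self.replica[key] = value        # every write is copied to a second server

    def simulate_primary_loss(self):
        self.primary.clear()             # e.g. disk failure or file corruption

    def get(self, key):
        if key in self.primary:
            return self.primary[key]
        value = self.replica.get(key)    # restore the copied version
        if value is not None:
            self.primary[key] = value    # repopulate the failed server
        return value

store = ReplicatedStore()
store.put("report.docx", b"quarterly numbers")
store.simulate_primary_loss()
print(store.get("report.docx"))          # recovered from the replica
```

Real providers replicate across independent machines and often across data centers, but the principle is the same: a read falls back to the surviving copy.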
8. Automation
Automation is an essential feature of cloud computing. The ability of cloud computing to automatically
install, configure and maintain a cloud service is known as automation in cloud computing. In simple words,
it is the process of making the most of the technology and minimizing the manual effort. However, achieving
automation in a cloud ecosystem is not that easy.
9. Resilience
Resilience in cloud computing means the ability of a service to quickly recover from any disruption. The
resilience of a cloud is measured by how fast its servers, databases and network systems restart and recover
from any loss or damage. Availability is another key feature of cloud computing. Since cloud services can
be accessed remotely, there are no geographic restrictions or limits on the use of cloud resources.
10. Large Network Access
Cloud providers deliver large network access by monitoring and guaranteeing measurements that reflect how clients access cloud resources and data: latency, access times, data throughput, and more. A big part of the cloud's characteristics is its ubiquity: the client can access cloud data, or transfer data to the cloud, from any location with a device and an internet connection. These capabilities are available everywhere in the organization and are achieved with the help of the internet.
1) Back-up and restore data
Once the data is stored in the cloud, it is easier to back up and restore that data using the cloud.
2) Improved collaboration
Cloud applications improve collaboration by allowing groups of people to quickly and easily share
information in the cloud via shared storage.
3) Excellent accessibility
Cloud allows us to quickly and easily access stored information anywhere in the world, at any time, using an internet connection. An internet cloud infrastructure increases organizational productivity and efficiency by ensuring that our data is always accessible.
4) Low maintenance cost
Cloud computing reduces both hardware and software maintenance costs for organizations.
5) Mobility
Cloud computing allows us to easily access all cloud data via mobile. Cloud computing allows users to
access corporate data from any device, anywhere and at any time, using the internet. With information
conveniently available, employees can remain productive even on the go.
6) Services in the pay-per-use model
Cloud computing offers Application Programming Interfaces (APIs) that let users access services on the cloud and pay charges according to their usage of those services.
7) Unlimited storage capacity
Cloud offers us a huge amount of storage capacity for storing our important data, such as documents, images, audio, and video, in one place.
8) Data security
Data security is one of the biggest advantages of cloud computing. Cloud offers many advanced features
related to security and ensures that data is securely stored and handled.
Cloud providers offer backup and disaster recovery features. Storing data in the cloud rather than locally
can help prevent data loss in the event of an emergency, such as hardware malfunction, malicious threats,
or even simple user error.
Performing manual organization-wide software updates can take up a lot of valuable IT staff time.
However, with cloud computing, service providers regularly refresh and update systems with the latest
technology to provide businesses with up-to-date software versions, latest servers and upgraded
processing power.
Cloud infrastructure consists of all of the hardware and software elements needed for cloud computing,
including:
• Compute (server)
• Networking
• Storage
• Virtualization resources
Cloud infrastructure types usually also include a user interface (UI) for managing these virtual resources.
Cloud infrastructure management comprises the processes and tools needed to effectively allocate and
deliver key resources when and where they are required. The UI, or dashboard, is a good example of
such a tool; it acts as a control panel for provisioning, configuring and managing cloud
infrastructure. Cloud infrastructure management is useful in delivering cloud services to both:
• Internal users, such as developers or any other roles that consume cloud resources.
• External users, such as customers and business partners.
While cloud providers often offer their native management controls, they usually only enable control over
their particular platform and services. Third-party cloud management tools typically promise a “360-
degree view” and management capabilities across all environments, which may be necessary in multi-
cloud and hybrid cloud environments.
In either scenario, cloud infrastructure management tools offer some combination of the following
features:
Provisioning and configuration: Developers, systems engineers and other IT professionals use these tools to set up and configure the hardware and software resources they need. This also includes features for enabling and managing self-service provisioning, in which end users use a dashboard or other mechanisms to stand up their own resources as needed, based on predetermined rules.
Visibility and monitoring: Cloud infrastructure management tools allow operators to “see” their environments. More importantly, they include or integrate with monitoring tools that track the performance, availability, and usage of cloud resources.
Resource allocation: Related to cost optimization, resource allocation features enable granular control
over how users consume cloud infrastructure, including self-service provisioning. This is similar
to budgeting: dividing up shared resources appropriately and in some cases creating criteria for going
over budget.
Cost optimization: Managing costs is a critical capability of cloud infrastructure management tools.
Without this component, enterprises run an increased risk of “sticker shock” when the cloud bill arrives.
Proactively monitoring costs via strategies such as turning off unused or unnecessary resources is key to
maximizing the ROI of cloud infrastructure.
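The "turn off unused resources" strategy often starts with a simple utilization scan like the one below. The resource names and the 5% threshold are illustrative assumptions, not values from any real tool:

```python
# Sketch of a cost-optimization check: flag instances whose average CPU
# utilization over the billing period is below a threshold, as candidates
# for shutdown before the next bill arrives.

IDLE_THRESHOLD = 0.05   # assume 5% average CPU marks an idle instance

def idle_candidates(instances):
    """instances: dict of name -> average CPU fraction. Returns sorted idle names."""
    return sorted(name for name, cpu in instances.items() if cpu < IDLE_THRESHOLD)

fleet = {"web-1": 0.62, "web-2": 0.48, "batch-old": 0.01, "staging-db": 0.03}
print(idle_candidates(fleet))   # ['batch-old', 'staging-db'] -> stop these to cut the bill
```

Running such a scan proactively, rather than after the invoice, is what prevents the "sticker shock" mentioned above.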
Automation: Cloud infrastructure management tools sometimes offer automation capabilities for
various operational tasks, such as configuration management, auto-provisioning and auto-scaling.
Security: Cloud infrastructure management tools are another part of a holistic cloud security strategy.
They are one mechanism for properly configuring a cloud provider’s native security controls
based on a particular setup and needs.
Q.9 Explain Assessing the role of Open Standards.
Open standards play a critical role in promoting interoperability and compatibility between different
hardware and software systems, enabling them to work together seamlessly. Open standards are
technical specifications that are publicly available and can be used without restrictions, royalties, or fees.
They are developed collaboratively by industry groups, organizations, and standards bodies and are
widely adopted by industry players.
1. Increased Choice: Open standards give customers the freedom to choose the products that work
best with their tools and work in their environment. Constraints around specific interfaces
disappear and decisions can be based upon performance.
2. Reduced Cost: Open standards lower costs by reducing the complexity and number of tools
required to support an environment. Training is also more efficient in this environment.
3. Improved Interoperability: Ultimately, users want to integrate their business systems and the infrastructures that support them. Open standards enable that integration, which drives greater efficiency.
UNIT 2
Q.1 Explain the cloud computing stack.
In summary, the cloud computing stack provides a range of services that can be used to meet the
computing needs of businesses and individuals. The IaaS layer provides basic infrastructure, the PaaS
layer provides a complete development and deployment environment, and the SaaS layer provides
ready-to-use applications. As you move up the stack, the level of abstraction increases, making it easier
for users to focus on their core business needs rather than the underlying infrastructure.
The cloud computing stack consists of three main layers, namely Infrastructure-as-a-Service (IaaS),
Platform-as-a-Service (PaaS), and Software-as-a-Service (SaaS). Let's explore each layer in detail:
1. Infrastructure-as-a-Service (IaaS):
The IaaS layer provides basic computing infrastructure, including virtualized computing resources such
as servers, storage, and networking. This layer enables users to deploy and run their own software,
operating systems, and applications on the provider's infrastructure. Users have complete control over
the infrastructure and can configure it to meet their specific needs. Examples of IaaS providers include
Amazon Web Services (AWS), Microsoft Azure, and Google Cloud Platform.
2. Platform-as-a-Service (PaaS):
The PaaS layer builds on top of the IaaS layer and provides a complete development and deployment
environment for applications. This layer provides the tools and services required to develop, test, deploy,
and manage applications without having to worry about the underlying infrastructure. Users can focus on
building their applications and rely on the provider to manage the infrastructure. Examples of PaaS
providers include Heroku, Google App Engine, and Microsoft Azure.
3. Software-as-a-Service (SaaS):
The SaaS layer is the top layer of the cloud computing stack and provides complete applications that are
delivered over the internet. These applications are ready to use and require no installation or
configuration on the user's part. The provider manages the entire stack, including the infrastructure,
platform, and software. Users simply access the applications through a web browser or mobile app.
Examples of SaaS providers include Salesforce, Dropbox, and Google Workspace.
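The division of responsibility across the three layers can be summarised as a small lookup table. This is an illustrative sketch based on the descriptions above; the layer names are simplified and the exact split varies by provider:

```python
# Who manages which layer of the stack under each service model (sketch).
# Whatever the provider does not manage is left to the customer.

MANAGED_BY_PROVIDER = {
    "IaaS": {"hardware", "virtualization"},
    "PaaS": {"hardware", "virtualization", "operating system", "runtime"},
    "SaaS": {"hardware", "virtualization", "operating system", "runtime", "application"},
}
ALL_LAYERS = {"hardware", "virtualization", "operating system", "runtime", "application"}

def managed_by_customer(model):
    """Layers the customer is responsible for under a given service model."""
    return ALL_LAYERS - MANAGED_BY_PROVIDER[model]

print(sorted(managed_by_customer("IaaS")))   # customer still runs OS, runtime, app
print(managed_by_customer("SaaS"))           # empty: provider manages the entire stack
```

Reading the table top to bottom reproduces the "level of abstraction increases as you move up the stack" point: each model hands one more layer to the provider.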
Connecting to the cloud in cloud computing typically involves three main steps:
1. Choosing a cloud service provider: The first step is to choose a cloud service provider that meets
your requirements. Popular cloud service providers include Amazon Web Services (AWS), Microsoft
Azure, Google Cloud Platform, and many others.
2. Creating an account: Once you have chosen a cloud service provider, you will need to create an
account. This typically involves providing your personal and payment information. You will also need to
choose a subscription plan that best fits your needs.
3. Accessing the cloud: Once you have created an account, you can access the cloud using a web browser
or a cloud provider-specific tool. The provider may also provide APIs and command-line interfaces that
allow you to manage your cloud resources programmatically.
Connecting to the cloud in cloud computing can be done in several ways, including through a web
browser, command-line interface, API, VPN, Direct Connect, and RDP. The choice of connection method
will depend on your specific needs and requirements.
Connecting to the cloud in cloud computing can be done in several ways. Here are some of the most
common ways to connect to the cloud:
1. Web browser: One of the easiest ways to connect to the cloud is through a web browser. Cloud service
providers typically provide a web-based user interface that allows you to manage your cloud resources.
To access the cloud, you simply need to open a web browser and log in to your cloud account.
2. Command-line interface (CLI): Many cloud service providers offer command-line interfaces (CLI) that
allow you to manage your cloud resources using command-line tools. These tools can be particularly
useful for automating cloud management tasks or for integrating cloud management with other
command-line tools.
3. API: Most cloud service providers also offer application programming interfaces (APIs) that allow you to
programmatically manage your cloud resources. APIs can be particularly useful for integrating cloud
management with other applications or for automating cloud management tasks.
4. Virtual Private Network (VPN): A virtual private network (VPN) can be used to securely connect to
the cloud over the internet. A VPN creates a secure tunnel between your device and the cloud service
provider, which can help protect your data and privacy.
5. Direct Connect: Some cloud service providers offer Direct Connect, which is a dedicated network
connection between your on-premises infrastructure and the cloud. Direct Connect can provide faster
and more reliable network performance compared to connecting over the internet.
6. Remote Desktop Protocol (RDP): Remote Desktop Protocol (RDP) is a protocol that allows you to
connect to a virtual machine in the cloud and control it as if you were sitting in front of it. RDP can be
particularly useful for accessing and managing virtual machines that run in the cloud.
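As a concrete illustration of the API method above, the snippet below builds (but does not send) an authenticated HTTP request using Python's standard library. The endpoint URL, bearer-token scheme, and token value are made-up placeholders, not any real provider's interface:

```python
import urllib.parse
import urllib.request

# Illustrative only: constructing an authenticated REST call to a
# hypothetical cloud provider API for listing virtual servers.

def build_list_servers_request(token):
    """Prepare a GET request; a real client would then open/send it."""
    query = urllib.parse.urlencode({"limit": 10})
    url = f"https://api.example-cloud.test/v1/servers?{query}"
    return urllib.request.Request(url, headers={"Authorization": f"Bearer {token}"})

req = build_list_servers_request("demo-token")
print(req.full_url)                       # full endpoint with query string
print(req.get_header("Authorization"))    # credential sent with every API call
```

In practice you would use the provider's official SDK rather than raw HTTP, but every such SDK ultimately issues authenticated requests of this shape.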
IaaS is also known as Hardware as a Service (HaaS). It is one of the layers of the cloud computing
platform.
• Infrastructure as a Service (IaaS) is a cloud computing service model that provides virtualized
computing resources over the internet, including servers, storage, networking, and other infrastructure
components. With IaaS, organizations can quickly and easily scale their IT infrastructure to meet
changing business needs, without the need for costly on-premises infrastructure.
• In summary, IaaS is a cloud computing service model that provides virtualized computing resources
over the internet, including servers, storage, networking, and other infrastructure components. IaaS
provides several benefits, including scalability, cost savings, flexibility, reliability, and security.
Examples of IaaS providers include AWS, Azure, and Google Cloud Platform.
1. Scalability: IaaS allows organizations to quickly and easily scale their IT infrastructure up or
down to meet changing business needs, without the need for costly on-premises infrastructure.
2. Cost savings: IaaS eliminates the need for organizations to invest in and maintain their own IT
infrastructure, which can result in significant cost savings.
3. Flexibility: IaaS provides a flexible computing environment that can be customized to meet
specific business needs, and can support a wide range of applications and workloads.
4. Reliability: IaaS providers typically offer robust service level agreements (SLAs) that ensure high
availability and reliability.
5. Security: IaaS providers implement robust security measures to protect customer data and
infrastructure, including encryption, access controls, and threat detection.
• Amazon Web Services (AWS): A cloud computing platform that provides a wide range of IaaS
services, including EC2 (virtual machines), S3 (storage), and VPC (networking).
• Microsoft Azure: A cloud computing platform that provides a range of IaaS services, including
VMs, storage, and networking.
• Google Cloud Platform: A cloud computing platform that provides a range of IaaS services,
including Compute Engine (virtual machines), Cloud Storage, and Cloud Networking.
Platform as a Service (PaaS) provides a runtime environment. It allows programmers to easily create, test,
run, and deploy web applications.
• Platform as a Service (PaaS) is a cloud computing service model that provides a platform for
developing, testing, and deploying applications over the internet. PaaS providers offer a range of
services and tools to help developers build and deploy applications quickly and easily, without the
need for underlying infrastructure management.
• PaaS providers typically offer a range of services, such as programming languages, libraries,
frameworks, databases, and other development tools that can be accessed and managed through
a web-based interface or API. Customers can choose the resources they need, and pay only for
what they use, on a pay-as-you-go or subscription basis.
• In summary, PaaS is a cloud computing service model that provides a platform for developing,
testing, and deploying applications over the internet. PaaS provides several benefits, including rapid
application development, reduced costs, scalability, flexibility, and collaboration. Examples of PaaS
providers include Heroku, Google App Engine, and Microsoft Azure App Service.
1. Rapid application development: PaaS provides a platform for developers to quickly and easily
develop and deploy applications, without the need for underlying infrastructure management.
2. Reduced costs: PaaS eliminates the need for organizations to invest in and maintain their own
development infrastructure, which can result in significant cost savings.
3. Scalability: PaaS allows applications to be easily scaled up or down to meet changing business
needs, without the need for additional infrastructure investment.
4. Flexibility: PaaS provides a flexible development environment that can be customized to meet
specific business needs, and can support a wide range of applications and workloads.
5. Collaboration: PaaS provides a collaborative development environment that enables teams to
work together on applications, regardless of their location.
• Heroku: A cloud application platform that provides a range of services and tools for developing,
deploying, and managing applications.
• Google App Engine: A platform for developing and deploying web applications that supports
multiple programming languages and frameworks.
• Microsoft Azure App Service: A cloud-based platform for building, deploying, and scaling web
applications and APIs.
PaaS vs. SaaS:
• PaaS is a cloud computing model that delivers the tools used for the development of applications, whereas SaaS is a service model in cloud computing that hosts software to make it available to clients.
• PaaS is popular among developers who focus on the development of apps and scripts, whereas SaaS is popular among consumers and companies for uses such as file sharing, email, and networking.
• PaaS is used by mid-level developers to build applications, whereas SaaS is used by end users, including for entertainment.
• Examples of PaaS include the platforms behind Facebook and the Google search engine; examples of SaaS include MS Office on the web, Facebook, and Google Apps.
• PaaS is highly scalable to suit different businesses according to their resources, whereas SaaS is highly scalable to suit small, mid-sized, and enterprise-level businesses.
In summary, using PaaS application frameworks can provide several benefits, including rapid application
development, automatic scaling, built-in services, multi-cloud support, and cost savings. There are many
PaaS frameworks to choose from, and the choice will depend on your specific needs and requirements.
• Heroku: A cloud-based PaaS framework that supports several programming languages, including
Ruby, Java, and Node.js.
• Google App Engine: A PaaS framework that supports several programming languages, including
Java, Python, and Go.
• Microsoft Azure App Service: A PaaS framework that supports several programming languages,
including .NET, Node.js, and Python.
• AWS Elastic Beanstalk: A PaaS framework that supports several programming languages,
including Java, .NET, and Python.
1. Rapid application development: PaaS application frameworks provide pre-built and pre-
configured environments that can accelerate the application development process. This allows
developers to focus on writing application code rather than setting up and configuring
infrastructure.
2. Automatic scaling: Many PaaS frameworks provide automatic scaling capabilities, which can
automatically adjust the computing resources allocated to an application based on its usage
patterns. This can help ensure that applications can handle traffic spikes and maintain
performance without requiring manual intervention.
3. Built-in services: PaaS frameworks often provide a variety of built-in services that can be easily
integrated into applications, such as databases, messaging queues, and authentication services.
This can save developers a lot of time and effort compared to setting up and configuring these
services from scratch.
4. Multi-cloud support: Some PaaS frameworks provide support for multiple cloud providers,
allowing developers to deploy applications to different cloud environments. This can help avoid
vendor lock-in and provide greater flexibility and choice for developers.
5. Cost savings: PaaS frameworks can help reduce costs by providing a pre-built environment for
application development and deployment. This can save developers the time and expense of
setting up and configuring infrastructure, as well as the ongoing costs of maintaining and
managing infrastructure.
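To make the "write application code, not infrastructure" idea concrete, here is a minimal sketch of the kind of handler a Python PaaS could run. The greeting and function name are illustrative and not tied to any particular provider; the point is that the platform, not the developer, supplies the HTTP server, routing, load balancing, and scaling around this callable.

```python
# Minimal WSGI application: the unit of code a Python PaaS typically runs.
# The platform supplies the HTTP server, load balancing, and scaling;
# the developer supplies only this callable.

def app(environ, start_response):
    """Respond to every request with a plain-text greeting."""
    body = b"Hello from a PaaS-hosted app!"
    start_response("200 OK", [
        ("Content-Type", "text/plain"),
        ("Content-Length", str(len(body))),
    ])
    return [body]
```

Locally this could be served with the standard library's wsgiref.simple_server; on a platform such as Heroku or Google App Engine, the provider's own server imports and runs the callable.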
a. Software as a Service
SaaS is also known as "On-Demand Software". It is a software distribution model in which services are
hosted by a cloud service provider. Software as a Service (SaaS) is a cloud computing model that allows
users to access and use software applications over the internet, without the need to install and run the
software on their own computers or devices. In the SaaS model, the software is hosted and maintained by
a third-party provider, who is responsible for managing the underlying infrastructure, security, and software
updates.
• These services are available to end-users over the internet so, the end-users do not need to install
any software on their devices to access these services.
• The SaaS layer must be web-based and hence accessible from everywhere and preferably on any
device. The key is to understand that it makes no sense to ask whether a service is cloud or SaaS,
as SaaS is a layer in the cloud stack.
• On the other hand, it is important to understand that the cloud is much more than SaaS, due to the
other layers that, bundled together, make up the whole cloud stack.
b. Identity as a Service
Identity as a Service (IDaaS) is a cloud-based authentication and access management service that allows
users to securely access applications and resources from anywhere, using any device, without the need
for complex on-premises infrastructure.
IDaaS provides a centralized way to manage user identities, access privileges, and authentication policies,
and helps organizations to enforce security policies and compliance regulations. IDaaS providers typically
offer a range of authentication methods, such as single sign-on (SSO), multi-factor authentication (MFA),
and social login, as well as identity and access management (IAM) features, such as user provisioning,
role-based access control (RBAC), and identity governance and administration (IGA).
In summary, IDaaS is a cloud-based authentication and access management service that provides a
centralized way to manage user identities and access privileges, and helps organizations to enforce
security policies and compliance regulations. IDaaS offers several benefits, including enhanced security,
simplified management, improved user experience, flexibility, and compliance.
1. Enhanced security: IDaaS providers implement robust security measures to protect user
identities and prevent unauthorized access, such as encryption, access controls, and threat
detection.
2. Simplified management: IDaaS eliminates the need for organizations to manage their own
authentication and access management infrastructure, which can be complex and time-
consuming.
3. Improved user experience: IDaaS provides a seamless user experience by allowing users to
access multiple applications and resources using a single set of credentials.
4. Flexibility: IDaaS allows organizations to adapt to changing business needs by providing scalable
and customizable authentication and access management services.
5. Compliance: IDaaS helps organizations to comply with regulatory requirements by providing
granular controls over user access, authentication policies, and data protection.
• Okta: A cloud-based identity and access management platform that provides SSO, MFA, and IAM
features.
• Microsoft Azure Active Directory: A cloud-based identity and access management service that
integrates with Microsoft cloud services and third-party applications.
• Ping Identity: A cloud-based identity and access management platform that provides SSO, MFA,
and IAM features.
• OneLogin: A cloud-based identity and access management platform that provides SSO, MFA, and
IAM features.
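The single sign-on idea behind these IDaaS products can be sketched with a toy signed token: the identity provider issues one token, and each application verifies the signature instead of asking the user to log in again. This is only an illustrative sketch using an HMAC signature with a made-up shared secret; real IDaaS systems use standards such as SAML or OpenID Connect and managed keys.

```python
import base64
import hashlib
import hmac
import json
import time

SECRET = b"shared-idp-secret"  # illustrative; real identity providers use managed keys

def issue_token(username, ttl=3600):
    """Identity provider side: sign a claim saying who the user is and until when."""
    claim = json.dumps({"sub": username, "exp": int(time.time()) + ttl})
    payload = base64.urlsafe_b64encode(claim.encode()).decode()
    sig = hmac.new(SECRET, payload.encode(), hashlib.sha256).hexdigest()
    return f"{payload}.{sig}"

def verify_token(token):
    """Application side: accept the claim only if the signature and expiry check out."""
    payload, _, sig = token.rpartition(".")
    expected = hmac.new(SECRET, payload.encode(), hashlib.sha256).hexdigest()
    if not hmac.compare_digest(sig, expected):
        return None  # forged or corrupted token
    claim = json.loads(base64.urlsafe_b64decode(payload))
    return claim if claim["exp"] > time.time() else None
```

A user who logs in once receives a token from issue_token; every application in the federation can call verify_token without contacting the identity provider again.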
c. Compliance as a Service
Compliance as a Service (CaaS) is a cloud-based service that helps organizations to manage compliance
with regulatory requirements, industry standards, and internal policies. CaaS providers offer a range of
compliance-related services, such as risk assessments, audit management, policy management, and
compliance reporting.
CaaS leverages cloud computing to provide a scalable and flexible compliance management solution that
can adapt to changing business needs. CaaS providers typically offer a range of compliance frameworks,
such as HIPAA, GDPR, PCI DSS, and ISO 27001, and help organizations to implement and maintain these
frameworks by providing expert guidance, tools, and resources.
In summary, CaaS is a cloud-based compliance management solution that helps organizations to manage
compliance with regulatory requirements, industry standards, and internal policies. CaaS provides several
benefits, including reduced costs, improved efficiency, enhanced security, simplified management, and
compliance expertise.
1. Reduced costs: CaaS eliminates the need for organizations to invest in costly compliance
infrastructure and personnel, and provides a cost-effective way to manage compliance.
2. Improved efficiency: CaaS automates many compliance-related processes, such as risk
assessments, audit management, and compliance reporting, which can save time and resources.
3. Enhanced security: CaaS providers implement robust security measures to protect sensitive data
and ensure compliance with security regulations.
4. Simplified management: CaaS provides a centralized way to manage compliance activities and
documentation, which can simplify compliance management and reduce the risk of errors.
5. Compliance expertise: CaaS providers offer compliance expertise and guidance, which can help
organizations to navigate complex regulatory requirements and industry standards.
• AWS Compliance Center: A cloud-based compliance management solution that provides tools
and resources to help organizations comply with regulatory requirements and industry standards.
• Microsoft Compliance Manager: A cloud-based compliance management solution that
provides tools and resources to help organizations comply with regulatory requirements and
industry standards.
• IBM OpenPages: A cloud-based governance, risk, and compliance management solution that
helps organizations to manage compliance with regulatory requirements and industry standards.
UNIT 3
Q.1 What are Virtualization Technologies?
Virtualization is a technology that allows multiple operating systems (OS) or applications to run on a
single physical machine, often called a host. It creates a layer of abstraction between the software and the
underlying hardware, enabling multiple virtual machines (VMs) to run independently on a single physical
machine. Each virtual machine is allocated a portion of the physical resources, including CPU, memory,
storage, and network bandwidth, as if it were running on a dedicated machine.
In summary, virtualization is a technology that allows multiple operating systems or applications to run
on a single physical machine, creating a layer of abstraction between the software and the underlying
hardware. Virtualization offers several benefits for businesses and organizations, including resource
optimization, increased flexibility, improved reliability, and reduced costs. There are several types of
virtualization technologies, including server, desktop, application, network, and storage virtualization.
1. Server virtualization: This is the most common form of virtualization, where multiple virtual
machines are created on a single physical server.
2. Desktop virtualization: This involves creating virtual machines that run desktop operating
systems and applications on a centralized server, and allowing users to access them remotely.
3. Application virtualization: This involves encapsulating applications in a virtual environment, so
they can be run on any computer without requiring installation or modification of the underlying
operating system.
4. Network virtualization: This involves abstracting network resources, such as switches, routers,
and firewalls, to create virtual networks that can be used by virtual machines.
5. Storage virtualization: This involves abstracting physical storage resources, such as hard drives
and storage arrays, to create virtual storage devices that can be managed and allocated
independently of the underlying hardware.
Load balancing and virtualization are two technologies that work together to improve the performance,
reliability, and scalability of modern data center environments.
• Load balancing is the process of distributing network traffic across multiple servers or resources to
ensure that no single resource is overwhelmed with traffic, and to optimize resource utilization.
Load balancing can be achieved through hardware or software solutions, such as load balancers or
application delivery controllers (ADCs), that sit between the client and server and direct traffic
based on factors such as server health, capacity, and response time.
• Virtualization, on the other hand, creates a layer of abstraction between the physical hardware and
the software or applications that run on it. Virtualization allows multiple virtual machines (VMs) to
run on a single physical machine, effectively maximizing the use of physical resources and
improving the flexibility and scalability of the infrastructure.
In summary, load balancing and virtualization are two technologies that can work together to improve
the performance, reliability, and scalability of modern data center environments. Load balancing can help
distribute network traffic across multiple resources, while virtualization can maximize the use of physical
resources and provide flexibility and scalability. Some common load balancing and virtualization
solutions include VMware vSphere, Citrix ADC, and Microsoft Hyper-V.
• VMware vSphere: A virtualization platform that includes load balancing and resource allocation
features.
• Citrix ADC: A hardware or software-based load balancing and application delivery platform.
• Microsoft Hyper-V: A virtualization platform that includes load balancing and failover features.
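The load-balancing behaviour described above can be sketched in a few lines: a round-robin strategy that skips servers marked unhealthy. The server names are placeholders, and a real load balancer would probe health continuously rather than being told.

```python
import itertools

class RoundRobinBalancer:
    """Distribute requests across healthy servers in round-robin order."""

    def __init__(self, servers):
        self.servers = list(servers)
        self.healthy = set(self.servers)  # a real LB would probe these periodically
        self._cycle = itertools.cycle(self.servers)

    def mark_down(self, server):
        self.healthy.discard(server)

    def mark_up(self, server):
        self.healthy.add(server)

    def next_server(self):
        """Return the next healthy server, skipping any marked down."""
        for _ in range(len(self.servers)):
            server = next(self._cycle)
            if server in self.healthy:
                return server
        raise RuntimeError("no healthy servers available")
```

For example, a balancer over ["web1", "web2", "web3"] hands out servers in turn, and after mark_down("web2") it silently routes around the failed server, which is exactly the behaviour that keeps any one resource from being overwhelmed.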
A hypervisor is the software layer that creates and runs virtual machines, allocating the host's physical
resources among them. Overall, hypervisors are an essential component of virtualization technology,
allowing multiple virtual machines to run on a single physical machine. They provide several benefits,
including resource optimization, isolation, scalability, and disaster recovery.
1. Type 1 hypervisors: These hypervisors run directly on the host machine's hardware and manage
the virtual machines. They are also called bare-metal hypervisors because they don't require an
underlying operating system. Examples of Type 1 hypervisors include VMware ESXi, Microsoft
Hyper-V, and Citrix Hypervisor.
2. Type 2 hypervisors: These hypervisors run as an application on top of an existing operating
system. They are also known as hosted hypervisors. Examples of Type 2 hypervisors include
Oracle VirtualBox, VMware Workstation, and Parallels Desktop.
Hypervisors are used in virtualization technology, which allows multiple virtual machines to run on a
single physical machine. Virtualization technology provides several benefits, including:
1. Resource optimization: Hypervisors allow multiple VMs to share the resources of a single
physical machine, such as CPU, memory, and storage, which can help optimize resource
utilization.
2. Isolation: Each VM runs in its own isolated environment, providing a layer of security and
allowing multiple applications to be run on a single physical machine without interfering with each
other.
3. Disaster recovery: Virtual machines can be easily backed up and restored in the event of a
hardware failure or other disaster.
Machine imaging is the process of creating a digital image of a computer or server's entire operating
system, configuration settings, installed software, and data. These images are commonly used in
virtualization, cloud computing, and deployment of new systems. By creating an image of a system, an
exact replica can be quickly deployed to new machines, reducing the time and effort required for system
configuration.
Some popular imaging software includes Clonezilla, Acronis True Image, and Norton Ghost. Overall,
machine imaging is an important technology for virtualization, cloud computing, and system deployment
that provides several benefits, including consistency, time savings, scalability, and disaster recovery.
1. Prepare the system: Before creating an image, the system should be prepared by updating
software, removing unnecessary files, and ensuring that the system is in a stable state.
2. Capture the image: The next step is to capture the image of the system. This is typically done
using imaging software that creates a compressed image of the entire system.
3. Store the image: The image is then stored in a location that can be accessed later when
deploying new systems. This could be on a local storage device or on a cloud storage service.
4. Deploy the image: Once the image has been created and stored, it can be deployed to new
machines. This is typically done using deployment software that can install the image onto new
hardware.
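The capture/store/deploy cycle above can be illustrated in miniature with Python's standard library, treating a directory tree as the "system" and a compressed tar archive as the "image". This is only an analogy: real imaging tools such as Clonezilla work at the disk or partition level, and the file names here are invented.

```python
import tarfile
from pathlib import Path

def capture_image(system_dir, image_path):
    """Step 2: capture a compressed image of the whole 'system' directory."""
    with tarfile.open(image_path, "w:gz") as tar:
        tar.add(system_dir, arcname=".")

def deploy_image(image_path, target_dir):
    """Step 4: deploy the stored image onto a new 'machine' (directory)."""
    Path(target_dir).mkdir(parents=True, exist_ok=True)
    with tarfile.open(image_path, "r:gz") as tar:
        tar.extractall(target_dir)
```

Once a golden image has been captured and stored (step 3 is simply keeping the archive somewhere reachable), deploy_image reproduces an identical system on as many targets as needed, which is the consistency and time-savings benefit described above.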
Q.5 Explain Porting Applications in cloud computing.
Porting applications in cloud computing involves adapting an application to run on a cloud platform, such
as Amazon Web Services (AWS), Microsoft Azure, or Google Cloud Platform (GCP). The process of
porting applications in cloud computing is similar to the traditional application porting process but may
involve additional steps specific to cloud platforms.
Porting applications to cloud computing platforms can provide several benefits, including scalability,
reliability, and cost savings. However, it can be a complex process that requires careful planning and
execution to ensure that the application works correctly in the cloud environment.
1. Analyze the application: The first step is to analyze the application to understand its structure,
dependencies, and requirements. This will help to determine the best cloud platform to use.
2. Identify target cloud platform: Once the application has been analyzed, the next step is to
identify the target cloud platform that will host the application. This may involve considering
factors such as cost, scalability, and availability.
3. Assess compatibility: After identifying the target cloud platform, the application must be
assessed to determine its compatibility with the platform. This may involve identifying any cloud-
specific services or APIs that the application may need to use.
4. Modify the application: Based on the assessment, the application may need to be modified to
make it compatible with the target cloud platform. This may involve updating code, libraries, and
other dependencies.
5. Containerize the application: To make the application more cloud-native, it may be necessary
to containerize the application using tools such as Docker. This will enable the application to be
deployed as a container and managed more easily in the cloud environment.
6. Test the application: Once the modifications have been made and the application has been
containerized, it should be thoroughly tested to ensure that it works correctly on the cloud
platform.
7. Deploy the application: Once the application has been tested and verified to work correctly on
the cloud platform, it can be deployed for use.
Q.6 What are Virtual Machine Provisioning and Manageability, and
Virtual Machine Migration Services?
Virtual machine provisioning is the process of creating and configuring virtual machines (VMs) on a
physical server or in a cloud environment. Once created, VMs can be managed using various tools and
techniques, including virtual machine migration services.
Virtual machine migration services enable the movement of a virtual machine from one physical server to
another or from one cloud environment to another. This can be necessary for a variety of reasons, such
as load balancing, maintenance, disaster recovery, or migration to a different cloud platform. The
migration process can be complex and may require downtime for the application running on the VM.
Virtual machine provisioning and migration services are critical components of cloud computing,
enabling businesses to scale their applications, improve availability and recoverability, and optimize
resource usage. However, they require careful planning and execution to ensure that they are performed
efficiently and with minimal disruption to the applications running on the VMs.
Here are some virtual machine migration services commonly used in cloud computing:
1. Live migration: Live migration is a technique used to move a running virtual machine from one
physical host to another without interrupting the application running on the VM. This is achieved
by transferring the state of the virtual machine from the source to the destination host while the
application continues to run.
2. Storage migration: Storage migration is a technique used to move the storage associated with a
virtual machine from one physical host to another. This can be necessary if the storage device is
running out of space or if there is a need to balance the storage load across different hosts.
3. Cloud migration: Cloud migration is the process of moving virtual machines and associated data
from one cloud environment to another. This may involve using migration tools provided by the
cloud vendor or third-party migration services.
4. Disaster recovery: Disaster recovery is the process of moving virtual machines to a secondary
location in the event of a disaster or outage in the primary data center. This may involve
replicating VMs and data to the secondary site and using live migration techniques to switch over
to the secondary site in the event of an outage.
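The pre-copy approach behind live migration (item 1) can be simulated: memory pages are copied to the destination in rounds while the running VM keeps dirtying some of them, and once the remaining dirty set is small enough the VM is briefly paused for a final stop-and-copy. Pages are modeled as a plain dict; this is a conceptual sketch, not how a real hypervisor moves memory.

```python
def live_migrate(source_memory, read_dirty_pages, max_rounds=10, threshold=8):
    """Pre-copy live migration: copy dirty pages in rounds, then stop-and-copy.

    source_memory    -- dict mapping page number -> contents on the source host
    read_dirty_pages -- callable returning the set of pages the running VM has
                        modified since the previous round
    Returns the destination host's copy of memory.
    """
    destination = dict(source_memory)        # round 0: copy everything once
    for _ in range(max_rounds):
        dirty = read_dirty_pages()
        if len(dirty) <= threshold:
            # Few enough dirty pages: pause the VM briefly and copy the rest.
            for page in dirty:
                destination[page] = source_memory[page]
            return destination
        for page in dirty:                   # VM still running: copy and retry
            destination[page] = source_memory[page]
    raise RuntimeError("memory dirtied faster than it could be copied")
```

The threshold models the point at which the final pause is short enough to be unnoticeable; if the VM dirties memory faster than the network can copy it, real hypervisors fall back to other strategies.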
Suppose a business wants to migrate its on-premise application to the cloud. The application is currently
running on a physical server in the business's data center.
Virtual machine provisioning and migration enable businesses to leverage the benefits of cloud
computing, such as scalability, availability, and cost savings. By following best practices and using the
right tools, businesses can migrate their applications to the cloud with minimal disruption and achieve
their desired business outcomes.
To migrate the application to the cloud, the business can follow these steps:
1. Choose a cloud provider: The business must select a cloud provider that meets its
requirements, such as cost, availability, scalability, and security.
2. Create a virtual network: The business must create a virtual network on the cloud platform that
will allow the virtual machines to communicate with each other and with the internet.
3. Provision virtual machines: The business must create virtual machines on the cloud platform
that are configured with the same operating system and applications as the on-premise server.
4. Migrate data: The business must migrate the data from the on-premise server to the virtual
machines on the cloud platform. This may involve using a cloud storage service, such as Amazon
S3 or Microsoft Azure Storage, to store the data.
5. Test the application: Once the data has been migrated, the business must test the application to
ensure that it works correctly on the cloud platform.
6. Live migration: If the business wants to achieve high availability, it can use live migration to
move the virtual machines to different physical hosts in the cloud environment. This will ensure
that the application remains available even if a physical host fails.
7. Storage migration: The business can use storage migration to move the data associated with the
virtual machines to different storage devices or volumes in the cloud environment. This can help
to balance the storage load and improve performance.
Q.8 Provisioning in the Cloud Context
In cloud computing, provisioning refers to the process of allocating and configuring the necessary
resources, such as compute, storage, and networking, to support an application or service. Provisioning
can be done manually or using automated tools, such as cloud management platforms and APIs.
Provisioning in the cloud context allows businesses to quickly and easily deploy applications and
services, without the need for upfront capital expenditure on hardware and infrastructure. With cloud
provisioning, businesses can scale their resources up or down as needed, providing flexibility and agility
to meet changing business needs.
1. Choosing the cloud provider: The first step in cloud provisioning is to choose a cloud provider
that meets the requirements of the application or service. The business needs to consider factors
such as cost, availability, performance, and security.
2. Selecting the service model: The next step is to choose the service model that best suits the
requirements of the application. The three primary service models are Infrastructure as a Service
(IaaS), Platform as a Service (PaaS), and Software as a Service (SaaS).
3. Defining the requirements: Once the cloud provider and service model are selected, the
business needs to define the requirements of the application, such as the amount of compute and
storage resources needed, the operating system, and any additional software requirements.
4. Allocating resources: After defining the requirements, the cloud provider allocates the necessary
resources to support the application. This involves creating virtual machines or containers,
provisioning storage and networking, and configuring security settings.
5. Configuring the environment: Once the resources are allocated, the business needs to
configure the environment, such as installing the required software, setting up the network, and
configuring security settings.
6. Testing the application: After the environment is configured, the business needs to test the
application to ensure that it works as expected.
7. Monitoring and scaling: Finally, the business needs to monitor the application and scale the
resources as needed to ensure that the application performs well and meets the demands of the
users.
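Step 4 (allocating resources) can be sketched as a toy scheduler that places each requested machine on the first host with enough free CPU and memory. In practice cloud provisioning is driven through provider APIs and far richer placement policies, so the host names, sizes, and first-fit strategy here are purely illustrative.

```python
class Host:
    """A physical host with a fixed amount of free CPU and memory."""
    def __init__(self, name, cpus, mem_gb):
        self.name = name
        self.free_cpus = cpus
        self.free_mem_gb = mem_gb
        self.vms = []

def provision(hosts, vm_name, cpus, mem_gb):
    """Place the VM on the first host that can fit it (first-fit strategy)."""
    for host in hosts:
        if host.free_cpus >= cpus and host.free_mem_gb >= mem_gb:
            host.free_cpus -= cpus
            host.free_mem_gb -= mem_gb
            host.vms.append(vm_name)
            return host.name
    raise RuntimeError(f"no capacity for {vm_name}")
```

Raising an error when no host fits corresponds to the point where a real cloud would either queue the request or bring additional capacity online, which is the elasticity that manual data-center provisioning lacks.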
UNIT 4
Q.1 What is administering the cloud?
Administering the cloud refers to the process of managing and maintaining the infrastructure, platforms,
applications, and services provided by cloud service providers. Cloud administration involves tasks such
as provisioning resources, monitoring performance, configuring security, managing user access, and
troubleshooting issues.
Cloud administrators are responsible for ensuring that cloud services are available, scalable, and secure.
They work with cloud service providers to configure and optimize the infrastructure, platforms, and
services based on the requirements of their organization. They also monitor the performance of cloud
services and resources to ensure that they meet service level agreements (SLAs).
The responsibilities of cloud administrators can vary depending on the organization's needs and the level
of cloud services being used. For example, administrators of Infrastructure-as-a-Service (IaaS) are
responsible for managing virtual machines, storage, and networking, while administrators of Platform-as-
a-Service (PaaS) are responsible for managing the underlying platform and runtime environments.
In summary, administering the cloud involves managing and maintaining the cloud infrastructure,
platforms, and services provided by cloud service providers to ensure that they are available, scalable,
and secure. Cloud administrators must have a deep understanding of cloud computing technologies, as
well as proficiency in using cloud management tools and platforms.
There are several cloud management products available in the market that can help organizations
manage their cloud infrastructure, platforms, and services effectively.
Emerging cloud management standards, such as DMTF's Open Virtualization Format (OVF) and OASIS
TOSCA, are designed to promote interoperability, portability, and manageability in cloud environments.
By adopting these standards, organizations can improve their ability to manage and monitor cloud
resources across different platforms, reducing vendor lock-in and improving their overall cloud
experience.
In summary, securing the cloud requires a multi-layered approach that includes identity and access
management, encryption, network security, patch management, compliance, and security monitoring and
incident response. By adopting these measures, organizations can ensure the security and privacy of their
data and applications in the cloud.
Here are some key measures to consider when securing the cloud:
1. Identity and Access Management (IAM): IAM is the process of managing user access to cloud
resources. It involves setting up user accounts, assigning permissions and roles, and enforcing
access policies. IAM can help to prevent unauthorized access and ensure that only authorized
users can access sensitive data and applications.
2. Encryption: Encryption is the process of encoding data in a way that can only be decrypted by
authorized parties. Cloud providers typically offer encryption for data in transit and at rest, but
organizations may also want to encrypt their data before sending it to the cloud. This can help to
prevent data breaches and ensure data confidentiality.
3. Network Security: Network security is the process of securing the network infrastructure used to
access cloud resources. This involves using firewalls, intrusion detection and prevention systems,
and virtual private networks (VPNs) to prevent unauthorized access to cloud resources.
4. Patch Management: Patch management involves regularly updating software and firmware to
address known security vulnerabilities. Cloud providers typically manage the patching of their
own infrastructure, but organizations are responsible for patching their own applications and
operating systems running in the cloud.
5. Compliance: Compliance involves meeting regulatory requirements for data security and
privacy. Cloud providers typically offer compliance certifications for specific regulations, such as
HIPAA and GDPR, but organizations are responsible for ensuring that their own applications and
data are compliant with these regulations.
6. Security Monitoring and Incident Response: Security monitoring involves monitoring cloud
resources for security threats and vulnerabilities. Incident response involves responding to
security incidents and taking appropriate actions to mitigate their impact. Cloud providers
typically offer security monitoring and incident response services, but organizations should also
have their own security monitoring and incident response plans in place.
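The IAM measure above (item 1) often boils down to a role-based permission check: users are assigned roles, and roles carry permissions. The roles, users, and permission names below are made up for illustration; cloud IAM services express the same idea through policies attached to identities.

```python
# Role-based access control: users get roles, roles get permissions.
ROLE_PERMISSIONS = {
    "admin":  {"read", "write", "delete", "manage-users"},
    "editor": {"read", "write"},
    "viewer": {"read"},
}

USER_ROLES = {
    "alice": {"admin"},
    "bob":   {"viewer"},
}

def is_allowed(user, permission):
    """Grant access if any of the user's roles carries the permission."""
    return any(
        permission in ROLE_PERMISSIONS.get(role, set())
        for role in USER_ROLES.get(user, set())
    )
```

Note that an unknown user or role simply yields no permissions, which implements the deny-by-default posture that prevents unauthorized access.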
Q.5 What is securing data in the cloud?
Securing data is an essential part of any organization's information security strategy, particularly when
data is stored and processed in the cloud. It helps to protect confidential information, meet compliance
requirements, prevent data breaches, maintain business continuity, and build trust with customers and
partners.
In summary, securing data in the cloud requires a multi-layered approach that includes encryption,
access controls, multi-factor authentication, data loss prevention, regular monitoring and auditing, and
strong password policies. By adopting these best practices, organizations can help to ensure the
confidentiality, integrity, and availability of their data in the cloud.
Here are some best practices for securing data in the cloud:
1. Use Strong Encryption: Encrypting data is one of the most effective ways to secure it. In the
cloud, data can be encrypted in transit and at rest. It's recommended to use strong encryption
algorithms such as Advanced Encryption Standard (AES) and encrypt data before sending it to the
cloud. Data should also be encrypted while at rest in the cloud.
2. Implement Access Controls: Access control is the process of regulating access to data based on
user roles and permissions. Access controls should be implemented to ensure that only authorized
users have access to sensitive data. Role-based access control (RBAC) is a commonly used
technique for controlling access to cloud resources.
3. Use Multi-Factor Authentication (MFA): Multi-factor authentication adds an extra layer of
security by requiring users to provide two or more forms of identification. This can include
something they know, something they have, or something they are. MFA should be used to
protect sensitive data in the cloud.
4. Implement Data Loss Prevention (DLP): Data Loss Prevention (DLP) is a set of tools and
processes used to prevent sensitive data from being lost, stolen, or compromised. DLP should be
implemented to prevent sensitive data from being accidentally or maliciously leaked in the cloud.
5. Regularly Monitor and Audit Access: Regular monitoring and auditing of access logs is
essential to ensure that access controls are working as intended. This will help to identify any
unusual or suspicious activity and allow organizations to take prompt action to prevent data
breaches.
6. Implement Strong Password Policies: Strong password policies should be implemented to
ensure that users choose strong passwords that cannot be easily guessed or cracked. Passwords
should be changed regularly and should not be reused.
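The multi-factor authentication recommended in item 3 typically uses a time-based one-time password (TOTP, RFC 6238), which authenticator apps build on the HOTP algorithm (RFC 4226). A standard-library-only sketch of both:

```python
import hashlib
import hmac
import struct
import time

def hotp(secret, counter, digits=6):
    """HOTP (RFC 4226): HMAC-SHA1 of a counter, dynamically truncated to digits."""
    mac = hmac.new(secret, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = mac[-1] & 0x0F
    code = struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

def totp(secret, step=30, digits=6):
    """TOTP (RFC 6238): HOTP applied to the current 30-second time window."""
    return hotp(secret, int(time.time()) // step, digits)
```

Because both sides derive the code from a shared secret and the current time window, the server can verify a 6-digit code without any network round trip to the user's device; this is the "something they have" factor layered on top of the password.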
Q.6 Explain in detail Establishing Identity and Presence in
cloud computing:
Establishing identity and presence in cloud computing refers to the process of authenticating and
authorizing users and devices to access cloud resources and services. This is essential to ensure that only
authorized users have access to sensitive data and applications, and to prevent unauthorized access and
data breaches.
Establishing identity and presence involves implementing identity management, multi-factor authentication, single sign-on, role-based access control, identity federation, and secure network connections. Together, these measures ensure that only authorized users have access to cloud resources and services, and help prevent unauthorized access and data breaches.
Here are some key considerations for establishing identity and presence in cloud computing:
1. Identity Management: Identity management involves managing the identities of users and
devices that access cloud resources. This includes authentication, authorization, and access
control. Identity management can be centralized using tools such as Active Directory or LDAP, or
can be managed in the cloud using cloud-specific identity management services such as AWS
Identity and Access Management (IAM) or Google Cloud Identity and Access Management (IAM).
2. Multi-Factor Authentication: Multi-factor authentication (MFA) is a security mechanism that
requires users to provide two or more forms of identification to access cloud resources. This can
include something they know, something they have, or something they are. MFA is an effective
way to prevent unauthorized access to cloud resources and should be used wherever possible.
3. Single Sign-On: Single sign-on (SSO) is a mechanism that allows users to log in once and access
multiple cloud resources without the need to re-authenticate. This can simplify the user
experience and reduce the risk of password-related security issues such as password reuse and
password sharing.
4. Role-Based Access Control: Role-based access control (RBAC) is a mechanism that assigns
permissions to users based on their roles within the organization. RBAC can be used to ensure that
users only have access to the cloud resources and services that are required for their job functions.
This can help to prevent unauthorized access to sensitive data and applications.
5. Identity Federation: Identity federation allows users to use their existing identities from external
systems, such as social media accounts, to access cloud resources. This can simplify the user
experience and reduce the need for users to create and manage additional identities for cloud
resources.
6. Secure Network Connections: Establishing secure network connections between users and
cloud resources is essential to prevent unauthorized access and data breaches. This can be
achieved using secure protocols such as SSL/TLS, IPsec, or SSH.
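The one-time codes generated by most authenticator apps for MFA (point 2 above) are typically time-based one-time passwords (TOTP, RFC 6238), built on HOTP (RFC 4226). A standard-library sketch of the mechanism:

```python
import hashlib
import hmac
import struct
import time

def hotp(secret: bytes, counter: int, digits: int = 6) -> str:
    """RFC 4226 HMAC-based one-time password."""
    msg = struct.pack(">Q", counter)                  # counter as 8-byte big-endian
    digest = hmac.new(secret, msg, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                        # dynamic truncation
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

def totp(secret: bytes, period: int = 30, digits: int = 6) -> str:
    """RFC 6238 time-based OTP: HOTP over the current 30-second window."""
    return hotp(secret, int(time.time()) // period, digits)

# RFC 4226 test vector: this secret at counter 0 yields "755224"
print(hotp(b"12345678901234567890", 0))  # 755224
```

A real deployment would use a vetted library and a per-user shared secret provisioned (for example) via QR code; this sketch only shows how the codes are derived.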
Storage Area Networks (SANs) in Cloud Computing:
A storage area network (SAN) is a dedicated high-speed network that connects servers to shared storage devices. In cloud computing, SANs are typically used in conjunction with virtualization technologies such as VMware, Hyper-V, or KVM. Virtualized servers and applications can be configured to access storage devices on the SAN using virtual storage area network (VSAN) technologies. This allows organizations to create virtualized storage environments that are highly available, scalable, and secure.
SANs are thus a key storage architecture used in cloud computing to provide high-speed, reliable storage for virtualized servers and applications. They offer benefits such as storage consolidation, high availability, scalability, performance, backup and recovery, and security.
Here are some key considerations for using SANs in cloud computing:
1. Storage Consolidation: SANs allow multiple servers and applications to access a shared pool of
storage devices. This can simplify storage management and reduce the need for individual storage
devices for each server or application.
2. High Availability: SANs are designed for high availability and reliability. They often include
features such as redundant components, hot-swappable drives, and automatic failover to ensure
that storage is always available to virtualized servers and applications.
3. Scalability: SANs can be easily scaled by adding additional storage devices to the network. This
allows organizations to easily expand their storage capacity as their needs grow.
4. Performance: SANs are designed for high-speed, low-latency access to storage devices. This is
essential for virtualized servers and applications that require fast and reliable access to storage.
5. Backup and Recovery: SANs can be used to implement backup and recovery solutions for
virtualized servers and applications. This can include features such as snapshot backups,
replication, and disaster recovery.
6. Security: SANs can be secured using a variety of security mechanisms, such as access control,
encryption, and network isolation. This is important to protect sensitive data stored on the SAN.
Q.7 Explain What is Disaster Recovery in Clouds?
Disaster recovery (DR) is the process of recovering data, applications, and IT infrastructure after a
disruptive event such as a natural disaster, cyberattack, or hardware failure. In cloud computing, disaster
recovery is essential to ensure business continuity and to minimize the impact of disruptive events.
Disaster recovery in clouds involves implementing backup and recovery solutions, replication, DRaaS, multi-cloud disaster recovery, testing and maintenance, and security measures to protect data and applications from disruptive events. The key components are:
1. Backup and Recovery: Cloud providers typically offer backup and recovery solutions that can
be used to protect data and applications from data loss. This can include features such as snapshot
backups, incremental backups, and point-in-time recovery.
2. Replication: Replication is the process of copying data and applications from one location to
another. In cloud computing, replication can be used to create redundant copies of data and
applications in multiple geographic locations. This can help to ensure that data and applications
are available even if one location is impacted by a disruptive event.
3. Disaster Recovery as a Service (DRaaS): DRaaS is a cloud-based service that provides disaster
recovery capabilities for virtualized servers and applications. DRaaS can be used to replicate data
and applications to a secondary location, and to provide failover capabilities in the event of a
disruptive event.
4. Multi-Cloud Disaster Recovery: Multi-cloud disaster recovery involves replicating data and
applications across multiple cloud providers. This can provide additional redundancy and
flexibility, and can help to ensure that data and applications are available even if one cloud
provider experiences an outage.
5. Testing and Maintenance: Disaster recovery plans should be regularly tested and updated to
ensure that they are effective and up-to-date. This can include regular testing of backup and
recovery procedures, as well as testing of failover and recovery procedures.
6. Security: Disaster recovery plans should include security measures to protect data and
applications from cyberattacks and other security threats. This can include access control,
encryption, and network isolation.
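To illustrate the idea behind snapshot and incremental backups (point 1 above), here is a simplified sketch that copies only files modified since the previous snapshot. Real cloud backup services work at the block or object level; this toy file walk just demonstrates the principle:

```python
import os
import shutil
import tempfile
import time
from pathlib import Path

def snapshot(src, dest_root, last_snapshot_time):
    """Copy files modified since the last snapshot (incremental backup sketch)."""
    dest = os.path.join(dest_root, f"snap-{int(time.time() * 1000)}")
    copied = []
    for root, _dirs, files in os.walk(src):
        for name in files:
            path = os.path.join(root, name)
            if os.path.getmtime(path) > last_snapshot_time:
                rel = os.path.relpath(path, src)
                target = os.path.join(dest, rel)
                os.makedirs(os.path.dirname(target), exist_ok=True)
                shutil.copy2(path, target)            # preserves timestamps
                copied.append(rel)
    return dest, sorted(copied)

# Demo on a throwaway directory
src, backups = tempfile.mkdtemp(), tempfile.mkdtemp()
Path(src, "a.txt").write_text("v1")
Path(src, "b.txt").write_text("v1")
_, first = snapshot(src, backups, 0)       # full backup: every file is "new"
mark = time.time()
time.sleep(0.05)
Path(src, "a.txt").write_text("v2")
_, second = snapshot(src, backups, mark)   # incremental: only a.txt changed
print(first, second)
```

Point-in-time recovery then amounts to replaying the full backup plus each incremental snapshot up to the desired moment.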
UNIT 5
Q.1 Explain Risk Assessment and Management in cloud
computing.
Risk management is the process of identifying, assessing, and controlling threats to an organisation's
system security, capital and resources. Effective risk management means attempting to control future
outcomes proactively rather than reactively. In the context of cloud computing, risk management plans
are curated to deal with the risks or threats associated with the cloud security. Every business and
organisation faces the risk of unexpected, harmful events that can cost the organisation capital or cause it
to permanently close. Risk management allows organisations to prevent and mitigate threats, service disruptions, attacks and compromises by keeping quantified risks below an acceptable threshold.
Process of Risk Management
Risk management is a cyclical process comprising a set of activities for overseeing and controlling risks. It follows a series of five steps, driving organisations to formulate a better strategy for tackling upcoming risks. These steps are referred to as the Risk Management Process and are as follows:
• Identify the risk
• Analyze the risk
• Evaluate the risk
• Treat the risk
• Monitor or Review the risk
Let us briefly understand each step of the risk management process in cloud computing.
1. Identify the risk - The inception of the risk management process starts with the identification of the
risks that may negatively influence an organisation's strategy or compromise cloud system security.
Operational, performance, security, and privacy requirements are identified. The organisation should
uncover, recognise and describe risks that might affect the working environment. Some risks in cloud
computing include cloud vendor risks, operational risks, legal risks, and attacker risks.
2. Analyze the risk - After the identification of the risk, the scope of the risk is analyzed. The likelihood
and the consequences of the risks are determined. In cloud computing, the likelihood is determined
as the function of the threats to the system, the vulnerabilities, and consequences of these
vulnerabilities being exploited. In the analysis phase, the organisation develops an understanding of the nature of the risk and its potential to affect organisational goals and objectives.
3. Evaluate the risk - The risks are further ranked based on the severity of their impact on information security and the probability of their occurrence. The organisation then decides whether the risk is acceptable or serious enough to call for treatment.
4. Treat the risk - In this step, the highest-ranked risks are treated to eliminate them, or modified to bring them to an acceptable level. Risk mitigation strategies and preventive plans are set out to minimise the
probability of negative risks and enhance opportunities. The security controls are implemented in the
cloud system and are assessed by proper assessment procedures to determine if security controls are
effective to produce the desired outcome.
5. Monitor or Review the risk - Monitor the security controls in the cloud infrastructure on a regular
basis including assessing control effectiveness, documenting changes to the system and the working
environment. Part of the mitigation plan includes following up on risks to continuously monitor and
track new and existing risks.
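Steps 2 and 3 (analyze and evaluate) are often implemented as a simple likelihood × impact matrix: score each risk, rank the scores, and treat anything above an acceptance threshold. A sketch, with made-up risks, scores, and threshold:

```python
# Risk evaluation sketch: score = likelihood x impact (each rated 1-5),
# then rank and flag anything above an acceptance threshold for treatment.
# The risks, ratings, and threshold below are illustrative.

risks = {
    "cloud vendor lock-in": (3, 4),   # (likelihood, impact)
    "data breach":          (2, 5),
    "service outage":       (4, 3),
    "misconfiguration":     (4, 2),
}

THRESHOLD = 9  # scores above this are treated; at or below, accepted

def evaluate(risks, threshold=THRESHOLD):
    """Return (name, score, action) tuples, highest score first."""
    scored = sorted(risks.items(), key=lambda kv: kv[1][0] * kv[1][1], reverse=True)
    return [(name, l * i, "treat" if l * i > threshold else "accept")
            for name, (l, i) in scored]

for name, score, action in evaluate(risks):
    print(f"{score:2d} {action:6s} {name}")
```

Monitoring (step 5) then re-runs the same evaluation periodically as likelihoods and impacts change.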
Multi-tenancy:
Multi-tenancy is all about sharing. In terms of a cloud environment, it means that multiple customers – or
tenants – are served by a single instance of an application. While each tenant is physically integrated,
they are also logically separated; they share computing resources such as configurations, user
management rules and data – which can all be customized to some extent by the user.
Multi-tenant architecture is widely used in both public and private clouds. In fact, you probably use, or are at least familiar with, these multi-tenant Software-as-a-Service (SaaS) applications: Google Apps, Microsoft 365, Netflix and Shopify.
Benefits (for the IT organization and the customer/end user):
• Enable flexibility to scale application usage up or down quickly based on needs, thereby keeping
expenses in line with use (whether a subscription or per-user cost structure).
• Ease the burden of in-house IT resources and reduce need for on-premises infrastructure.
• Receive updates and new feature upgrades automatically.
Risks:
Corrupted Data – While multi-tenant users are separated from each other at the virtual level, they are
physically integrated (sharing hardware, applications and even data). Although rare, if a cloud vendor has
an inadequately configured infrastructure, corrupted data from one tenant could spread to others.
Co-tenant and External Attacks – Lack of data isolation makes multi-tenant cloud infrastructure a
prime target for attacks. These attacks may be launched by a malicious tenant – perhaps a competitor –
against co-tenants or by an external source. Side-channel attacks usually happen because of a lack of
authorization controls for sharing physical resources and are based on information gleaned from
bandwidth monitoring or similar techniques.
Tenant Workload Interference – If one tenant creates an overload, it could negatively impact the
workload performance for other tenants.
Incorrectly Assigned Resources – Should a virtualization layer become compromised, it gives access
to any of the virtual machines running on the same physical host and may allow a malicious user to
change the configuration of the virtual machine. That could result in a loss of monitoring capabilities.
Cloud Provider Failure:
The following are some potential reasons why a cloud provider may experience a failure:
1. Technical Failure: Cloud providers rely on complex infrastructure to deliver their services.
Technical failures such as hardware failures, software bugs, or network issues can cause significant
disruptions.
2. Natural Disasters: Natural disasters such as earthquakes, hurricanes, or floods can damage
cloud provider infrastructure, leading to extended outages.
3. Cybersecurity Attacks: Cloud providers can be targeted by cybercriminals seeking to disrupt
their services, steal data, or hold it for ransom.
4. Service Provider Bankruptcy: If a cloud provider goes bankrupt or is acquired by another
company, it can impact the continuity of its services, leaving customers with little to no warning or
support.
To mitigate the risk of cloud provider failure, organizations should:
1. Conduct a thorough risk assessment before selecting a cloud provider, evaluating their track record for reliability and their business continuity plans.
2. Implement a multi-cloud strategy, where data and applications are distributed across multiple
cloud providers to reduce the impact of an outage.
3. Develop a comprehensive disaster recovery plan that includes contingencies for cloud provider
failure.
4. Maintain regular backups of critical data and applications, so they can be quickly restored in the
event of an outage.
5. Ensure that their contracts with cloud providers include clear Service Level Agreements (SLAs)
that outline their responsibilities for uptime, data protection, and disaster recovery, and include
provisions for compensation in the event of service failure.
Risks with Service Level Agreements (SLAs):
1. Unenforceable SLAs: SLAs can be difficult to enforce if the customer does not have the resources or expertise to monitor and measure the provider's performance.
2. Misaligned Expectations: SLAs can create misaligned expectations between the customer and
provider, leading to disputes over service levels and performance.
3. Incomplete or Inaccurate Metrics: SLAs may include incomplete or inaccurate metrics, leading
to misunderstandings between the customer and provider.
4. Limited Remedies: SLAs may include limited remedies for breaches, which may not be sufficient
to compensate the customer for damages or losses resulting from service failures.
To address these risks, customers should:
1. Conduct a thorough evaluation of a cloud provider's SLAs before signing a contract, ensuring that they align with their business needs and objectives.
2. Ensure that SLAs include clear and measurable metrics, and that they have the resources and
expertise to monitor and measure the provider's performance.
3. Develop a comprehensive contingency plan in the event of service failure, including contingencies
for data backup and recovery, and alternate service providers.
4. Negotiate for SLAs that include clear remedies and compensation for breaches, and that reflect
the value of the services provided.
5. Ensure that SLAs are regularly reviewed and updated to reflect changes in the business
environment or technology.
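To make SLA metrics concrete, a common calculation converts an uptime target into allowed downtime, and measured uptime into a service credit. The credit tiers below are hypothetical, not taken from any real provider's SLA:

```python
# SLA arithmetic sketch: allowed downtime for an uptime target, and a
# service credit based on measured uptime. Credit tiers are hypothetical.

def allowed_downtime_minutes(sla_percent: float, days: int = 30) -> float:
    """Minutes of downtime permitted per billing period at the given SLA."""
    return days * 24 * 60 * (1 - sla_percent / 100)

def service_credit(measured_uptime_percent: float) -> int:
    """Percent of the monthly bill credited back (hypothetical tiers)."""
    if measured_uptime_percent < 95:
        return 100
    if measured_uptime_percent < 99:
        return 25
    if measured_uptime_percent < 99.9:
        return 10
    return 0

print(round(allowed_downtime_minutes(99.9), 1))  # 43.2 minutes per 30-day month
print(service_credit(98.5))                      # 25 (% of the monthly bill)
```

Numbers like these make clear why "three nines" versus "four nines" is a meaningful contractual difference: 99.99% allows only about 4.3 minutes of downtime per month.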
Cloud computing security is a shared responsibility between cloud providers and their customers. Cloud
providers are responsible for securing the infrastructure that supports the cloud, while customers are
responsible for securing their data and applications in the cloud.
Cloud Malware:
• Cloud malware, or malware in the cloud, refers to cyberattacks that target cloud computing-based systems with malicious code and services. The growth of cloud services has made cloud-based systems an attractive target for such attacks.
• Cloud malware is malicious code that targets a cloud platform. The malicious code is similar to what
you expect on computers and mobile devices. The difference is what the malware intends to do and
how it works to disrupt the cloud.
• Cloud malware is primarily a concern for businesses rather than individual users. As a customer using cloud services, you want the platform to remain protected against malware for the safety and privacy of your data, but there is little you can do yourself.
• Most established cloud providers enforce strong security measures to defend against cloud-based malware. As an end-user, you need not worry too much, but you should keep an offline backup of your essential data to be safe in case of an incident.
Common attacks on cloud infrastructure include:
1. DDoS Attacks: DDoS (Distributed Denial of Service) attacks aim to overwhelm cloud servers or applications with a flood of traffic
from multiple sources. These attacks can cause significant downtime and disrupt business
operations. To mitigate DDoS attacks, cloud providers typically implement network segmentation,
firewalls, and traffic monitoring to detect and block malicious traffic.
2. Hyperjacking: Hyperjacking is an attack in which an attacker gains unauthorized access to the
hypervisor layer of a virtualized environment in the cloud. This allows the attacker to gain control
over multiple virtual machines and access sensitive data or execute malicious code. To mitigate
hyperjacking attacks, cloud providers should implement strong hypervisor security measures, such
as secure boot and virtualization-based security.
3. Live Migration Attack: Live migration is a feature in cloud computing that allows virtual
machines to be moved between physical servers without downtime. A live migration attack occurs
when an attacker intercepts the live migration process and gains access to sensitive data or
controls the migrated virtual machine. To mitigate live migration attacks, cloud providers should
implement secure live migration protocols and network encryption to protect the data during
migration.
4. Hypercall Attacks: Hypercall attacks exploit vulnerabilities in the hypervisor's hypercall
interface, which allows virtual machines to communicate with the host system. These attacks can
lead to the compromise of the hypervisor and allow an attacker to gain control over multiple
virtual machines. To mitigate hypercall attacks, cloud providers should implement secure
hypervisor configurations, limit access to the hypercall interface, and regularly patch
vulnerabilities.
5. Cloud Storage Attacks: Cloud storage attacks can occur when sensitive data is stored in the cloud, and
an attacker gains unauthorized access to the data. This can occur due to weak access controls,
data leaks, or vulnerabilities in cloud storage services. To mitigate cloud storage attacks, cloud
providers should implement strong access controls, encryption, and regularly monitor and patch
vulnerabilities in storage systems.
Q.9 Explain Risk with Application licensing
One of the main risks with application licensing is non-compliance with licensing terms and conditions.
This can occur when an organization deploys an application without fully understanding the licensing
requirements or without obtaining the appropriate licensing agreements. This can lead to legal issues and
potential financial penalties for the organization.
Another risk is the potential for unexpected costs. Different cloud providers may have different pricing
models based on factors such as usage, storage, and data transfer. Organizations may also be charged for
additional features or services that they may not need or use. This can make it difficult for organizations
to accurately forecast the costs of running an application in the cloud and can lead to unexpected costs.
In addition, some cloud providers may have complex licensing models that can be difficult for
organizations to understand. This can make it challenging to ensure compliance with licensing terms and
can lead to misunderstandings or disputes with the cloud provider.
Finally, organizations may also face challenges when trying to manage application licensing across
multiple cloud providers. Each cloud provider may have different licensing models and requirements,
which can make it difficult to manage and ensure compliance across all environments.
To mitigate the risks associated with application licensing, organizations should take steps to understand
the licensing requirements for any application they deploy in the cloud. This may involve reviewing
licensing agreements, contacting vendors, or seeking legal advice. Organizations should also carefully
monitor their cloud usage to ensure that they are not exceeding their licensing agreements and incurring
unexpected costs. Finally, it is important to work with reputable cloud providers that have clear policies
around application licensing and compliance.
UNIT 6
Q.1 Explain the Integration of Private and Public Clouds.
Integration of private and public clouds, also known as hybrid cloud, is a strategy that allows
organizations to combine the benefits of both private and public cloud environments. With hybrid cloud,
organizations can host critical applications and data in their own private cloud environment, while
leveraging the scalability and cost-effectiveness of public cloud resources for non-critical workloads.
Because hybrid cloud offers high reliability, an e-commerce organisation can, for example, use a hybrid model by hosting its website on a private cloud for security and its brochure site on a public cloud to take advantage of scalability.
Alternatively, an organisation can host its application on a private cloud and use the public cloud for easily scalable storage.
The integration of private and public clouds can bring several benefits, including:
1. Scalability: By integrating private and public clouds, organizations can quickly scale up or down
their computing resources according to their changing needs, without having to invest in
expensive on-premises infrastructure.
2. Cost-effectiveness: With hybrid cloud, organizations can optimize their spending by using public
cloud resources for non-critical workloads and reserving private cloud resources for mission-
critical applications.
3. Flexibility: Hybrid cloud allows organizations to choose the best deployment model for each
workload, depending on factors such as performance, security, and compliance.
However, integrating private and public clouds also introduces some challenges and risks,
including:
1. Security: Integrating private and public clouds requires ensuring that sensitive data and
applications are secure across both environments. This may require additional security measures,
such as encryption, identity and access management, and network segmentation.
2. Complexity: Hybrid cloud environments can be complex to manage and maintain, as they
require coordination across multiple cloud providers and technologies.
3. Data Integration: Integration of private and public clouds requires data integration across both
environments. This can be a challenging task, particularly when dealing with large volumes of data
or complex data architectures.
To address these challenges, organizations should carefully plan their hybrid cloud strategy, taking into
account their specific business requirements, workload characteristics, and IT capabilities. They should
also work closely with their cloud providers to ensure that their hybrid cloud environment is secure, cost-
effective, and easy to manage.
Q.2 Explain what are the best cloud practices.
Cloud computing offers a variety of benefits for organizations, including increased scalability, flexibility,
and cost-efficiency. However, to fully realize these benefits, it is important to follow best practices for
cloud adoption and management.
Here are some of the key best practices for cloud computing:
1. Plan ahead: Before adopting cloud computing, organizations should carefully assess their business
requirements, IT capabilities, and security and compliance needs. This will help them choose the right
cloud service provider and deployment model, and develop a comprehensive cloud strategy.
2. Choose the right service provider: Cloud service providers offer a range of services and features,
so it is important to choose a provider that meets your specific business requirements. Factors to
consider include pricing, reliability, security, compliance, and performance.
3. Optimize cloud usage: To maximize the benefits of cloud computing, organizations should optimize
their cloud usage by regularly monitoring and adjusting their cloud resources, such as compute and
storage instances, to meet their changing needs.
4. Secure cloud data: Cloud data should be protected with appropriate security measures, such as
encryption, access controls, and backups. Organizations should also regularly test and update their
security measures to stay ahead of emerging threats.
5. Ensure compliance: Organizations should ensure that their cloud usage is compliant with relevant
industry and government regulations, such as HIPAA, GDPR, and PCI DSS. They should also work
with their cloud service provider to ensure that the provider is compliant with relevant standards.
6. Train employees: Cloud computing requires a different set of skills and knowledge than traditional
IT, so it is important to train employees on cloud best practices and security measures.
7. Monitor and manage costs: Cloud computing can quickly become expensive if not managed
properly. Organizations should monitor their cloud usage and costs, and optimize their resources to
minimize unnecessary expenses.
8. Develop a disaster recovery plan: Organizations should develop a disaster recovery plan to ensure
that their cloud data and applications can be recovered in the event of a disruption or outage.
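Best practice 7 (monitor and manage costs) can be as simple as pricing usage against a budget and alerting on overruns. The rates and usage figures below are made up for illustration, not any provider's actual pricing:

```python
# Cloud cost monitoring sketch. Rates and usage are illustrative only.

RATES = {"compute_hours": 0.05, "storage_gb_month": 0.023, "egress_gb": 0.09}

def monthly_cost(usage: dict) -> float:
    """Sum each metered item's quantity times its unit rate."""
    return sum(RATES[item] * qty for item, qty in usage.items())

def budget_report(usage: dict, budget: float) -> dict:
    cost = monthly_cost(usage)
    return {"cost": round(cost, 2), "budget": budget, "over_budget": cost > budget}

usage = {"compute_hours": 720, "storage_gb_month": 500, "egress_gb": 100}
print(budget_report(usage, budget=50.0))
```

In practice, providers expose this data through billing APIs and budget-alert services; the principle of comparing metered usage against a threshold is the same.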
Hosting Web Applications on Amazon Cloud:
Hosting web applications on Amazon Cloud (AWS) can provide organizations with the scalability, flexibility, and reliability they need to run modern web applications. AWS offers a comprehensive suite of tools and services that can be used to build, deploy, and manage web applications of any size and complexity.
Here are some of the key components of hosting web applications on Amazon Cloud:
1. Compute: AWS offers a range of compute services, including Elastic Compute Cloud (EC2),
which provides scalable computing capacity in the cloud. EC2 allows users to quickly and easily
launch and manage virtual machines, or instances, that can be used to host web applications.
2. Storage: AWS provides several storage solutions, including Simple Storage Service (S3), which is
a highly scalable and durable object storage service. S3 can be used to store and retrieve any
amount of data, and it can be used to host static web content, such as images, videos, and HTML
files.
3. Content Delivery: AWS provides a global content delivery network (CDN) called Amazon
CloudFront, which can be used to deliver static and dynamic web content, including video and
audio streaming. CloudFront caches content at edge locations around the world, which can help
improve website performance and reduce latency.
4. Database: AWS provides a range of database solutions, including Relational Database Service
(RDS) and DynamoDB, which can be used to store and manage web application data. RDS
provides a managed database service for popular database engines such as MySQL, PostgreSQL,
and Oracle, while DynamoDB is a fully managed NoSQL database.
5. Networking: AWS provides a range of networking services, including Virtual Private Cloud
(VPC), which allows users to provision a private, isolated section of the AWS Cloud. VPC can be
used to launch resources into a virtual network that closely resembles a traditional network, with
the added benefits of AWS scalability and flexibility.
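As a concrete illustration of hosting static content on S3 (point 2 above), a bucket can be opened for anonymous read access with a bucket policy such as the following; `example-bucket` is a placeholder name:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "PublicReadForStaticSite",
      "Effect": "Allow",
      "Principal": "*",
      "Action": "s3:GetObject",
      "Resource": "arn:aws:s3:::example-bucket/*"
    }
  ]
}
```

In production, serving the bucket through CloudFront with bucket access restricted to the CDN is generally preferred over a fully public bucket.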
Hosting Massively Multiplayer Games on the Cloud:
Hosting massively multiplayer games on the cloud requires a robust and scalable infrastructure that can support millions of players and provide a seamless, responsive gaming experience. Cloud-based gaming infrastructure gives game developers and publishers the tools and resources they need to create and manage large-scale online gaming environments that scale to meet the demands of modern gamers.
Here is a general overview of how hosting massively multiplayer games on the cloud works:
1. Compute and Storage: Cloud-based game hosting relies on large-scale compute and storage
resources to support the game's infrastructure, including game servers, matchmaking servers, and
databases. These resources are provisioned dynamically, based on the needs of the game and the
number of concurrent players.
2. Load Balancing: Load balancing is used to distribute the workload across multiple servers to ensure
that the game's infrastructure can handle the traffic generated by thousands or millions of players.
Load balancing helps to prevent server overloads, reduces latency, and ensures that the game
remains stable and responsive.
3. Auto-Scaling: Auto-scaling is an essential feature of cloud-based gaming infrastructure that enables
game servers to dynamically adjust capacity in response to changes in player demand. As player
traffic fluctuates, the game infrastructure can automatically scale up or down to ensure that there are
always enough resources to support the current player load.
4. Database Management: The database is a critical component of online games, as it stores game
state, player progress, and other critical data. Cloud-based game hosting solutions typically provide
managed databases that are optimized for gaming workloads and can scale to handle large volumes
of data.
5. Network Connectivity: The network is another essential component of online gaming, as it
connects players to the game servers and facilitates real-time gameplay. Cloud-based game hosting
solutions typically provide high-speed, low-latency networking infrastructure that is optimized for
gaming workloads.
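The auto-scaling decision in point 3 can be sketched as a simple function from player count to server count; the per-server capacity and bounds here are illustrative:

```python
import math

# Auto-scaling sketch: choose a server count from current player load,
# clamped between a minimum (for availability) and a maximum (for cost).
# Capacity and bounds are illustrative.

def desired_servers(players: int, players_per_server: int = 100,
                    min_servers: int = 2, max_servers: int = 50) -> int:
    needed = math.ceil(players / players_per_server)
    return max(min_servers, min(needed, max_servers))

print(desired_servers(0))       # 2  (never below the floor)
print(desired_servers(950))     # 10
print(desired_servers(999999))  # 50 (capped)
```

Real auto-scalers react to metrics such as CPU load or queue depth rather than raw player counts, and add cooldown periods to avoid thrashing, but the core capacity calculation is this simple.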
Content Delivery Networks (CDNs):
A content delivery network (CDN) is a geographically distributed group of servers that caches content
close to end users. A CDN allows for the quick transfer of assets needed for loading Internet content,
including HTML pages, JavaScript files, stylesheets, images, and videos. The popularity of CDN services
continues to grow, and today the majority of web traffic is served through CDNs, including traffic from
major sites like Facebook, Netflix, and Amazon.
A properly configured CDN may also help protect websites against some common malicious attacks,
such as Distributed Denial of Service (DDOS) attacks.
• CDN providers place servers at Internet exchange points (IXPs), the primary locations where different Internet providers connect in order to provide each other access to traffic originating on their networks. By connecting at these high-speed, highly interconnected locations, a CDN provider is able to reduce costs and transit times in high-speed data delivery.
Hosting Twitter and Facebook on the Cloud:
Cloud hosting allows these platforms to quickly scale up or down their resources based on the changing
demands of their users. This ensures that the platforms are always available and can handle the heavy
traffic loads, even during peak usage periods.
Overall, hosting Twitter and Facebook on the cloud requires careful planning, configuration, and
monitoring to ensure that the platforms are scalable, secure, and reliable. With the right cloud hosting
provider and services, these social media platforms can handle massive amounts of traffic and data while
providing a seamless user experience.
Hosting Twitter and Facebook on the cloud is typically done through a process known as cloud hosting or cloud deployment, which involves the following steps:
1. Choose a cloud hosting provider: Twitter and Facebook would need to select a cloud hosting
provider, such as Amazon Web Services, Microsoft Azure, or Google Cloud Platform.
2. Select cloud services: After choosing a cloud hosting provider, Twitter and Facebook would
select the cloud services they need, such as virtual machines, storage, databases, and security
services.
3. Configure and deploy servers: Next, Twitter and Facebook would configure their servers to
meet their specific needs and deploy them on the cloud infrastructure.
4. Load balancing: Twitter and Facebook would then implement load balancing, which is the
process of distributing traffic across multiple servers to ensure that each server is operating at
optimal capacity.
5. Set up security measures: To protect user data and prevent cyber attacks, Twitter and
Facebook would need to set up security measures such as firewalls, encryption, and multi-factor
authentication.
6. Monitor and optimize performance: After deployment, Twitter and Facebook would need to
continually monitor their cloud hosting performance to ensure optimal performance and make any
necessary optimizations.
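The load balancing in step 4 is, in its simplest form, round-robin rotation of requests across healthy servers. A minimal sketch with placeholder server names:

```python
import itertools

# Round-robin load balancing sketch: rotate incoming requests across a
# fixed pool of servers. Server names are placeholders.

class RoundRobinBalancer:
    def __init__(self, servers: list[str]):
        self._cycle = itertools.cycle(servers)

    def next_server(self) -> str:
        """Return the server that should handle the next request."""
        return next(self._cycle)

lb = RoundRobinBalancer(["web-1", "web-2", "web-3"])
print([lb.next_server() for _ in range(5)])
# ['web-1', 'web-2', 'web-3', 'web-1', 'web-2']
```

Production load balancers add health checks, weighted distribution, and session affinity on top of this basic rotation.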