
OVERVIEW OF CLOUD COMPUTING

By - LIJA MISHRA
(Asst. Professor & Computer Educator)

Before moving forward to cloud computing, there are some questions to be answered. They are:
 What is a cloud?
 How does a cloud form in the sky?
From the perspective of a layman, a cloud is a mass of water drops or ice crystals suspended in the atmosphere. It forms when invisible water vapour in the air condenses into visible water droplets or ice crystals. For this to happen, the parcel of air must be saturated, i.e. unable to hold all the water it contains in vapour form, so the vapour starts to condense into a liquid or solid form.
But here we are not going to discuss the cloud of meteorology; our subject is the digital cloud we come across every day.

[Figure 1]

Definition of Cloud in Computing Terms


 In computing terms, the cloud is a vast network of remote servers that store and manage data, run applications, and deliver services over the internet.
 In other words, the cloud is an extensive network of remote servers around the world. These servers store and manage data, run applications, and deliver content and services such as streaming video, web mail, and office productivity software over the internet.
Why Cloud Computing?
 Every IT company needs a server room; it is a basic requirement of IT organizations.
 The server room must contain a database server, a mail server, networking equipment, firewalls, routers, modems, switches, configurable systems, high internet bandwidth, and maintenance engineers. Establishing such an IT infrastructure involves a huge financial cost.
 So, to overcome these issues and minimize infrastructure cost, cloud computing came into existence. It enables smaller organizations to access computing infrastructure without making any significant initial investment.
Cloud Computing Paradigm
 The cloud computing paradigm is a way of providing IT resources and services like
servers, storage, databases, software, and networking over the internet. Instead of buying,
owning, and maintaining physical hardware or data centres, organizations can access
these resources from cloud service providers whenever they need them.
 It works like a utility service—just as you pay for electricity or water based on usage,
you pay for cloud resources on-demand, only for what you use.
 This approach makes it easy to scale up or down based on business needs without
worrying about infrastructure maintenance or upfront costs.
 In other words, you can say that the cloud computing paradigm is a computing
model that provides a virtualized pool of resources that can be accessed remotely and
used on-demand. This model allows users to run workloads on virtual machines in
data centres, which offers flexibility and scalability for computing tasks.
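
To make the pay-per-use idea concrete, here is a minimal Python sketch of how an on-demand bill could be computed. The rates and usage figures are illustrative assumptions, not real provider prices.

# A minimal sketch of the pay-per-use idea behind the cloud paradigm.
# All rates and usage figures are illustrative assumptions, not real prices.

RATES = {
    "vm_hours": 0.046,           # assumed price per VM-hour
    "storage_gb_month": 0.023,   # assumed price per GB-month of storage
    "egress_gb": 0.09,           # assumed price per GB of outbound traffic
}

def monthly_bill(usage):
    # Charge only for what was actually consumed, like an electricity meter.
    return sum(RATES[item] * amount for item, amount in usage.items())

print(monthly_bill({"vm_hours": 300, "storage_gb_month": 50, "egress_gb": 10}))
# ~15.85: no upfront hardware cost, only metered consumption
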

Let’s Understand the Working of the Cloud in Our Daily Life

Now we are going to discuss the digital cloud. On a daily basis, we experience a kind of data rain. All the photos, documents, location traces, and memories are created by us. When we use GPS, we generate data; that data is then uploaded to the servers of Google, Amazon, or Microsoft.
Just like rainwater gathers in lakes or reservoirs, the data we generate is stored in massive data centres. These data centres are geographically distributed to ensure reliability and accessibility.
When we access old photos, look at past locations on maps, or retrieve files, we are essentially pulling data back from these "cloud reservoirs." This retrieval happens almost instantly, creating the illusion of data "raining back" whenever we need it.
Remember, we get back memories of our special moments and map traces of the places we have visited. So where does this data come from? Yes, you are right: we are the source of this data rain. We are the ones who saved the data on those servers, and we get it back, just like rain from the clouds.
In other words, every interaction we have with digital devices feeds into this cycle. The key is that we have willingly entrusted this data to the cloud for convenience, memory preservation, or productivity.

The Cycle of Data:
1. Data Generation: the user creates and uploads data.
2. Storage: the cloud holds the data in servers.
3. Retrieval: the cloud "rains" the data back to us on demand.

[Figure 2]

In short, we are both the source and the beneficiary of this vast stream of data in the digital era, and the cloud is just a giant vessel that makes this cycle possible.
Application Of Cloud Computing
Cloud computing is used across various industries and domains to provide scalable, flexible,
and cost-effective solutions for computing, storage, and application hosting.

[Figure 3: Applications of cloud computing: remote monitoring, IoT, genomic research, government & public sector, telemedicine, big data & analytics, streaming services, media & entertainment, e-learning, online banking, business applications, education, software development, and data storage & backup]

Advantages Of Cloud Computing:


1. Cost Savings: Pay-as-you-go pricing, so no upfront hardware costs.
2. Flexibility: Supports a variety of workloads and business needs.
3. Scalability: Easily scale resources up or down. Resources are shared across multiple
geographical locations to solve large-scale problems
4. Accessibility: Access services from anywhere with an internet connection.
5. Maintenance-Free: Providers handle updates, backups, and hardware maintenance.
6. Disaster Recovery: Built-in redundancy and recovery options.
7. Collaboration: Tasks are distributed among multiple computers, which enhances
team collaboration through shared resources.
8. Innovation: Access to advanced technologies like AI, ML, and IoT.
9. Environmentally Friendly: Optimized resource use reduces carbon footprint.
10. Security: Robust security protocols and compliance from providers.
11. Other Benefits: Lower software cost, instant software updates, increased computing power, and virtually unlimited storage capacity.
Evolution of Computing Paradigms In Connection With Cloud Computing
 The evolution of the computing paradigm reflects humanity's progress in
solving complex problems through technology. Each stage has introduced
innovations in how we process, store, and interact with data and systems.
 Computing paradigms can vary widely based on factors such as the underlying
hardware, programming models, and problem-solving strategies.
 It encompasses the principles, techniques, methodologies, and architectures that
guide the design, development, and deployment of computational systems.
 The choice of paradigm depends on factors such as the nature of the problem,
performance requirements, scalability, and ease of development.
 This evolution gave birth to the following computing paradigms:
1. Cloud Computing
2. Grid Computing
3. Utility Computing
4. Autonomic Computing
5. Distributed Computing
6. Parallel Computing
7. Cluster Computing
8. Mobile computing
9. Edge Computing
10. Artificial Intelligence and Machine Learning

Grid Computing
 In simple terms, grid computing is a distributed computing model that connects multiple computer systems or resources across various locations, allowing them to work together to achieve a common goal.
 Grid Computing is a distributed computing model in which a network of computers
works together to perform large-scale tasks, such as processing massive datasets,
solving complex problems, or running simulations. It leverages the unused processing
power, storage, and other resources of multiple systems, often geographically
distributed, to achieve a common goal. This approach is particularly useful for
tackling computationally intensive tasks that exceed the capacity of a single
computer.
 Grid computing can be viewed as a subset of distributed computing, where a virtual
supercomputer integrates the resources of several independent computers that are
distributed across geographies. Computers participating in a grid contribute resources
such as processing power, network bandwidth, and storage capacity to perform
operations requiring high computational power. The overall grid architecture looks
like a single computing entity.
 In grid computing, each computing task is broken into small fragments and distributed across computing nodes for efficient execution. Each fragment is processed in parallel, so a complex task is accomplished in less time, as the sketch below illustrates.
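
The following is a minimal Python sketch of this fragment-and-parallelize idea. A real grid distributes fragments to machines across a network through middleware such as BOINC or the Globus Toolkit; here a local process pool merely stands in for those nodes.

# A minimal sketch of the grid idea: split a large task into fragments and
# process the fragments in parallel. A process pool stands in for grid nodes.
from concurrent.futures import ProcessPoolExecutor

def process_fragment(fragment):
    # Stand-in for a compute-heavy operation on one fragment of the task.
    return sum(x * x for x in fragment)

def run_on_grid(data, n_fragments=4):
    size = len(data) // n_fragments
    fragments = [data[i * size:(i + 1) * size] for i in range(n_fragments - 1)]
    fragments.append(data[(n_fragments - 1) * size:])  # last fragment takes the remainder
    with ProcessPoolExecutor() as pool:                # each worker plays a grid node
        partial_results = pool.map(process_fragment, fragments)
    return sum(partial_results)                        # combine the partial results

if __name__ == "__main__":
    print(run_on_grid(list(range(100_000))))
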

Features Of Grid Computing

1. Sharing: Computers and resources can be shared among multiple organizations or users, allowing for more efficient utilization of available power.
2. Scalability: The grid can easily incorporate additional nodes (computers) into the
network, allowing for significant scaling up of resources as needed.
3. Support for heterogeneous devices: Grid computing systems can include a mix of different types of devices and architectures, from personal computers to high-performance servers, across various operating systems.
4. Geographical Distribution: Grid resources can be spread out over large distances,
making it possible to harness computing power from different locations.
5. Fault Tolerance: Since resources may fail or become unavailable, grid computing
often includes mechanisms for handling failures, ensuring that the overall system can
continue to operate.

6. Middleware: Grid computing commonly relies on middleware that manages the
distributed resources, providing necessary services such as scheduling, data
management, and security.
Application Of Grid Computing
1. Scientific Research: Collaborations across institutions for large-scale simulations and
data analysis, such as in genomics or climate modeling.
2. Financial Services: Risk management and complex computations that require
intensive processing.
3. High-Performance Computing: Tasks that require significant computational power,
such as statistical simulations or rendering visual effects in movies.
4. Data Analysis and Big Data: Processing large datasets from various sources quickly
and efficiently.
Advantages Of Grid Computing
1. Cost-effective resource utilization through sharing.
2. Enhanced performance due to parallel processing capabilities.
3. Ability to tackle complex problems that are beyond the capacity of individual systems.
Challenges To Be Taken Into Consideration
Network Security: Protecting data and resources in a distributed environment can be
challenging.
Interoperability: Ensuring different systems and applications can work together seamlessly.
Management Complexity: Coordinating resources and tasks across a distributed
environment can be complex.

Popular Grid Computing Frameworks

 Globus Toolkit
 BOINC (Berkeley Open Infrastructure for Network Computing)
 Apache Hadoop (often used for grid-like big data processing)

Utility Computing
 Utility computing is a subset of cloud computing that emphasizes the pricing model (pay-as-you-go), also known as pay-per-use or metered service, where customers pay for the services they use rather than a flat rate.
 In other words, utility computing is a service provisioning model where computing resources such as storage, processing power, and software are provided to customers on a pay-as-you-go basis.
 Scalability is the most important feature of utility computing. It means the ability to increase or decrease the size or power of a resource or IT solution to meet demand, allowing businesses to adjust their computing resources without disrupting infrastructure.
Features of Utility Computing
1. Availability: Resources are available whenever required, without any upfront payment.
2. Pay-As-You-Go Pricing: Users are charged based on their actual usage of resources (e.g., per hour, per GB).
3. Manageability: Resources can be scaled up or down dynamically to match workload requirements.
4. Centralized Management: Resources are managed by a service provider, removing the need for users to maintain infrastructure.
5. Flexibility: Users can access a wide range of computing services, from virtual machines to application hosting.
Working Of Utility Computing
Utility computing works in these four phases:
1. Resource Pooling
2. Virtualization
3. Metering
4. Dynamic Allocation

1. Resource Pooling: Service providers pool together resources (servers, storage, networks) and offer them to users over the internet or private networks.

2. Virtualization: Resources are virtualized to ensure multiple users can share the same physical infrastructure without interference.

3. Metering: Usage is tracked and billed, ensuring transparency and cost efficiency.

4. Dynamic Allocation: Resources are automatically allocated or deallocated based on demand.

[Figure 4]
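
As a rough illustration of these phases, here is a small Python simulation of a provider that pools capacity, allocates it on demand, meters usage, and bills per unit consumed. Virtualization (phase 2) is abstracted away, and all numbers are assumptions.

# A toy utility provider: pooling, dynamic allocation, metering, and billing.
class UtilityProvider:
    def __init__(self, pooled_units, rate_per_unit_hour):
        self.free = pooled_units           # phase 1: resource pooling
        self.rate = rate_per_unit_hour
        self.meter = {}                    # phase 3: per-customer metering

    def allocate(self, customer, units):   # phase 4: dynamic allocation
        if units > self.free:
            raise RuntimeError("pool exhausted")
        self.free -= units
        return units

    def release(self, customer, units, hours_used):
        self.free += units                 # capacity returns to the shared pool
        self.meter[customer] = self.meter.get(customer, 0) + units * hours_used

    def bill(self, customer):
        return self.meter.get(customer, 0) * self.rate

provider = UtilityProvider(pooled_units=100, rate_per_unit_hour=0.5)
got = provider.allocate("acme", 10)
provider.release("acme", got, hours_used=6)
print(provider.bill("acme"))   # 10 units * 6 h * 0.5 = 30.0
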

Advantages of Utility Computing
1. Cost Efficiency: Eliminates the need for upfront capital investment in hardware
and software.
2. Flexibility and Agility: Enables businesses to respond quickly to changing
demands.
3. Reduced Maintenance Overheads: Service providers handle infrastructure
maintenance and updates.
4. Accessibility: Resources can be accessed from anywhere via the internet.
5. Focus on Core Activities: Businesses can concentrate on innovation and
operations instead of managing IT infrastructure.

Drawbacks of Utility Computing


1. Dependency: Users rely on the service provider for uptime, maintenance, and security.
2. Security Issues: Storing data and running applications on third-party infrastructure introduces security and privacy risks.
3. Latency Issues: Performance may be affected by network delays, especially for resource-intensive tasks.

Examples of Utility Computing


1. Cloud Computing Services: Amazon Web Services (AWS), Microsoft Azure, and
Google Cloud offer utility-based models.
2. Storage Services: Services like Dropbox and Google Drive charge users based on
the storage they consume.
3. Virtual Machines: Renting virtual servers from providers like AWS EC2.
Autonomic Computing
 Autonomic computing is widely used in cloud computing environments because it brings self-monitoring, self-repairing, and self-optimizing capabilities that improve the overall performance of the cloud system.
 Autonomic Computing is an advanced approach in computing systems where
machines and applications can manage themselves without direct human intervention.
 This concept is most important for developing self-sustaining, intelligent systems as it
aims to simplify and optimize IT operations, improve system reliability, and reduce
downtime while freeing up human resources for higher-level tasks.
 Autonomic computing is paving the way for more intelligent systems in virtually every industry. By leveraging these technologies, companies can accelerate innovation and maintain competitiveness in the rapidly evolving digital landscape.

Key Features of Autonomic Computing
1. Self-Optimization: Continuously monitors and adjusts resources to perform at maximum efficiency.
2. Self-Configuration: Systems automatically configure themselves based on changing conditions and predefined goals.
3. Self-Healing: Identifies, diagnoses, and resolves issues without human intervention.
4. Self-Protection: Recognizes threats and vulnerabilities and implements security measures to mitigate risks.
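
These self-* properties are typically implemented as a control loop that monitors the system, analyses the readings, plans an action, and executes it. Below is a minimal Python sketch of such a loop; the CPU metric, thresholds, and actions are illustrative assumptions.

# A minimal autonomic control loop: monitor -> analyse/plan -> execute.
import random
import time

def monitor():
    return {"cpu": random.uniform(0, 100)}   # stand-in for a real telemetry probe

def analyse_and_plan(metrics):
    if metrics["cpu"] > 80:
        return "scale_out"        # self-optimization: add capacity under load
    if metrics["cpu"] < 20:
        return "scale_in"         # reclaim idle capacity
    return None                   # system healthy: no action needed

def execute(action):
    print("executing:", action)   # stand-in for an actuator, e.g. an API call

for _ in range(5):                # an autonomic manager runs this loop continuously
    action = analyse_and_plan(monitor())
    if action:
        execute(action)
    time.sleep(0.1)
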
Application Of Autonomic Computing

 In Data Centres: Automated workload balancing, resource allocation, and fault recovery.
 In Healthcare: Automated diagnostics and operational management in hospitals.
 In Finance: Fraud detection and automated trading.
 In Manufacturing: Self-regulating assembly lines and predictive maintenance.

[Figure 5]

Components
1. AI and Machine Learning: Essential for pattern recognition, anomaly detection, and
decision-making in real time.
2. Edge Computing: Ensures faster decision-making by processing data closer to its
source.
3. IoT (Internet of Things): Provides data to autonomous systems from connected devices.
4. Cloud Computing: Scales resources and enables seamless system management.
Advantages
1. Efficiency: Reduces manual intervention and improves operational speed.
2. Cost Savings: Minimizes the need for human oversight, saving labour costs.
3. Reliability: Proactive system monitoring ensures uptime and system health.
4. Scalability: Adaptable to dynamic workloads and growing data demands.

Challenges
1. Complexity: Requires advanced algorithms and significant initial investments.
2. Security Risks: Autonomous decisions might inadvertently expose vulnerabilities.
3. Ethical Concerns: Delegating decision-making raises questions about accountability.

Dynamic Data Centres


First, we should know: what is a data centre?
A data centre in cloud computing is a physical location that stores and manages a
company's digital data, computing machines, and related hardware. Data centres are
made up of a network of computing and storage resources that allow users to share
applications and data.
Data centres are crucial for storing data that connects across multiple data centres, public and
private clouds, and at the edge. They can vary in size from a small server room to groups
of buildings spread across different locations.
Working Of Data Centres
A data centre works in a regular pattern. It includes virtual servers that connect through
available networking and communication components to store, transfer, and access
information digitally. Each server includes a separate processor, storage space, and memory
like a personal computer. The data centre uses different software for server clustering and
distributing the workload.
The Citadel, often cited as the largest data centre in the world, is located at the Tahoe Reno Industrial Center in Nevada, USA. It covers an area of 7.2 million sq. ft. The facility runs 24/7 and offers services across different domains, from technology to healthcare.
Dynamic Data Centres
Dynamic data centres are next-generation data centres that are specially designed to respond to changing demands. The underlying software and hardware layers in a dynamic data centre respond to changing demands in the most efficient and fundamental ways.
 In other words, we can say that a dynamic data centre is a modern data centre that uses new technologies: high-performance computing, big data analytics, virtualization, and the cloud.
 It is designed to respond dynamically to changing levels of demand in more fundamental and efficient ways. It is also known as Infrastructure 2.0 and the Next-Generation Data Centre.
 The basic premise of Dynamic Data Centre is that leveraging pooled IT resources
can provide flexible IT capacity, enabling the seamless, real-time allocation of IT
resources in line with demand from business processes.
 This is achieved by using server virtualization technology to pool computing
resources wherever possible, and allocating these resources on-demand using
automated tools.
 This allows for load balancing and is a more efficient approach than keeping massive computing resources in reserve for tasks that occur only occasionally.

Features of Dynamic Data Centres


1. Elastic Scalability: Resources can be scaled up or down based on demand, allowing organizations to respond quickly to workload fluctuations (see the scaling sketch after this list).
2. Software-Defined Infrastructure (SDI): Utilizes software-defined networking (SDN), storage (SDS), and compute. Enables automated resource management and provisioning.
3. Automation and Orchestration: Processes like resource allocation, monitoring, and updates are automated. Tools such as Kubernetes, Terraform, and Ansible facilitate orchestration.
4. High Availability and Resilience: Built with redundancy and failover capabilities. Ensures minimal downtime and consistent performance.
5. Multi-Cloud and Hybrid Cloud Integration: Can operate across multiple cloud providers or combine on-premises and cloud environments. Enhances flexibility and avoids vendor lock-in.
6. Energy Efficiency: Dynamic allocation of resources helps minimize energy consumption. Supports sustainability goals through optimized usage.
7. Real-Time Monitoring and Analytics: Continuous monitoring of the performance and health of the infrastructure. AI and ML algorithms are often employed for predictive analytics.
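
As a sketch of the elastic-scaling rule behind feature 1, the proportional formula below mirrors the one used by autoscalers such as the Kubernetes Horizontal Pod Autoscaler: desired = ceil(current * load / target). The target utilization and load samples are illustrative assumptions.

# Proportional autoscaling rule, as used by e.g. the Kubernetes HPA.
import math

def desired_servers(current_servers, avg_utilization, target_utilization=60):
    # Scale the fleet so average utilization moves back toward the target.
    return max(1, math.ceil(current_servers * avg_utilization / target_utilization))

for load in (30, 60, 90, 150):    # average CPU % across a fleet of 4 servers
    print(load, "->", desired_servers(current_servers=4, avg_utilization=load))
# 30 -> 2 (scale in), 60 -> 4 (at target), 90 -> 6 (scale out), 150 -> 10
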
Benefits of Dynamic Data Centres:
1. Cost Efficiency: Pay-as-you-go pricing models reduce capital expenditure.
2. Agility: Quick adaptation to changing business needs.
3. Security: Advanced threat detection and mitigation strategies are integrated.
4. Enhanced Performance: Optimal resource utilization ensures high
performance.
Technologies Driving Dynamic Data Centres:
1. Cloud Platforms: AWS, Microsoft Azure, Google Cloud Platform.
2. Containerization: Docker, Kubernetes.
3. Hyperconverged Infrastructure (HCI): Combines compute, storage, and networking.
4. Artificial Intelligence(AI): For predictive maintenance and resource
optimization.
How Dynamic Data Can Be Managed
 Dynamic cloud servers help protect the integrity and performance parameters of the core data centre components.
 They include firewall protections to identify data centre security concerns and work accordingly to resolve them. Managing dynamic data storage concerns isn’t easy, so businesses often look for a service provider to meet their needs.

Hosting and Outsourcing in the Cloud

Hosting - Cloud hosting is a service that allows businesses to outsource their computing and storage resources to a cloud provider. In this model, the cloud provider manages the infrastructure, security, and maintenance, while the business can access its data, programs, and websites from remote servers.
Cloud hosting is a cheaper alternative to the traditional model of building and managing a company’s own data centres.
First, understand this real-life example of hosting, drawn from cooking:

You host a dinner party at your home. You buy the ingredients, prepare the food, cook
the dishes, and serve your guests. You’re responsible for managing every step of the
process, from sourcing to serving.

Why This is Hosting:


You use your resources (kitchen, tools) and maintain control over the process.
You decide what to cook, how to prepare it, and when to serve it.
It’s a hands-on approach where everything is managed by you or your immediate team
(family/friends).
Example of Outsourcing in Cooking
You hire a catering service to prepare and serve the food for your party. You only
specify the menu or type of cuisine, and the caterers handle everything else, from
cooking to presentation.

Why This is Outsourcing:


You delegate the responsibility for cooking to an external service provider (the
catering company).
The caterers bring their expertise, equipment, and staff to deliver a ready-to-serve
meal.
You focus on enjoying the party or managing other aspects (decorations), without
worrying about cooking logistics.
When to Choose Hosting?
 You have a capable in-house team to manage cloud resources.
 Your focus is on direct control over the environment.
 Budget flexibility is a concern (pay-as-you-go).
When to Choose Outsourcing?
 You lack in-house expertise or resources.
 Your organization prioritizes strategic focus over operational management.
 You need consistent, expert-managed cloud operations.

Hosting in Cloud Computing
 Hosting refers to the process of running applications, websites, or other services on cloud-based servers. Instead of using physical on-premises servers, businesses utilize cloud-hosted environments to manage workloads. That is, hosting means running your applications, websites, or services on servers provided by a cloud service provider (CSP).
 Cloud hosting works through the process of virtualization. As mentioned above with a
virtual private server, a virtual layer is created on the server where content and other
data can be stored. Those virtual layers can then be replicated on other servers on the
cloud computing network, spread throughout other different regions across the world.
 Cloud hosting allows website and application operators to add or remove resources
when necessary. That includes more RAM, storage space, or support services such as
security or data storage. Cloud hosting provides reliability and flexibility at a
manageable cost. Cloud hosting also provides robust data backup and disaster
recovery compared to shared or dedicated hosting on a single server.
Advantages of cloud hosting
 Scalability: Because cloud hosting does not rely on a single server to store and deliver content, it can be easily scaled to meet the demands of a website or application by spinning up more servers across the cloud network when usage increases.
 Flexibility: Cloud hosting allows for the freedom to use the appropriate solution that any situation requires by instantly provisioning the parameters of the virtual machines across the network.
 Cost: Cloud hosting often works on a pay-as-you-go model, meaning that costs can also be scaled up or down depending on usage. In contrast, traditional web hosting typically works on a monthly or annual flat fee.
 Security: Cloud providers offer robust physical and virtual security for the servers on their network, protecting website and application data from malicious actors. Cloud hosting security layers include firewalls, identity management and access control, Secure Sockets Layer (SSL) for transmitting data, and more.
Outsourcing in Cloud Computing
 Outsourcing involves contracting third-party providers to manage and operate cloud
services, infrastructure, or IT functions. This can include both on-premises IT and
cloud services.
 In cloud computing outsourcing, a company hires a third-party cloud service provider to handle its cloud computing projects. This can help companies reduce IT costs, increase agility and scalability, and focus on core competencies.
 Cloud outsourcing is the deployment of specific functions and processes to a cloud
outsourcing provider. Under this arrangement, the cloud service provider is
responsible for running and maintaining the managed cloud services. Here, the
customer pays an ongoing subscription fee in exchange for guaranteed availability,
security, updates, and technical support.
Advantages Of Outsourcing In Cloud Computing:
1. Cost savings: Outsourcing cloud computing can reduce the cost of buying, deploying, and maintaining physical IT infrastructure. It can also eliminate the costs of physical storage, such as heating, cooling, and electricity.
2. Security: Outsourcing cloud computing can provide stronger security than a business might be able to implement on its own. Cloud service providers often invest in security measures like firewalls, encryption, and intrusion detection systems.
3. Centralized data security: Cloud computing platforms can integrate security controls across hardware, firmware, identity, networking, data, and apps.
4. Enhanced compliance: Outsourcing cloud computing can help businesses stay compliant with cybersecurity regulations.
5. Flexibility: Cloud computing allows businesses to scale up or down as their needs change.
6. Centralized data management: Outsourcing cloud computing can improve productivity by allowing employees to access applications and servers from anywhere. It can also provide backups for storage in case of data loss.
7. Access to powerful tools: Outsourcing cloud computing gives businesses access to powerful tools without having to build their own infrastructure.
Types Of Cloud Service Models
There are 3 types of cloud computing service models. These service models are referred to as the cloud computing stack, because each one is built on top of the other. They are:
1. Infrastructure-as-a-Service (IaaS)
2. Platform-as-a-Service (PaaS)
3. Software-as-a-Service (SaaS)

Packed Software , OS &


Application Stack , Servers SaaS End Users

OS & Application Stack ,


Servers Storage Network PaaS Application
Developers

Servers Storage Network


IaaS Infrastructure &
Network Architects

[Figure 6]

Infrastructure-as-a-Service (IaaS) – Build pay-as-you-go IT infrastructure by renting servers, virtual machines, storage, networks, and operating systems from a cloud provider.

 That means ,this service delivers fundamental cloud computing services, such as
computing power, networking, and data storage, over the internet to users on-demand.
 With these virtualized services, there is no need to buy, store, and maintain physical
data servers and other equipment onsite.
 Instead, users simply rent access to the cloud infrastructure resources they need on a pay-as-you-go basis. Here the client is charged only for the computing power actually utilized, usually measured in CPU hours per month.
 Examples of IaaS: AWS, Cisco Metapod, Microsoft Azure, Google Compute Engine (GCE), Linode, and Rackspace.
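
As a hedged illustration of "renting" IaaS compute programmatically, the sketch below launches a virtual machine on AWS EC2 using the boto3 library. It assumes boto3 is installed and AWS credentials are configured; the AMI ID is a placeholder, not a real image.

# Launching a pay-as-you-go virtual machine on AWS EC2 (IaaS) via boto3.
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

response = ec2.run_instances(
    ImageId="ami-0123456789abcdef0",   # placeholder AMI ID (hypothetical)
    InstanceType="t3.micro",           # small, metered instance size
    MinCount=1,
    MaxCount=1,
)
print(response["Instances"][0]["InstanceId"])
# Billing starts when the instance runs and stops when you terminate it,
# which is exactly the pay-as-you-go model described above.
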
Platform-as-a-Service (PaaS) – Provides an environment for building, testing, and
deploying software applications without focusing on managing underlying infrastructure.
 This gives users access to a complete cloud platform, including hardware, software,
and infrastructure, so that they can develop, run, and manage applications without
investing in expensive, bulky, and inflexible onsite premises.
 It includes sub-categories like load balancers, firewalls, middleware, application servers, HTTP servers, runtimes, libraries, and integrated development environments (IDEs).
 PaaS is billed as an additional cost on top of the IaaS charges.
 Examples of PaaS: AWS Elastic Beanstalk, Windows Azure (App Service), Google App Engine, Apache Stratos, and Red Hat OpenShift.
Software-as-a-Service (SaaS) – Allows users to connect to and use cloud-based applications over the internet.
 Under this arrangement, the hosting, delivery, and maintenance of the software are
handled by the cloud service provider.
 It includes industry applications like business process automation, customer relationship management (CRM), enterprise resource planning (ERP), collaboration, and email marketing.
 The data for the app runs on a server on the network, not through an app on the user’s
computer.
 Software is usually sold via subscription.
 Examples of SaaS: Microsoft 365 productivity tools, Google Workspace (G Suite), Dropbox, Salesforce, Cisco WebEx, Concur, GoToMeeting, Shopify, MailChimp, and HubSpot.
Workload Patterns For The Cloud
In the general sense, a workload is the amount of time and computing resources a system or network takes to complete a task or generate a particular output. It refers to the total demand placed on the system by all users and processes at a given moment.

 In a cloud computing context, a workload is any service, application, or capability that consumes cloud-based resources: virtual machines, databases, applications, microservices, and nodes are all considered workloads.
 Workload patterns for cloud computing describe the various types of workloads that
organizations can run in the cloud. These patterns help in optimizing resource
utilization, improving performance, and reducing costs by aligning workloads with the
appropriate cloud services and architectures.
Workloads Can Be Classified On The Following Basis
1. On the basis of Cloud Deployment Model
2. On the basis of Cloud Native Technology
3. On the basis of Usage Patterns
4. On the basis of Resource Requirements
Classifying Workloads by Cloud Deployment Model
Each of these cloud deployment models provides different levels of control and
customization to organizations, and choosing the right one depends on the specific
requirements of the workloads being hosted.
According to the cloud deployment model, the 3 types of cloud workloads are:
1. Infrastructure as a Service (IaaS): IaaS is a cloud computing model where the cloud
provider offers virtualized computing resources, such as virtual machines (VMs),
storage, and networking, over the internet. IaaS is suitable for hosting and managing
infrastructure-level workloads, such as operating systems, databases, and storage.
2. Platform as a Service (PaaS): PaaS is a cloud computing model that provides a
platform for developing, running, and managing applications, without having to worry
about the underlying infrastructure. PaaS is suitable for hosting and managing
application-level workloads, such as web and mobile applications.
3. Software as a Service (SaaS): SaaS is a cloud computing model where the cloud
provider offers a complete software solution over the internet, typically on a
subscription basis. SaaS is suitable for hosting and managing software-level
workloads, such as email, customer relationship management (CRM), and human
resource management (HRM) systems.
Classifying Workloads by Cloud Native Technology
There are several technical approaches commonly used to run workloads in a cloud
environment. These include:
1. Virtual Machines (VMs): A software-based emulation of a physical server or
computer that allows multiple operating systems to run on a single physical host.
Cloud providers offer VMs as a service, which enables users to create, run, and
manage VMs in the cloud.
2. Containers: A lightweight and portable way to package and deploy applications.
Containers provide isolation between applications and their dependencies, allowing
them to run consistently across different environments.

3. Container as a Service (CaaS): A cloud-based service that provides a fully managed
container environment. CaaS platforms abstract the underlying infrastructure and
provide developers with an easy-to-use interface for deploying and managing
containers. Popular CaaS platforms are AWS Fargate, Azure Container Instances, and
Google Cloud Run.
4. Serverless computing: Serverless computing, also known as Function as a Service (FaaS), allows developers to write and deploy code without worrying about the underlying infrastructure. Serverless platforms automatically scale up or down to handle traffic spikes, and users pay only for the computing resources used while the function is running.
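
To make FaaS concrete, here is a minimal function written against the AWS Lambda Python handler signature. The event shape is an illustrative assumption; real events depend on the trigger (HTTP request, queue message, and so on).

# A minimal FaaS function using the AWS Lambda Python handler signature.
import json

def handler(event, context):
    # Read input from the triggering event; "name" is an assumed field.
    name = event.get("name", "world")
    return {
        "statusCode": 200,
        "body": json.dumps({"message": "hello, " + name}),
    }

# Locally you could exercise it as: print(handler({"name": "cloud"}, None))
# The provider runs, scales, and bills this per invocation; there is no server to manage.
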
Classifying Workloads by Usage Patterns
There are several different types of cloud workloads based on usage patterns and resource requirements. Cloud workloads can be broadly categorized by usage pattern as follows (a small classification sketch follows the list):
1. Static workloads: These are applications and services that have a consistent, predictable workload and typically run 24/7. Examples: web servers, email services.
2. Periodic workloads: These are applications that have regular, recurring usage
patterns, such as data backups or batch processing.
3. Inconsistent workloads: These are applications that have varying and unpredictable
workloads, such as gaming platforms, e-Commerce sites, or applications that
experience spikes in traffic.
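
As a rough sketch, the Python snippet below distinguishes these three patterns from a series of hourly request counts using simple variability heuristics. The thresholds are illustrative assumptions; real classification would use proper time-series analysis.

# Heuristic classification of a workload from hourly request counts.
import statistics

def classify(hourly_requests):
    mean = statistics.mean(hourly_requests)
    cv = statistics.pstdev(hourly_requests) / mean   # relative variability
    if cv < 0.1:
        return "static"         # flat, predictable 24/7 load
    if max(hourly_requests) > 3 * mean:
        return "inconsistent"   # unpredictable spikes
    return "periodic"           # regular, recurring swings

print(classify([100, 98, 102, 101]))           # static
print(classify([50, 200, 50, 200, 50, 200]))   # periodic
print(classify([10, 12, 11, 500, 9, 10]))      # inconsistent
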
Classifying Workloads by Resource Requirements
Cloud workloads are also classified by their resource requirements:
1. Standard compute workloads: These workloads have a general-purpose resource
requirement and can include tasks such as web hosting, software development, and
test and development environments.
2. High CPU workloads: These require powerful central processing units (CPUs) for
tasks such as scientific simulations, data analytics, and batch processing.
3. High GPU workloads: These require powerful graphics processing units (GPUs) for
demanding tasks such as computer-aided design (CAD), scientific simulations, and
video rendering.
4. High performance computing (HPC) workloads: These are workloads that require
massive parallel computing, which is supported by large clusters of cloud-based
machines.
5. Storage-optimized workloads: These require large amounts of storage capacity and
high input/output (I/O) performance for tasks such as big data analytics, content
management, and backups.
6. Memory-intensive workloads: These require large amounts of memory for tasks
such as in-memory databases, real-time analytics, and caching.

Big Data And Cloud Computing
Big data refers to data that is huge in size and growing rapidly over time. Big data includes structured, unstructured, and semi-structured data. It cannot be stored and processed with traditional data management tools; it needs specialized big data management tools.
 Big data and cloud computing are two technologies that work together to store,
process, and analyse large amounts of data. Big Data in cloud computing refers to the
practice of storing, processing, and analysing massive volumes of data using cloud-
based platforms. Cloud computing provides the scalability, flexibility, and cost-
efficiency needed to handle the high velocity, variety, and volume of Big Data.
 Cloud computing and big data are among the most widely used technologies in today’s information technology world. With these two technologies, business, education, healthcare, research & development, and other fields are growing rapidly and gaining various advantages that help them expand.
Characteristics of Big Data
 Volume: Large amounts of data from diverse sources.
 Variety: Diverse data types (structured, unstructured, semi-structured).
 Velocity: Rapid data generation and real-time processing.
 Veracity: Ensuring data quality and trustworthiness.
 Value: Deriving actionable insights from data.
Key Components of Big Data Architecture
 Data Sources
Types of data sources:
Structured: Databases, data warehouses.
Semi-Structured: Logs, JSON, XML.
Unstructured: Videos, images, audio, text.
Examples: IoT devices, social media platforms, business applications.

 Data Ingestion Layer
Work: Collect and import data from multiple sources into the system.
Techniques:
Batch Ingestion: Periodic data uploads. Tools: Apache Sqoop, AWS Data Pipeline.
Stream Ingestion: Real-time data collection. Tools: Apache Kafka, Amazon Kinesis, Azure Event Hubs.

 Data Storage Layer
Work: Store ingested data for processing and analysis.
Options:
Data Lakes: Store raw, unprocessed data. Tools: Amazon S3, Azure Data Lake, Hadoop HDFS.
Data Warehouses: Optimized for structured, queryable data. Tools: Snowflake, Google BigQuery, Amazon Redshift.

 Data Processing Layer
Work: Transform raw data into usable formats.
Processing types:
Batch Processing: Used for large-scale data transformations. Tools: Apache Hadoop, Apache Spark.
Real-Time/Stream Processing: Used for low-latency processing for immediate insights. Tools: Apache Flink, Apache Storm, AWS Lambda.

 Data Analysis Layer
Work: Perform analytics and generate insights.
Analytics types:
Descriptive Analytics: Summarize historical data.
Predictive Analytics: Use machine learning for forecasting.
Prescriptive Analytics: Recommend actions based on insights.
Tools: Analytics platforms (Databricks, Google BigQuery); machine learning (TensorFlow, PyTorch, AWS SageMaker).

 Data Visualization Layer
Purpose: Present data insights through intuitive dashboards and reports.
Tools: Tableau, Microsoft Power BI, Grafana; built-in tools such as AWS QuickSight and Google Data Studio.

 Data Security Layer
Work: Protect data from unauthorized access and ensure compliance.
Used for:
Encryption: Data encryption at rest and in transit.
Access Control: Role-based access control (RBAC).
Compliance: GDPR, HIPAA, or industry-specific regulations.
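
To illustrate the batch-processing layer, here is a minimal map/reduce-style word count in plain Python. Engines such as Hadoop and Spark apply the same split, transform, and aggregate pattern across a whole cluster; the sample input here is made up.

# A map/reduce-style batch job in miniature: count words across records.
from collections import Counter

def batch_word_count(lines):
    counts = Counter()
    for line in lines:              # "map" step: transform each record
        counts.update(line.lower().split())
    return counts                   # "reduce" step: aggregated result

sample = ["big data in the cloud", "the cloud stores big data"]
print(batch_word_count(sample).most_common(3))
# [('big', 2), ('data', 2), ('the', 2)]
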

Cloud Computing Architecture

Cloud computing architecture refers to the structure and components that enable the delivery
of cloud services. It encompasses the front-end and back-end systems. Cloud computing
architecture is a combination of service-oriented architecture and event-driven architecture.
 Components of Cloud Computing Architecture
1. Front-End: The front-end is the client side of the architecture. It includes user interfaces (web browsers, mobile apps, or desktop applications) that allow users to access cloud services, and end-user devices like smartphones, tablets, laptops, and desktops.
2. Back-End: The back-end is the provider side, which manages and delivers the cloud services. It includes:
 Servers: Hosts for applications and databases.
 Storage Systems: Cloud storage solutions that store data securely and reliably.
 Databases: Relational and non-relational databases for managing structured and unstructured data.
 Virtualization: Enables resource abstraction and efficient utilization.
 Networking: Connects cloud services with users and integrates components in the cloud.
 Middleware: Software that enables communication between different applications and services.
 Management Tools: For monitoring, scaling, and automating cloud resources.

[Figure 7]

Cloud Deployment Models

The cloud works as your virtual computing environment, with a choice of deployment model depending on how much data you want to store and who has access to the infrastructure.

 Cloud deployment models describe the environment in which cloud services are
hosted and delivered to users. Each model defines the ownership, accessibility, and
resource-sharing characteristics of the cloud infrastructure. Organizations choose a
deployment model based on their business requirements, regulatory needs, and
technical capabilities.

The 5 Cloud Deployment Models Are:
1. Public Cloud
2. Private Cloud
3. Hybrid Cloud
4. Multi Cloud
5. Community Cloud

1. Public Cloud : A public cloud is owned and operated by third-party cloud service
providers. Resources are shared among multiple customers (multi-tenancy) over the internet.

 Resources are shared among multiple organizations.
 Example: AWS, Azure, Google Cloud.

2. Private Cloud : A private cloud is dedicated to a single organization. It can be hosted on-
premises or managed by a third party in a private environment.

 Offers greater control and customization.
 Example: On-premises infrastructure with OpenStack, VMware Private Cloud.

3. Hybrid Cloud : A hybrid cloud combines public and private clouds, allowing data and applications to move between them. This model provides flexibility and scalability while maintaining control over sensitive data.

 Enables workload portability and data sharing between the two.
 Example: AWS Outposts, Azure Arc.

4. Multi-Cloud : A multi-cloud strategy uses services from multiple public cloud providers
to avoid vendor lock-in and optimize performance.

 Utilizes services from multiple cloud providers.
 Provides flexibility and avoids vendor lock-in.
 Example: Combining AWS, Azure, and GCP for specific workloads.

5. Community Cloud : A community cloud is shared by multiple organizations with similar requirements, such as regulatory needs or mission goals. It can be managed internally or by a third party. It is generally used by government agencies sharing infrastructure and research institutions collaborating on projects.

 Shared resources among a specific group.
 Cost-effective for organizations with common interests.
 Enhanced collaboration and data sharing.
 Examples: Health information exchanges, government clouds.