
Computing

A computing paradigm refers to a fundamental approach or model for performing computation, organizing data, designing higher-level systems, and solving problems using computers.

Characteristics of Computing Paradigms


• It encompasses the principles, techniques, methodologies, and architectures that guide the
design, development, and deployment of computational systems.
• Computing paradigms can vary widely based on factors such as the underlying hardware,
programming models, and problem-solving strategies.
• Each computing paradigm offers different advantages, trade-offs, and suitability for specific
types of problems and applications.
• The choice of paradigm depends on factors such as the nature of the problem, performance
requirements, scalability, and ease of development.
• Many modern computing systems and applications combine multiple paradigms to use their
respective strengths and address complex challenges.

Types of High-Performance Computing Paradigms


• High-performance computing (HPC) refers to the use of advanced computing techniques
and technologies to solve complex problems and perform data-intensive tasks at speeds
beyond what a conventional computer could achieve.
• Different high-performance computing paradigms arise from various methodologies, principles, and technologies used to solve complex computational problems.
• Each computing paradigm has its strengths, weaknesses, and specific use cases. The choice of paradigm depends on the nature of the problem to be solved, performance requirements, and the available technology.
1. Distributed Computing:
• Distributed computing is defined as a type of computing in which multiple computer systems work on a single problem. All the computer systems are linked together, and the problem is divided into sub-problems, each of which is solved by a different computer system.
• The goal of distributed computing is to increase the performance and efficiency of the
system and ensure fault tolerance.
• Distributed computing involves the use of multiple computers connected to a network.
• Tasks are distributed across these distributed computers, and they work collaboratively to
achieve a common goal.
• Network computing, also known as distributed computing, refers to the use of
interconnected computers and resources to perform tasks collaboratively over a network.
• This infrastructure can include local area networks (LANs), wide area networks (WANs),
and the Internet.
• The client-server model is a common architecture in network computing (a minimal client-server sketch follows this list).
• Network computing allows users to access resources and applications remotely.
• Resources such as processing power, storage, and applications are distributed across
multiple computers within the network. This enables users to access and utilize resources
located on different machines.
• Network computing facilitates collaboration among users by enabling them to share files,
work on documents simultaneously, and communicate in real time. Collaboration tools, such
as email, video conferencing, and collaborative document editing, are common in networked
environments.
• Network computing systems can be easily scaled by adding more computers or resources to
the network. This scalability allows organizations to adapt to changing demands and
accommodate growing workloads.
• In network computing, computers are connected to share resources, exchange information,
and work together to achieve common goals.
• This paradigm allows for the efficient use of resources, improved collaboration, and the
distribution of computational tasks across multiple devices.
• The evolution of network computing has contributed to the development of various
technologies, including cloud computing, edge computing, and distributed computing
systems.

• Cloud computing is an example of distributed computing.
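
The following is a minimal sketch of the client-server pattern described above, using only Python's standard xmlrpc module: each worker machine exposes a function that solves one sub-problem, and a coordinator splits the input and sends one chunk to each worker. The hostnames, port, and the square_chunk task are illustrative assumptions, not part of any particular system.

import math
from xmlrpc.client import ServerProxy
from xmlrpc.server import SimpleXMLRPCServer

def square_chunk(numbers):
    # Worker-side task: solve one sub-problem (square a chunk of numbers).
    return [n * n for n in numbers]

def run_worker(port=9000):
    # Each worker machine runs a small server exposing the sub-problem solver.
    server = SimpleXMLRPCServer(("0.0.0.0", port), allow_none=True)
    server.register_function(square_chunk)
    server.serve_forever()

def run_coordinator(worker_urls, data):
    # The coordinator splits the problem into chunks and sends one chunk to each worker.
    chunk_size = math.ceil(len(data) / len(worker_urls))
    chunks = [data[i:i + chunk_size] for i in range(0, len(data), chunk_size)]
    results = []
    for url, chunk in zip(worker_urls, chunks):
        results.extend(ServerProxy(url).square_chunk(chunk))
    return results

# On each worker machine: run_worker()
# On the coordinator:
#   run_coordinator(["http://worker1:9000", "http://worker2:9000"], list(range(100)))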

2. Parallel Computing:
• Parallel computing is defined as a type of computing where multiple processors are used simultaneously. Here a problem is broken into sub-problems, which are further broken down into instructions; the instructions from each sub-problem are executed concurrently on different processors.
• Parallel computing is a computing paradigm where multiple computations or processes are
executed simultaneously to solve a single problem, typically to improve performance,
efficiency, and scalability.
• In parallel computing, multiple processors or multi-cores work simultaneously on different
parts of a problem.
• In parallel computing, tasks are divided into smaller subtasks that can be executed concurrently on multiple processing units or cores, allowing for faster execution and higher throughput compared to sequential processing (see the sketch after this list).
• Parallel computing is used in various domains, including scientific simulations, data
analytics, image and signal processing, artificial intelligence, and computer graphics.
• Examples of parallel computing applications include weather forecasting, molecular
dynamics simulations, genome sequencing, deep learning training, and rendering complex
3D graphics.

• The goal of parallel computing is to save time and provide concurrency.
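
As a concrete illustration of dividing a task into subtasks that run concurrently on multiple cores, the sketch below uses Python's standard concurrent.futures module to compute a sum of squares in parallel processes. The work function and input size are illustrative assumptions.

from concurrent.futures import ProcessPoolExecutor

def partial_sum(bounds):
    # One subtask: sum of squares over a sub-range of the input.
    start, stop = bounds
    return sum(n * n for n in range(start, stop))

def parallel_sum_of_squares(n, workers=4):
    # Split the range [0, n) into one sub-range per worker, run the subtasks
    # in parallel processes, then combine the partial results.
    step = n // workers
    bounds = [(i * step, (i + 1) * step if i < workers - 1 else n)
              for i in range(workers)]
    with ProcessPoolExecutor(max_workers=workers) as pool:
        return sum(pool.map(partial_sum, bounds))

if __name__ == "__main__":
    print(parallel_sum_of_squares(10_000_000))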

3. Grid Computing:
• Grid computing is defined as a type of computing that constitutes a network of computers working together to perform tasks that may be difficult for a single machine to handle. All the computers on that network work under the same umbrella and are termed a virtual supercomputer.
• The tasks they work on either demand high computing power or involve large data sets.
• All communication between the computer systems in grid computing is done over the “data grid”.
• The goal of grid computing is to solve large computational problems in less time and improve productivity.
• Grid computing is a distributed computing paradigm that manages the computational
resources of multiple networked computers or clusters to solve large-scale computational
problems.
• Grid computing involves the coordination of geographically dispersed resources to work on
a common task.
• Grid computing systems consist of multiple nodes or sites interconnected by high-speed
networks, such as the Internet or dedicated communication links. Each node in the grid can
contribute its computational power and resources to the collective pool, creating a
distributed computing infrastructure.
• Grid computing relies on middleware software to manage resource discovery, allocation, scheduling, security, and communication within the grid. Grid middleware provides a set of services and APIs (Application Programming Interfaces) that abstract the underlying infrastructure and facilitate the development and execution of grid applications (a toy scheduling sketch follows this list).
• It typically involves pooling together computing resources from multiple locations to solve
large-scale problems.
• In grid computing, resources such as processing power, storage, and software applications
are shared across geographically distributed sites, allowing organizations to leverage idle
resources and collaborate on complex tasks.
• Grid computing facilitates collaborative research and scientific discovery by enabling
researchers and organizations to share data, computational resources, and expertise across
institutional boundaries.
• Examples of grid computing projects include the Open Science Grid (OSG), European Grid
Infrastructure (EGI), Worldwide LHC Computing Grid (WLCG) for high-energy physics,
and various academic and industrial grid initiatives.
• Grid computing offers several benefits, including increased computational power,
scalability, fault tolerance, and cost-effectiveness.
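
To make the middleware's role more concrete, the toy sketch below shows one of its responsibilities: matching jobs to grid nodes that currently have enough free capacity. The node names, job requirements, and greedy matching rule are illustrative assumptions and stand in for real grid schedulers.

from dataclasses import dataclass

@dataclass
class Node:
    name: str
    free_cores: int
    free_memory_gb: int

@dataclass
class Job:
    name: str
    cores: int
    memory_gb: int

def schedule(jobs, nodes):
    # Greedily assign each job to the first node with enough free capacity.
    placement = {}
    for job in jobs:
        for node in nodes:
            if node.free_cores >= job.cores and node.free_memory_gb >= job.memory_gb:
                node.free_cores -= job.cores
                node.free_memory_gb -= job.memory_gb
                placement[job.name] = node.name
                break
        else:
            placement[job.name] = None  # no node can run this job right now
    return placement

if __name__ == "__main__":
    nodes = [Node("site-a", 16, 64), Node("site-b", 8, 32)]
    jobs = [Job("simulation", 12, 48), Job("analysis", 6, 16)]
    print(schedule(jobs, nodes))  # {'simulation': 'site-a', 'analysis': 'site-b'}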

4. Utility Computing:
• Utility computing is defined as the type of computing where the service provider provides the needed resources and services to the customer and charges them based on actual usage of those resources, rather than at a fixed rate (a toy metering sketch follows these bullets).
• Utility computing involves renting resources such as hardware and software on demand, as and when they are required.
• The goal of utility computing is to increase the utilization of resources and be more cost-efficient.
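
The toy metering sketch below illustrates the pay-per-use idea: the provider records usage of each resource and bills at a unit rate rather than a flat fee. The rates and usage figures are illustrative assumptions, not real provider pricing.

RATES = {
    "cpu_hours": 0.05,          # currency units per CPU-hour
    "storage_gb_month": 0.02,   # per GB stored per month
    "data_transfer_gb": 0.01,   # per GB transferred
}

def monthly_bill(usage):
    # Charge only for what was actually consumed.
    return sum(usage.get(item, 0) * rate for item, rate in RATES.items())

if __name__ == "__main__":
    usage = {"cpu_hours": 300, "storage_gb_month": 500, "data_transfer_gb": 120}
    print(f"Amount due: {monthly_bill(usage):.2f}")  # 300*0.05 + 500*0.02 + 120*0.01 = 26.20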
5. Autonomous Computing:
• Autonomous computing represents a shift towards more intelligent, adaptive, and self-
sufficient systems that can operate autonomously in dynamic and complex environments. By
reducing the need for human intervention and enabling systems to adapt and evolve on their
own, autonomous computing promises to improve efficiency, reliability, and security across
a wide range of applications and industries.
• Autonomous computing involves self-managing systems that can adapt, optimize, and heal
themselves without human intervention.
• This paradigm is often associated with self-driving systems and autonomous agents.
• Autonomous computing refers to a computing paradigm in which systems and applications
are designed to operate and manage themselves with minimal human intervention.
• Autonomous computing systems are capable of self-configuration, self-optimization, self-
healing, and self-protection, allowing them to adapt to changing conditions, handle failures,
and optimize performance autonomously.
• Autonomous computing systems can configure themselves automatically based on
predefined policies, environmental conditions, or user preferences. This includes tasks such
as resource allocation, network configuration, and software installation.
• Autonomous computing systems continuously monitor their performance and make
adjustments to optimize resource utilization, throughput, and efficiency. This may involve
dynamically adjusting parameters, tuning algorithms, or reallocating resources to meet
changing demands.
• Autonomous computing systems have built-in mechanisms to detect and recover from failures or disruptions automatically. This includes fault detection, fault isolation, and fault recovery techniques that enable systems to maintain availability and reliability in the face of failures (a minimal self-healing loop is sketched after this list).
• Autonomous computing systems leverage machine learning and artificial intelligence (AI)
techniques to make decisions, learn from experience, and adapt to changing environments
autonomously. This enables systems to improve their performance over time and anticipate
future challenges.
• Autonomous computing systems rely on policy-based management to define rules,
constraints, and objectives that guide their behavior. Policies specify desired outcomes,
constraints, and thresholds, allowing systems to make autonomous decisions while adhering
to organizational policies and requirements.
• Autonomous computing has applications in various domains, including cloud computing,
data centers, IoT (Internet of Things), autonomous vehicles, robotics, and smart
infrastructure.
• Examples include self-driving cars, autonomous drones, self-healing networks, and
automated cloud management platforms.
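
The sketch below illustrates a self-healing control loop of the kind described above: the system periodically measures its own health and triggers a recovery action when a policy threshold is violated. The health metric, restart action, and thresholds are illustrative assumptions standing in for real monitoring and orchestration hooks.

import random
import time

POLICY = {"max_error_rate": 0.05, "check_interval_s": 5}

def measure_error_rate():
    # Placeholder for a real metric source (logs, a monitoring agent, etc.).
    return random.uniform(0.0, 0.1)

def restart_service():
    # Placeholder for a real recovery action (restart a process, reschedule
    # a container, fail over to a replica, ...).
    print("policy violated -> restarting service")

def control_loop(iterations=3):
    for _ in range(iterations):
        error_rate = measure_error_rate()
        if error_rate > POLICY["max_error_rate"]:
            restart_service()  # self-healing: detect the fault and recover
        time.sleep(POLICY["check_interval_s"])

if __name__ == "__main__":
    control_loop()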

6. Cloud Computing:
• The cloud is defined as the use of someone else’s servers to host, process, or store data.
• Cloud computing is defined as the delivery of on-demand computing services over the Internet on a pay-as-you-go basis (a small usage sketch follows this list). It is widely distributed, network-based, and commonly used for storage.
• The types of cloud are public, private, hybrid, and community, and some cloud providers are Google Cloud, AWS, Microsoft Azure, and IBM Cloud.
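
The small sketch below shows what consuming an on-demand cloud service can look like in code, using the AWS SDK for Python (boto3) to store a file in S3 object storage. The bucket name and file path are illustrative assumptions, and configured AWS credentials plus the boto3 package are assumed.

import boto3

def upload_report(path="report.csv", bucket="example-bucket"):
    s3 = boto3.client("s3")                          # provider-managed storage service
    s3.upload_file(path, bucket, f"reports/{path}")  # pay only for what is actually stored
    # List what the bucket currently holds.
    for item in s3.list_objects_v2(Bucket=bucket).get("Contents", []):
        print(item["Key"])

if __name__ == "__main__":
    upload_report()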
