2. Parallel Computing
• Parallel computing is defined as a type of computing in which multiple processors are
used simultaneously. A problem is broken into sub-problems, which are further broken
down into instructions; the instructions from each sub-problem are executed concurrently
on different processors.
• Parallel computing is a computing paradigm where multiple computations or processes are
executed simultaneously to solve a single problem, typically to improve performance,
efficiency, and scalability.
• In parallel computing, multiple processors or cores work simultaneously on different
parts of a problem.
• In parallel computing, tasks are divided into smaller subtasks that can be executed
concurrently on multiple processing units or cores, allowing faster execution and higher
throughput than sequential processing (a minimal sketch follows this list).
• Parallel computing is used in various domains, including scientific simulations, data
analytics, image and signal processing, artificial intelligence, and computer graphics.
• Examples of parallel computing applications include weather forecasting, molecular
dynamics simulations, genome sequencing, deep learning training, and rendering complex
3D graphics.
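A minimal sketch of the idea in Python: a large summation is split into sub-problems (chunks) that worker processes execute concurrently. The chunk size, worker count, and the summation task itself are illustrative choices, not part of any standard.

from multiprocessing import Pool

def partial_sum(bounds):
    # Sub-problem: sum the integers in [start, stop).
    start, stop = bounds
    return sum(range(start, stop))

if __name__ == "__main__":
    n = 10_000_000
    workers = 4                                   # e.g. one worker per core
    step = n // workers
    chunks = [(i * step, (i + 1) * step) for i in range(workers)]

    with Pool(processes=workers) as pool:
        # Each chunk runs concurrently in a separate process.
        total = sum(pool.map(partial_sum, chunks))

    print(total == n * (n - 1) // 2)              # True: matches the closed form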
Grid Computing
• Grid computing is defined as a type of computing that constitutes a network of
computers working together to perform tasks that may be difficult for a single machine to
handle. All the computers on that network work under the same umbrella and are
collectively termed a virtual supercomputer.
• The tasks they work on typically demand high computing power, involve large data sets,
or both.
• All communication between the computer systems in grid computing is done on the “data
grid”.
• The goal of grid computing is to solve computationally intensive problems in less time
and improve productivity.
• Grid computing is a distributed computing paradigm that manages the computational
resources of multiple networked computers or clusters to solve large-scale computational
problems.
• Grid computing involves the coordination of geographically dispersed resources to work on
a common task.
• Grid computing systems consist of multiple nodes or sites interconnected by high-speed
networks, such as the Internet or dedicated communication links. Each node in the grid can
contribute its computational power and resources to the collective pool, creating a
distributed computing infrastructure.
• Grid computing relies on middleware software to manage resource discovery, allocation,
scheduling, security, and communication within the grid. Grid middleware provides a set of
services and APIs (Application Programming Interfaces) that abstract the underlying
infrastructure and facilitate the development and execution of grid applications (a
scheduling sketch follows this list).
• It typically involves pooling together computing resources from multiple locations to solve
large-scale problems.
• In grid computing, resources such as processing power, storage, and software applications
are shared across geographically distributed sites, allowing organizations to leverage idle
resources and collaborate on complex tasks.
• Grid computing facilitates collaborative research and scientific discovery by enabling
researchers and organizations to share data, computational resources, and expertise across
institutional boundaries.
• Examples of grid computing projects include the Open Science Grid (OSG), European Grid
Infrastructure (EGI), Worldwide LHC Computing Grid (WLCG) for high-energy physics,
and various academic and industrial grid initiatives.
• Grid computing offers several benefits, including increased computational power,
scalability, fault tolerance, and cost-effectiveness.
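As a hedged illustration of what grid middleware does, the sketch below simulates resource discovery, allocation, and scheduling in plain Python: sites register their capacity with a scheduler, and each job is placed on the node with the most free cores. The node names, capacities, and job names are hypothetical; real middleware in systems like OSG or EGI is far more elaborate.

from dataclasses import dataclass, field

@dataclass
class Node:
    name: str
    cores: int
    used: int = 0

    @property
    def free(self):
        return self.cores - self.used

@dataclass
class GridScheduler:
    nodes: list = field(default_factory=list)

    def register(self, node):
        # Resource discovery: a site joins the grid's resource pool.
        self.nodes.append(node)

    def submit(self, job, cores_needed):
        # Allocation/scheduling: pick the node with the most free cores.
        candidates = [n for n in self.nodes if n.free >= cores_needed]
        if not candidates:
            raise RuntimeError(f"no node can currently run {job}")
        best = max(candidates, key=lambda n: n.free)
        best.used += cores_needed
        return f"{job} -> {best.name}"

grid = GridScheduler()
grid.register(Node("site-a", cores=64))          # hypothetical sites
grid.register(Node("site-b", cores=128))
print(grid.submit("genome-align", 32))           # -> site-b
print(grid.submit("monte-carlo", 48))            # -> site-b (96 cores free)
print(grid.submit("lattice-sim", 60))            # -> site-a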
Utility Computing
• Utility computing is defined as the type of computing where the service provider supplies
the needed resources and services to the customer and charges based on actual usage, as
per requirement and demand, rather than at a fixed rate.
• Utility computing involves renting resources such as hardware and software on demand,
as requirements dictate.
• The goal of utility computing is to increase resource utilization and be more cost-
efficient (a metered-billing sketch follows).
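A minimal metered-billing sketch in Python, assuming hypothetical unit rates: the bill is computed from recorded usage rather than a flat fee, so two customers with different consumption pay different amounts.

RATES = {
    "cpu_hours":  0.05,   # $ per CPU-hour (assumed rate)
    "storage_gb": 0.02,   # $ per GB-month (assumed rate)
    "egress_gb":  0.09,   # $ per GB transferred out (assumed rate)
}

def monthly_bill(usage):
    # Pay-as-you-go: sum of (amount used x unit rate) per resource.
    return sum(RATES[resource] * amount for resource, amount in usage.items())

light = {"cpu_hours": 100,  "storage_gb": 50,   "egress_gb": 10}
heavy = {"cpu_hours": 5000, "storage_gb": 2000, "egress_gb": 300}
print(f"light user: ${monthly_bill(light):.2f}")   # $6.90
print(f"heavy user: ${monthly_bill(heavy):.2f}")   # $317.00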
Autonomous Computing
• Autonomous computing represents a shift towards more intelligent, adaptive, and self-
sufficient systems that can operate autonomously in dynamic and complex environments. By
reducing the need for human intervention and enabling systems to adapt and evolve on their
own, autonomous computing promises to improve efficiency, reliability, and security across
a wide range of applications and industries.
• Autonomous computing involves self-managing systems that can adapt, optimize, and heal
themselves without human intervention.
• This paradigm is often associated with self-driving systems and autonomous agents.
• Autonomous computing refers to a computing paradigm in which systems and applications
are designed to operate and manage themselves with minimal human intervention.
• Autonomous computing systems are capable of self-configuration, self-optimization, self-
healing, and self-protection, allowing them to adapt to changing conditions, handle failures,
and optimize performance autonomously.
• Autonomous computing systems can configure themselves automatically based on
predefined policies, environmental conditions, or user preferences. This includes tasks such
as resource allocation, network configuration, and software installation.
• Autonomous computing systems continuously monitor their performance and make
adjustments to optimize resource utilization, throughput, and efficiency. This may involve
dynamically adjusting parameters, tuning algorithms, or reallocating resources to meet
changing demands.
• Autonomous computing systems have built-in mechanisms to detect and recover from
failures or disruptions automatically. This includes fault detection, fault isolation, and fault
recovery techniques that enable systems to maintain availability and reliability in the face of
failures (see the self-healing loop sketch after this list).
• Autonomous computing systems leverage machine learning and artificial intelligence (AI)
techniques to make decisions, learn from experience, and adapt to changing environments
autonomously. This enables systems to improve their performance over time and anticipate
future challenges.
• Autonomous computing systems rely on policy-based management to define rules,
constraints, and objectives that guide their behavior. Policies specify desired outcomes,
constraints, and thresholds, allowing systems to make autonomous decisions while adhering
to organizational policies and requirements.
• Autonomous computing has applications in various domains, including cloud computing,
data centers, IoT (Internet of Things), autonomous vehicles, robotics, and smart
infrastructure.
• Examples include self-driving cars, autonomous drones, self-healing networks, and
automated cloud management platforms.
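The self-* properties above amount to a closed control loop, often described as monitor-analyze-plan-execute. A hedged Python sketch of such a self-healing loop, with a hypothetical service name, simulated failures, and an assumed policy threshold:

import random

POLICY = {"max_failures": 3}     # policy-based management: a simple rule

def probe(service):
    # Monitor: health check; fails randomly here to simulate faults.
    return random.random() > 0.3

def heal(service, failures):
    # Analyze/plan/execute: restart once failures exceed the policy.
    if failures >= POLICY["max_failures"]:
        print(f"{service}: threshold reached, restarting (self-healing)")
        return 0                 # recovery resets the failure count
    return failures

failures = 0
for tick in range(20):           # the autonomic control loop
    if probe("payments-api"):
        failures = 0
    else:
        failures += 1
        print(f"tick {tick}: probe failed ({failures} consecutive)")
    failures = heal("payments-api", failures)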
Cloud Computing
• Cloud is defined as the use of someone else’s servers to host, process, or store data.
• Cloud computing is defined as the delivery of on-demand computing services over the
Internet on a pay-as-you-go basis. It is widely distributed, network-based, and commonly
used for storage.
• The types of cloud are public, private, hybrid, and community, and some cloud providers
are Google Cloud, AWS, Microsoft Azure, and IBM Cloud.