Introduction to Concurrency Lec2
E-OGODO (MRS)
Introduction To Concurrency
Principles of Concurrency
Concurrency Mechanisms
Concurrency vs Parallelism
Advantages of Concurrency
Concurrency in operating systems refers to the ability of an
operating system to handle multiple tasks or processes at the same
time. Concurrency means making progress on multiple tasks in the
same period of time, but not necessarily executing them
simultaneously.
With the increasing demand for high performance computing,
concurrency has become a critical aspect of modern computing
systems.
Operating systems that support concurrency can make progress on
many tasks at once, leading to better resource utilization,
improved responsiveness, and an enhanced user experience.
Concurrency is essential in modern operating systems due to the
increasing demand for multitasking, real-time processing, and
parallel computing. It is used in a wide range of applications,
including web servers, databases, scientific simulations, and
multimedia processing.
However, concurrency also introduces new challenges, such as race
conditions, deadlocks, and starvation, which are discussed later in
this lecture.
A system is said to be concurrent if it can support two or more
actions in progress at the same time, while a system is said to be
parallel if it can support two or more actions executing
simultaneously.
An application can be concurrent but not parallel, which means that
it processes more than one task at the same time, but no two tasks
are executing at the same time instant.
An application can be parallel but not concurrent, which means that
it processes multiple sub-tasks of a single task on a multi-core
CPU at the same time.
The principles of concurrency in operating
systems are designed to ensure that multiple
processes or threads can execute efficiently and
effectively, without interfering with each other or
causing deadlock.
Interleaving
Synchronization
Mutual exclusion
Deadlock avoidance
Process or thread coordination
Resource allocation
Interleaving refers to the interleaved execution of
multiple processes or threads. The operating system
uses a scheduler to determine which process or thread
to execute at any given time. Interleaving allows for
efficient use of CPU resources and ensures that all
processes or threads get a fair share of CPU time.
Synchronization refers to the coordination of
multiple processes or threads to ensure that
they do not interfere with each other.
This is done through the use of
synchronization primitives such as locks,
semaphores, and monitors.
These primitives allow processes or threads
to coordinate access to shared resources
such as memory and I/O devices.
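As an illustration, the following minimal Python sketch uses a
counting semaphore to let at most three threads use a shared
resource at once. The names (worker, slots) and the limit of three
are illustrative choices, not from the lecture:

import threading
import time

slots = threading.Semaphore(3)   # at most 3 threads may hold the resource

def worker(n):
    with slots:                  # acquire a slot; block if none is free
        print(f"worker {n} is using the shared device")
        time.sleep(0.1)          # simulate some I/O work
    # the slot is released automatically when the with-block exits

threads = [threading.Thread(target=worker, args=(i,)) for i in range(10)]
for t in threads:
    t.start()
for t in threads:
    t.join()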
Mutual exclusion refers to the principle of ensuring that only
one process or thread can access a shared resource at a time.
This is typically implemented using locks or semaphores to
ensure that multiple processes or threads do not access a
shared resource simultaneously.
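For example, a lock can enforce mutual exclusion around a shared
counter. This is a minimal sketch using Python's threading.Lock;
the variable names are illustrative:

import threading

counter = 0
lock = threading.Lock()          # one lock guards the shared counter

def increment(times):
    global counter
    for _ in range(times):
        with lock:               # only one thread may enter at a time
            counter += 1         # the critical section

threads = [threading.Thread(target=increment, args=(100_000,)) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(counter)                   # always 400000, because updates never overlap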
Deadlock is a situation in which two or more processes or
threads wait for each other to release a resource, so that none of
them can make progress. Operating systems use various
techniques such as resource allocation graphs and deadlock
prevention algorithms to avoid deadlock.
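One simple prevention technique is to impose a global ordering on
locks, so that a circular wait can never form. A sketch of the idea
in Python; the lock names and tasks are hypothetical:

import threading

lock_a = threading.Lock()
lock_b = threading.Lock()

# Rule: every task that needs both locks must take lock_a before lock_b.
# With one agreed order, no two tasks can each hold a lock the other wants.

def task_one():
    with lock_a:
        with lock_b:
            print("task_one holds both locks")

def task_two():
    with lock_a:                 # same order, even if b is needed "first"
        with lock_b:
            print("task_two holds both locks")

t1 = threading.Thread(target=task_one)
t2 = threading.Thread(target=task_two)
t1.start(); t2.start()
t1.join(); t2.join()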
Processes or threads may need to coordinate their activities
to achieve a common goal. This is typically achieved using
synchronization primitives such as semaphores or message
passing mechanisms such as pipes or sockets.
Operating systems must allocate resources such as
memory, CPU time, and I/O devices to multiple
processes or threads in a fair and efficient manner.
This is typically achieved using scheduling
algorithms such as round-robin, priority-based, or
real-time scheduling.
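To make round-robin scheduling concrete, here is a small simulation
in Python. The job names and time quantum are made up for
illustration: each job runs for at most one quantum, then goes to
the back of the ready queue.

from collections import deque

QUANTUM = 2                                     # maximum ticks per turn
ready = deque([["A", 3], ["B", 5], ["C", 2]])   # (job, remaining ticks)

while ready:
    name, remaining = ready.popleft()    # pick the job at the head
    ran = min(QUANTUM, remaining)
    remaining -= ran
    print(f"{name} ran for {ran} tick(s), {remaining} remaining")
    if remaining > 0:
        ready.append([name, remaining])  # preempted: back of the queue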
Operating systems provide several concurrency mechanisms for
managing concurrency and ensuring safe and efficient access to
system resources:
Processes vs. Threads
Synchronization primitives
Scheduling algorithms
Message passing
Memory management
Interrupt handling
Processes vs. Threads − An operating system can
support concurrency using processes or threads.
A process is an instance of a program that can
execute independently, while a thread is a
lightweight process that shares the same
memory space as its parent process.
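The difference can be seen in a short Python sketch: a thread
shares the parent's memory, while a child process gets its own
copy. The variable names are illustrative:

import threading
import multiprocessing

value = 0

def bump():
    global value
    value += 1

if __name__ == "__main__":
    t = threading.Thread(target=bump)
    t.start(); t.join()
    print("after thread:", value)    # 1: the thread shares this memory space

    p = multiprocessing.Process(target=bump)
    p.start(); p.join()
    print("after process:", value)   # still 1: the child changed its own copy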
Synchronization primitives − Operating systems
provide synchronization primitives to coordinate
access to shared resources between multiple
processes or threads. Common synchronization
primitives include semaphores, mutexes, and
condition variables.
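A condition variable, for instance, lets one thread sleep until
another announces that some shared state has changed. A minimal
sketch with Python's threading.Condition; the ready flag is
illustrative:

import threading

ready = False
cond = threading.Condition()

def waiter():
    with cond:
        while not ready:         # re-check the state after every wakeup
            cond.wait()          # release the lock and sleep until notified
        print("condition met, proceeding")

def signaller():
    global ready
    with cond:
        ready = True
        cond.notify()            # wake one thread waiting on the condition

t1 = threading.Thread(target=waiter)
t2 = threading.Thread(target=signaller)
t1.start(); t2.start()
t1.join(); t2.join()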
Scheduling algorithms − Operating systems
use scheduling algorithms to determine
which process or thread should execute next.
Common scheduling algorithms include
round-robin, priority-based, and real-time
scheduling.
Message passing − Message passing is a
mechanism used to communicate between
processes or threads. Messages can be sent
synchronously or asynchronously and can
include data, signals, or notifications.
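A common realization is a producer and a consumer exchanging
messages through a thread-safe queue. A minimal Python sketch; the
mailbox name and the None sentinel are illustrative choices:

import queue
import threading

mailbox = queue.Queue()          # thread-safe channel between the two threads

def producer():
    for i in range(3):
        mailbox.put(f"message {i}")   # send; returns without waiting
    mailbox.put(None)                 # sentinel meaning "no more messages"

def consumer():
    while True:
        msg = mailbox.get()           # receive; blocks until a message arrives
        if msg is None:
            break
        print("received", msg)

p = threading.Thread(target=producer)
c = threading.Thread(target=consumer)
p.start(); c.start()
p.join(); c.join()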
Memory management − Operating systems
provide memory management mechanisms to
allocate and manage memory resources. These
mechanisms ensure that each process or thread
has its own memory space and can access
memory safely without interfering with other
processes or threads.
Interrupt handling − Interrupts are signals sent
by hardware devices to the operating system,
indicating that they require attention. The
operating system uses interrupt handling
mechanisms to stop the current process or
thread, save its state, and execute a specific
interrupt handler to handle the device's request.
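At the application level, Unix signals give a rough analogy to
hardware interrupts: the OS suspends normal execution and runs a
registered handler. A minimal Python sketch using the standard
signal module:

import signal
import time

def handler(signum, frame):
    # Invoked when the OS delivers SIGINT (Ctrl+C): normal flow is
    # suspended, this handler runs, then execution continues.
    print(f"caught signal {signum}, handling the request")

signal.signal(signal.SIGINT, handler)   # register the handler

print("press Ctrl+C within 5 seconds...")
time.sleep(5)                           # interrupted by the signal, then resumes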
Improved performance − Concurrency allows multiple tasks to
be executed simultaneously, improving the overall performance
of the system. By using multiple processors or threads, tasks can
be executed in parallel, reducing the overall processing time.
Resource utilization − Concurrency allows better utilization of
system resources, such as CPU, memory, and I/O devices. By
allowing multiple tasks to run simultaneously, the system can
make better use of its available resources.
Responsiveness − Concurrency can improve system
responsiveness by allowing multiple tasks to be executed
concurrently. This is particularly important in real-time systems
and interactive applications, such as gaming and multimedia.
Scalability − Concurrency can improve the scalability of the
system by allowing it to handle an increasing number of tasks
and users without degrading performance.
Fault tolerance − Concurrency can improve the fault tolerance of
the system by allowing tasks to be executed independently. If
one task fails, it does not affect the execution of other tasks.
Race conditions occur when the output of a system
depends on the order and timing of the events, which
leads to unpredictable behavior. Multiple processes
or threads accessing shared resources simultaneously
can cause race conditions.
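The hazard can be reproduced with an unprotected read-modify-write
on a shared counter. A minimal Python sketch; because thread
scheduling is nondeterministic, the lost updates may not appear on
every run:

import threading

counter = 0

def unsafe_increment(times):
    global counter
    for _ in range(times):
        tmp = counter            # read the shared value
        tmp += 1                 # modify it locally
        counter = tmp            # write back, possibly overwriting another
                                 # thread's update made in between

threads = [threading.Thread(target=unsafe_increment, args=(100_000,))
           for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(counter)                   # often less than 400000, and varies per run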
Deadlocks occur when two or more processes or
threads are waiting for each other to release
resources, resulting in a circular wait. Deadlocks can
occur when multiple processes or threads compete
for exclusive access to shared resources.
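The circular wait is easy to reproduce with two locks taken in
opposite orders. In this sketch the acquire calls use timeouts so
the demonstration reports the problem instead of hanging forever;
the task names are illustrative:

import threading
import time

lock_a = threading.Lock()
lock_b = threading.Lock()

def task_one():
    with lock_a:
        time.sleep(0.1)          # hold a while the other task grabs b
        if not lock_b.acquire(timeout=1):
            print("task_one would deadlock waiting for lock_b")
        else:
            lock_b.release()

def task_two():
    with lock_b:
        time.sleep(0.1)          # hold b while the other task grabs a
        if not lock_a.acquire(timeout=1):
            print("task_two would deadlock waiting for lock_a")
        else:
            lock_a.release()

t1 = threading.Thread(target=task_one)
t2 = threading.Thread(target=task_two)
t1.start(); t2.start()
t1.join(); t2.join()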
Starvation occurs when a process or thread cannot access the
resource it needs to complete its task because other processes or
threads monopolize that resource. The starved process or thread
waits indefinitely, unable to make progress.
Priority inversion occurs when a low-priority
process or thread holds a resource that a high-
priority process or thread needs, resulting in the
high-priority process or thread being blocked.
Memory consistency refers to the order in which
memory operations are performed by different
processes or threads. In a concurrent system,
memory consistency can be challenging to
ensure, leading to incorrect behavior.
Deadlock avoidance techniques can prevent
deadlocks from occurring, but they can lead to
inefficient use of resources or even starvation of
certain processes or threads.
Multiprocessing is the use of two or more central processing units
(CPUs) in a single computer system. Its definition can vary
depending on the context, but generally it refers to a system's
ability to support multiple CPUs and its capacity to distribute
work among them.
Multithreading is the ability of a program or an operating system
to serve more than one user or request at a time without requiring
multiple copies of the program to run on the computer.
Multithreading can also handle multiple requests from the same
user.
Multitasking involves working on two or more tasks
simultaneously, switching back and forth from one thing to
another, or performing a number of tasks in rapid succession.
Distributed processing means that a specific task can be broken
up into functions, and the functions are dispersed across two or
more interconnected processors. A distributed application is an
application for which the component application programs are
distributed between two or more interconnected processors.
In summary, concurrency in operating systems is the ability of an
operating system to handle multiple tasks or processes at the same
time. The principles of concurrency ensure that multiple processes
or threads can execute efficiently and effectively, without
interfering with each other or causing deadlock, and the
concurrency mechanisms described above provide safe and efficient
access to system resources.