UNIT- II

Process Management - Process concept, Process scheduling, Operations on processes, Inter-process


communication. Process Scheduling- Basic concepts, Scheduling criteria, Scheduling algorithms.
Process Synchronization-Background, The Critical section problem, Semaphores
Process
 A process is defined as an entity which represents the basic unit of work to be
implemented in the system.
 A process is basically a program in execution. The execution of a process must progress
in a sequential fashion.
 When a program is loaded into the memory and it becomes a process, it can be divided
into four sections ─ stack, heap, text and data

Stack:
The process Stack contains the temporary data such as method/function parameters, return
address and local variables.
Heap
This is dynamically allocated memory to a process during its run time.
Text
This section contains the program code. The current activity is represented by the value
of the Program Counter and the contents of the processor's registers.
Data
This section contains the global and static variables.
Program
 A program is a piece of code which may be a single line or millions of lines.
 A computer program is usually written by a computer programmer in a programming
language.
#include <stdio.h>
int main() {
printf("Hello, World! \n");
return 0;
}
 A computer program is a collection of instructions that performs a specific task when
executed by a computer.
 A part of a computer program that performs a well-defined task is known as an algorithm.
 A collection of computer programs, libraries and related data is referred to as software.
Process Life Cycle
 When a process executes, it passes through different states. These stages may differ in
different operating systems, and the names of these states are also not standardized.

Start
 This is the initial state when a process is first started/created.
Ready
 The process is waiting to be assigned to a processor.
 Ready processes are waiting to have the processor allocated to them by the operating system
so that they can run.
 A process may come into this state after the Start state, or while running, when the
scheduler interrupts it to assign the CPU to some other process.
Running
 Once the process has been assigned to a processor by the OS scheduler, the process state
is set to running and the processor executes its instructions.
Waiting
 Process moves into the waiting state if it needs to wait for a resource, such as waiting for
user input, or waiting for a file to become available.
Terminated or Exit
 Once the process finishes its execution, or it is terminated by the operating system, it is
moved to the terminated state where it waits to be removed from main memory.
Process Control Block (PCB)
 A Process Control Block is a data structure maintained by the Operating System for every
process.
 The PCB is identified by an integer process ID (PID).

Process State
 The current state of the process, i.e., whether it is new, ready, running, waiting, or terminated.
Process privileges
 This is required to allow/disallow access to system resources.
Process ID
 Unique identification for each of the process in the operating system.
Pointer
 A pointer to parent process.
Program Counter
 Program Counter is a pointer to the address of the next instruction to be executed for this
process.

CPU registers
 The contents of the various CPU registers, which must be saved when the process leaves
the running state and restored when it resumes execution.
CPU Scheduling Information
 Process priority and other scheduling information which is required to schedule the
process.
Memory management information
 This includes the information of page table, memory limits, Segment table depending on
memory used by the operating system.
Accounting information
 This includes the amount of CPU time used for process execution, time limits, process
numbers, etc.
I/O status information
 This includes a list of I/O devices allocated to the process.
Process Scheduling
 The process scheduling is the activity of the process manager that handles the removal of
the running process from the CPU and the selection of another process on the basis of a
particular strategy.
 Process scheduling is an essential part of multiprogramming operating systems. Such
operating systems allow more than one process to be loaded into executable memory
at a time, and the loaded processes share the CPU using time multiplexing.
Categories of Scheduling
There are two categories of scheduling:
1. Non-preemptive:
 Here the resource can’t be taken from a process until the process completes execution.
 The switching of resources occurs when the running process terminates and moves to a
waiting state.
2. Preemptive:
 Here the OS allocates the CPU to a process for a fixed amount of time.
 The running process can be switched from the running state to the ready state, or a
process can move from the waiting state to the ready state.
 This switching occurs because the CPU may be given to a higher-priority process, which
replaces the currently running process.
Process Scheduling Queues
The Operating System maintains the following important process scheduling queues −
Job queue − This queue keeps all the processes in the system.
Ready queue − This queue keeps a set of all processes residing in main memory, ready
and waiting to execute. A new process is always put in this queue.
Device queues − The processes which are blocked due to unavailability of an I/O device
constitute this queue

Two-State Process Model
 Two-state process model refers to running and non-running states which are described below

Running
 The process that currently has the CPU and is executing its instructions is in the running state.

Not Running
 Processes that are not running are kept in queue, waiting for their turn to execute.
 Each entry in the queue is a pointer to a particular process.
 The queue is implemented by using a linked list.
 When a process is interrupted, it is transferred to the waiting queue.
 If the process has completed or aborted, the process is discarded.
 In either case, the dispatcher then selects a process from the queue to execute.
Schedulers
Schedulers are special system software which handle process scheduling in various ways.
Their main task is to select the jobs to be submitted into the system and to decide which
process to run.
Schedulers are of three types −
Long-Term Scheduler
Short-Term Scheduler
Medium-Term Scheduler
Long Term Scheduler
 It is also called a job scheduler.
 A long-term scheduler determines which programs are admitted to the system for
processing.
 It selects processes from the queue and loads them into memory, where they wait to be
scheduled on the CPU.
 The primary objective of the job scheduler is to provide a balanced mix of jobs.
Short Term Scheduler
 It is also called the CPU scheduler.
 Its main objective is to increase system performance in accordance with the chosen set of
criteria.
Medium Term Scheduler
 Medium-term scheduling is a part of swapping.
 It removes the processes from the memory.
 It reduces the degree of multiprogramming.

Operations on processes
There are many operations that can be performed on processes. Some of these are
process creation, process preemption, process blocking, and process termination. These
are given in detail as follows −
Process Creation
 Processes need to be created in the system for different operations. This can be done by the
following events −
 User request for process creation
 System initialization
 Execution of a process creation system call by a running process
 Batch job initialization
 A process may be created by another process using fork(). The creating process is called
the parent process and the created process is the child process.
 A child process can have only one parent but a parent process may have many children.
 Both the parent and child processes have the same memory image, open files, and
environment strings. However, they have distinct address spaces.
Process Preemption
 An interrupt mechanism is used in preemption that suspends the process executing
currently and the next process to execute is determined by the short-term scheduler.
 Preemption makes sure that all processes get some CPU time for execution.
Process Blocking
 The process is blocked if it is waiting for some event to occur.
 This event may be an I/O operation, since I/O is carried out by the devices and does not
require the processor.
 After the event is complete, the process again goes to the ready state.
Process Termination
 After the process has completed the execution of its last instruction, it is terminated.
 The resources held by a process are released after it is terminated.
 A child process can be terminated by its parent process if its task is no longer relevant.
 The child process sends its status information to the parent process before it terminates.
CPU scheduling Terminologies
Burst Time/Execution Time:
 It is the time required by the process to complete execution. It is also called running time.
Arrival Time:
 The time when a process enters the ready state.
Finish Time:
 The time when a process completes and exits from the system.
Multiprogramming:
 A number of programs which can be present in memory at the same time.
Jobs: Programs that run without any kind of user interaction.
User: Programs that involve user interaction.
Process: A general term that covers both jobs and user programs.
CPU/IO burst cycle: Characterizes process execution, which alternates between CPU
and I/O activity. CPU bursts are usually much shorter than I/O bursts.
CPU Scheduling Criteria
A CPU scheduling algorithm tries to maximize some of the following criteria and minimize others:

Maximize
CPU utilization:
 CPU utilization is the main task in which the operating system needs to make
sure that the CPU remains as busy as possible.
 It can range from 0 to 100 percent. For a real-time system, it typically ranges from about
40 percent on a lightly loaded system to 90 percent on a heavily loaded one.
Throughput:
 The number of processes that finish their execution per unit time is known as
throughput.
 So, when the CPU is busy executing the process, at that time, work is being done,
and the work completed per unit time is called Throughput.
Minimize
Waiting time:
 Waiting time is the total amount of time a process spends waiting in the ready queue.
Response time:
 Response time is the time from when a request is submitted until the first
response is produced.
Turnaround Time:
 Turnaround time is the total time taken to execute a specific process.
 It is the sum of the time spent waiting to get into memory, waiting in the ready queue,
and executing on the CPU. The period from process submission to completion
is the turnaround time.

Interval Timer
 Timer interruption is a method that is closely related to preemption. When a certain process
gets the CPU allocation, a timer may be set to a specified interval. Both timer interruption
and preemption force a process to return the CPU before its CPU burst is complete.
Types of CPU scheduling Algorithm
There are mainly six types of process scheduling algorithms
1. First Come First Serve (FCFS)
2. Shortest-Job-First (SJF) Scheduling
3. Shortest Remaining Time
4. Priority Scheduling
5. Round Robin Scheduling
6. Multilevel Queue Scheduling

1. First Come First Serve (FCFS)


 Jobs are executed on first come, first serve basis.
 It is a non-preemptive scheduling algorithm.
 Easy to understand and implement.
 Its implementation is based on FIFO queue.
 Poor in performance as average wait time is high
EXAMPLE 1:

P.NO A.T B.T C.T T.A.T W.T R.T


P1 0 3 3 3 0 0
P2 1 1 4 3 2 2
P3 2 5 9 7 2 2
P4 3 2 11 8 6 6
P5 4 4 15 11 7 7

Ready Queue
P1 P2 P3 P4 P5
Gantt Chart
P1 P2 P3 P4 P5
L 0 3 4 9 11 15 R
C.T = completion time, read from the Gantt chart
T.A.T = C.T - A.T
Avg T.A.T = (3+3+7+8+11)/5 = 32/5 = 6.4
W.T = T.A.T – B.T
Avg W.T = (0+2+2+6+7)/5 = 17/5 = 3.4
R.T = first start time in the Gantt chart – A.T

EXAMPLE 2:

P.NO A.T B.T C.T T.A.T W.T R.T


P0 0 3 3 3 0 0
P1 2 6 9 7 1 1
P2 4 4 13 9 5 5
P3 6 5 18 12 7 7
P4 8 2 20 12 10 10

Ready Queue
P0 P1 P2 P3 P4
Gantt Chart
P0 P1 P2 P3 P4
L 0 3 9 13 18 20 R
C.T = completion time, read from the Gantt chart
T.A.T = C.T - A.T
Avg T.A.T = (3+7+9+12+12)/5 = 43/5 = 8.6
W.T = T.A.T – B.T
Avg W.T = (0+1+5+7+10)/5 = 23/5 = 4.6
R.T = first start time in the Gantt chart – A.T
EXAMPLE 3:

P.NO A.T B.T C.T T.A.T W.T R.T


P0 0 5 5 5 0 0
P1 1 3 8 7 4 4
P2 2 8 16 14 6 6
P3 3 6 22 19 13 13

Ready Queue
P0 P1 P2 P3
Gantt Chart
P0 P1 P2 P3
L 0 5 8 16 22 R
C.T = completion time, read from the Gantt chart
T.A.T = C.T - A.T
Avg T.A.T = (5+7+14+19)/4 = 45/4 = 11.25
W.T = T.A.T – B.T
Avg W.T = (0+4+6+13)/4 = 23/4 = 5.75
R.T = first start time in the Gantt chart – A.T

2. Shortest-Job-First (SJF) Scheduling:

 In SJF, the process with the smallest burst time is selected for execution next.


 SJF can be non-preemptive or preemptive; the preemptive variant is known as shortest remaining time first.
 Best approach to minimize waiting time.
 Easy to implement in Batch systems where required CPU time is known in advance.
 Impossible to implement in interactive systems where required CPU time is not known.
 The processor should know in advance how much time the process will take.

EXAMPLE 1:

P.NO A.T B.T C.T T.A.T W.T R.T


P1 0 6 6 6 0 0
P2 1 4 14 13 9 9
P3 2 1 7 5 4 4
P4 3 3 10 7 4 4
P5 4 7 21 17 10 10

Ready Queue
P1 P2 P3 P4 P5
Gantt Chart
P1 P3 P4 P2 P5
L 0 6 7 10 14 21 R
C.T = completion time, read from the Gantt chart
T.A.T = C.T - A.T
Avg T.A.T = (6+13+5+7+17)/5 = 48/5 = 9.6
W.T = T.A.T – B.T
Avg W.T = (0+9+4+4+10)/5 = 27/5 = 5.4
R.T = first start time in the Gantt chart – A.T
EXAMPLE 2:

P.NO A.T B.T C.T T.A.T W.T R.T


P0 0 3 3 3 0 0
P1 2 6 9 7 1 1
P2 4 4 15 11 7 7
P3 6 5 20 14 9 9
P4 8 2 11 3 1 1

Ready Queue
P0 P1 P2 P3 P4
Gantt Chart
P0 P1 P4 P2 P3
L 0 3 9 11 15 20 R
C.T = completion time, read from the Gantt chart
T.A.T = C.T - A.T
Avg T.A.T = (3+7+11+14+3)/5 = 38/5 = 7.6
W.T = T.A.T – B.T
Avg W.T = (0+1+7+9+1)/5 = 18/5 = 3.6
R.T = first start time in the Gantt chart – A.T
3. Priority Scheduling:
 Priority scheduling is a non-preemptive algorithm and one of the most common
scheduling algorithms in batch systems.
 Each process is assigned a priority. Process with highest priority is to be executed first
and so on.
 Processes with same priority are executed on first come first served basis.
 Priority can be decided based on memory requirements, time requirements or any other
resource requirement.
EXAMPLE 1:

P.NO Priority A.T B.T C.T T.A.T W.T R.T


P1 7 0 4 4 4 0 0
P2 1-H 2 5 9 7 2 2
P3 5 3 2 18 15 13 13
P4 11-L 5 3 24 19 16 16
P5 3 6 7 16 10 3 3
P6 9 8 3 21 13 10 10

Ready Queue
P1 P2 P3 P4 P5 P6
Gantt Chart
P1 P2 P5 P3 P6 P4
L 0 4 9 16 18 21 24 R
C.T = completion time, read from the Gantt chart
T.A.T = C.T - A.T
Avg T.A.T = (4+7+15+19+10+13)/6 = 68/6 = 11.3
W.T = T.A.T – B.T
Avg W.T = (0+2+13+16+3+10)/6 = 44/6 = 7.3
R.T = first start time in the Gantt chart – A.T
EXAMPLE 2:

P.NO Priority A.T B.T C.T T.A.T W.T R.T


P1 3 0 10 16 16 6 6
P2 1-H 0 1 1 1 0 0
P3 4 0 2 18 18 16 16
P4 5-L 0 1 19 19 18 18
P5 2 0 5 6 6 1 1

Ready Queue
P1 P2 P3 P4 P5
Gantt Chart
P2 P5 P1 P3 P4
L 0 1 6 16 18 19 R
C.T = completion time, read from the Gantt chart
T.A.T = C.T - A.T
Avg T.A.T = (16+1+18+19+6)/5 = 60/5 = 12
W.T = T.A.T – B.T
Avg W.T = (6+0+16+18+1)/5 = 41/5 = 8.2
R.T = first start time in the Gantt chart – A.T
4. Shortest Remaining Time:
 The full form of SRT is Shortest Remaining Time.
 It is also known as preemptive SJF scheduling.
 In this method, the CPU is allocated to the process that is closest to its completion.
 This method prevents a newer ready-state process from holding up the completion of an
older process.
 It is often used in batch environments where short jobs need to be given preference.
EXAMPLE 1:

P.NO A.T B.T C.T T.A.T W.T R.T


P1 0 6 20 20 14 0
P2 1 4 6 5 1 0
P3 2 1 3 1 0 0
P4 3 3 11 8 5 5
P5 4 2 8 4 2 2
P6 5 4 15 10 6 6

Ready Queue
P1 P2 P3 P4 P5 P6
Gantt Chart
P1 P2 P3 P2 P2 P5 P4 P6 P1
L 0 1 2 3 4 6 8 11 15 20 R
C.T = completion time, read from the Gantt chart
T.A.T = C.T - A.T
Avg T.A.T = (20+5+1+8+4+10)/6 = 48/6 = 8
W.T = T.A.T – B.T
Avg W.T = (14+1+0+5+2+6)/6 = 28/6 = 4.667
R.T = first start time in the Gantt chart – A.T

EXAMPLE 2:

P.NO A.T B.T C.T T.A.T W.T R.T


P0 0 7 19 19 12 0
P1 1 5 13 12 7 0
P2 2 3 6 4 1 0
P3 3 1 4 1 0 0
P4 4 2 9 5 3 3
P5 5 1 7 2 1 1

Ready Queue
P0 P1 P2 P3 P4 P5
Gantt Chart
P0 P1 P2 P3 P2 P2 P5 P4 P1 P0
L 0 1 2 3 4 5 6 7 9 13 19 R
C.T = completion time, read from the Gantt chart
T.A.T = C.T - A.T
Avg T.A.T = (19+12+4+1+5+2)/6 = 43/6 = 7.16
W.T = T.A.T – B.T
Avg W.T = (12+7+1+0+3+1)/6 = 24/6 = 4
R.T = first start time in the Gantt chart – A.T

EXAMPLE 3:

P.NO A.T B.T C.T T.A.T W.T R.T


P1 0 8 17 17 9 0
P2 1 4 5 4 0 0
P3 2 9 26 24 15 15
P4 3 5 10 7 2 2

Ready Queue
P1 P2 P3 P4
Gantt Chart
P1 P2 P4 P1 P3
L 0 1 5 10 17 26 R
C.T = completion time, read from the Gantt chart
T.A.T = C.T - A.T
Avg T.A.T = (17+4+24+7)/4 = 52/4 = 13
W.T = T.A.T – B.T
Avg W.T = (9+0+15+2)/4 = 26/4 = 6.5
R.T = first start time in the Gantt chart – A.T

5. Round Robin Scheduling:


 Round Robin is a preemptive process scheduling algorithm.
 Each process is provided a fixed time to execute, called a quantum.
 Once a process is executed for a given time period, it is preempted and other process
executes for a given time period.
 Context switching is used to save states of preempted processes.
EXAMPLE 1:

P.NO A.T B.T B.T Remaining C.T T.A.T W.T R.T
P1 0 4 2/0 8 8 4 0
P2 1 3 1/0 12 11 8 1
P3 2 5 3/1/0 19 17 12 2
P4 3 1 0 9 6 5 5
P5 4 2 0 11 7 5 5
P6 6 4 2/0 18 12 6 6

Let Quantum Q=2


Ready Queue
P1 P2 P3 P1 P4 P5 P2 P6 P3 P6 P3
Gantt Chart
P1 P2 P3 P1 P4 P5 P2 P6 P3 P6 P3
L 0 2 4 6 8 9 11 12 14 16 18 19 R
C.T = completion time, read from the Gantt chart
T.A.T = C.T - A.T
Avg T.A.T = (8+11+17+6+7+12)/6 = 61/6 = 10.16
W.T = T.A.T – B.T
Avg W.T = (4+8+12+5+5+6)/6 = 40/6 = 6.66
R.T = first start time in the Gantt chart – A.T
EXAMPLE 2:

P.NO A.T B.T B.T Remaining C.T T.A.T W.T R.T
P1 0 4 2/0 8 8 4 0
P2 1 5 3/1/0 18 17 12 1
P3 2 2 0 6 4 2 2
P4 3 1 0 9 6 5 5
P5 4 6 4/2/0 21 17 11 5
P6 6 3 1/0 19 13 10 7

Let Quantum Q=2


Ready Queue
P1 P2 P3 P1 P4 P5 P2 P6 P5 P2 P6 P5
Gantt Chart
P1 P2 P3 P1 P4 P5 P2 P6 P5 P2 P6 P5
L 0 2 4 6 8 9 11 13 15 17 18 19 21 R
C.T = completion time, read from the Gantt chart
T.A.T = C.T - A.T
Avg T.A.T = (8+17+4+6+17+13)/6 = 65/6 = 10.83
W.T = T.A.T – B.T
Avg W.T = (4+12+2+5+11+10)/6 = 44/6 = 7.33
R.T = first start time in the Gantt chart – A.T
Multiple-Level Queues Scheduling
 This algorithm separates the ready queue into various separate queues. In this method,
processes are assigned to a queue based on a specific property of the process, like the process
priority, size of the memory, etc.
 However, this is not an independent scheduling algorithm: multiple-level queues make use
of other existing algorithms to group and schedule jobs with common characteristics.
 Multiple queues are maintained for processes with common characteristics.
 Each queue can have its own scheduling algorithms.
 Priorities are assigned to each queue.
Purpose of a Scheduling algorithm
Here are the reasons for using a scheduling algorithm:
 The CPU uses scheduling to improve its efficiency.
 It helps you to allocate resources among competing processes.
 The maximum utilization of CPU can be obtained with multi-programming.
 The processes which are to be executed are in ready queue.
Inter Process Communication (IPC)
 Inter-process communication is used for exchanging useful information between
numerous threads in one or more processes (or programs).
A process can be of two types:
 Independent process.
 Co-operating process.
An independent process is not affected by the execution of other processes while a co-
operating process can be affected by other executing processes.
 Inter-process communication (IPC) is a mechanism that allows processes to
communicate with each other and synchronize their actions. The communication
between these processes can be seen as a method of co-operation between them.
Processes can communicate with each other through both:
1. Shared Memory
2. Message passing
Role of Synchronization in Inter Process Communication
 It is one of the essential parts of inter process communication. Typically, this is provided by
interprocess communication control mechanisms, but sometimes it can also be controlled by
communication processes.
The following methods are used to provide synchronization:
1. Mutual Exclusion
2. Semaphore
3. Barrier
4. Spinlock
Mutual Exclusion:-
 It is generally required that only one process thread can enter the critical section at a
time.
 This also helps in synchronization and creates a stable state to avoid the race condition.
Semaphore:-
 Semaphore is a type of variable that usually controls the access to the shared resources
by several processes. Semaphore is further divided into two types which are as follows:
1. Binary Semaphore
2. Counting Semaphore
Barrier:-
 A barrier does not allow an individual process to proceed until all the processes
reach it. It is used by many parallel languages, and collective routines impose
barriers.
Spinlock:-
 A spinlock is a type of lock, as its name implies. A process trying to acquire a
spinlock waits in a loop, repeatedly checking whether the lock is available.
 This is known as busy waiting because, even though the process is active, it does not
perform any useful operation (or task).
Approaches to Interprocess Communication

Pipe:-
 The pipe is a type of data channel that is unidirectional in nature. It means that the data in
this type of data channel can be moved in only a single direction at a time.
 The two different types of pipes are ordinary pipes and named pipes.
o Ordinary pipes only allow one way communication.
o For two way communication, two pipes are required. Ordinary pipes have a parent
child relationship between the processes as the pipes can only be accessed by
processes that created or inherited them.
o Named pipes are more powerful than ordinary pipes and allow two way
communication.
o These pipes exist even after the processes using them have terminated. They need to
be explicitly deleted when not required anymore.

Shared Memory:-
 It can be referred to as a type of memory that can be used or accessed by multiple
processes simultaneously.
 It is primarily used so that the processes can communicate with each other.
 Shared memory is therefore supported by almost all POSIX systems as well as Windows
operating systems.
Message Queue:

 In general, several different processes are allowed to read and write messages to the
message queue.
 In the message queue, the messages are stored or stay in the queue unless their recipients
retrieve them.
 In short, we can also say that the message queue is very helpful in inter-process
communication and used by all operating systems.
Message Passing:-
 It is a type of mechanism that allows processes to synchronize and communicate with
each other.
 By using message passing, the processes can communicate with each other
without resorting to shared variables.
 Usually, the inter-process communication mechanism provides two operations that are as
follows:
1. send (message)
2. receive (message)
Direct Communication:-
 In this type of communication process, usually, a link is created or established between
two communicating processes.
 However, in every pair of communicating processes, only one link can exist.
Indirect Communication
 Indirect communication can only be established when processes share a common
mailbox; each pair of communicating processes may share several communication links.
 These shared links can be unidirectional or bi-directional.

i) Shared Memory Method
 Shared memory is the memory that can be simultaneously accessed by multiple
processes.
 This is done so that the processes can communicate with each other. All POSIX
systems, as well as Windows operating systems use shared memory.
Advantage of Shared Memory Model
 Memory communication is faster on the shared memory model as compared to the message
passing model on the same machine.
Disadvantages of Shared Memory Model
i) All the processes that use the shared memory model need to make sure that they are
not writing to the same memory location.
ii) Shared memory model may create problems such as synchronization and memory
protection that need to be addressed.
ii) Messaging Passing Method
 Establish a communication link (if a link already exists, no need to establish it again.)
 Start exchanging messages using basic primitives.
 We need at least two primitives:
– send(message, destination) or send(message)
– receive(message, host) or receive(message)

Advantage of Messaging Passing Model


 The message passing model is much easier to implement than the shared memory model.
Disadvantage of Messaging Passing Model
 The message passing model has slower communication than the shared memory model
because the connection setup takes time.

 Examples of IPC systems


1. Posix : uses shared memory method.
2. Mach : uses message passing
3. Windows XP : uses message passing using local procedural calls
Why we need interprocess communication?
 There are numerous reasons to use inter-process communication for sharing the data. Here are
some of the most important reasons that are given below:
 It helps to speed up modularity
 Computational speedup
 Privilege separation
 Convenience
 Helps processes to communicate with each other and synchronize their
actions as well.
Advantages of IPC:
i) Enables processes to communicate with each other and share resources,
leading to increased efficiency and flexibility.
ii) Facilitates coordination between multiple processes, leading to better overall
system performance.
iii) Allows for the creation of distributed systems that can span multiple
computers or networks.
iv) Can be used to implement various synchronization and communication
protocols, such as semaphores, pipes, and sockets.
Disadvantages of IPC:
i) Increases system complexity, making it harder to design, implement, and
debug.
ii) Can introduce security vulnerabilities, as processes may be able to access or
modify data belonging to other processes.
iii) Requires careful management of system resources, such as memory and
CPU time, to ensure that IPC operations do not degrade overall system
performance.
iv) Can lead to data inconsistencies if multiple processes try to access or
modify the same data at the same time.
Semaphore
 Semaphore is simply a variable that is non-negative and shared between threads.
 A semaphore is a signaling mechanism, and another thread can signal a thread that is
waiting on a semaphore

 A semaphore uses two atomic operations,


1. Wait:
The wait operation decrements the value of its argument S if it is positive. If S is
zero, the caller busy-waits until S becomes positive again.
wait(S)
{
    while (S <= 0)
        ;        /* busy wait */
    S--;
}
2. Signal for the process synchronization:
The signal operation increments the value of its argument S.
signal(S)
{
    S++;
}
 A semaphore either allows or rejects access to the resource, depending on how it is set up.
Use of Semaphore
 Instead of using a single buffer, we can split a 4 KB buffer into four 1 KB buffers.
 A semaphore can be associated with each of these four buffers, allowing consumers and
producers to work on different buffers simultaneously.

Types of Semaphore
Counting Semaphore:
 The semaphore S value is initialized to the number of resources present in the system.
 Whenever a process wants to access the resource, it performs the wait()operation on the
semaphore and decrements the semaphore value by one.
 When it releases the resource, it performs the signal() operation on the semaphore
and increments the semaphore value by one.
 When the semaphore count goes to 0, it means the processes occupy all resources.
 If a process needs a resource when the semaphore count is 0, it executes the wait()
operation and gets blocked until the semaphore value becomes greater than 0.

Binary semaphore:
 The value of a binary semaphore ranges between 0 and 1.
 It is similar to mutex lock, but mutex is a locking mechanism, whereas the semaphore is a
signaling mechanism.
 In a binary semaphore, if a process wants to access the resource, it performs the
wait() operation on the semaphore and decrements its value from 1 to 0.
 When it releases the resource, it performs a signal() operation on the semaphore and
increments its value to 1.
 Suppose the value of the semaphore is 0 and a process wants to access the resource. In
that case, it performs the wait() operation and blocks itself until the process currently
using the resource releases it.

Advantages of Semaphore
Semaphores have the following advantages:
 A counting semaphore can allow more than one process into the critical section, up to
the number of available resources.
 Semaphores are machine-independent, as they are implemented in the machine-independent
code of the microkernel.
 A binary semaphore ensures that only one process at a time enters the critical section.
 Because a waiting process can be blocked rather than busy waiting, process time and
resources need not be wasted.
 They allow flexible management of resources.
Disadvantage of Semaphores
Semaphores also have some disadvantages, such as:
 One of the biggest limitations of a semaphore is priority inversion. The operating
system also has to keep track of all calls to wait and signal on the semaphore.
 Their use is never enforced; it is by convention only.
 The wait and signal operations must be executed in the correct order to avoid
deadlocks.
 Semaphore programming is complex, so there is a chance of failing to achieve mutual
exclusion.
 It is also not a practical method for large-scale use, as it leads to loss of modularity.
 Semaphores are prone to programmer error, which may cause deadlock or violation of
mutual exclusion.
Producer Consumer problem
 Let's examine the basic model based on sleep and wake. Assume that we have two system
calls, sleep and wake.
 The process which calls sleep will get blocked, while the process on which wake is
called will get woken up.
 There is a popular example called the producer consumer problem, which is the most
popular problem simulating the sleep and wake mechanism.
 The concept of sleep and wake is very simple: if the buffer is full, the producer goes
to sleep; if the buffer is empty, the consumer goes to sleep.
 A sleeping process is woken up by the other process once there is again room (or data)
in the buffer, so that it can continue its work.
 In the producer consumer problem, there are two processes; one process writes
something while the other process reads it.
 The process which writes is called the producer, while the process which reads is
called the consumer.
 In order to read and write, both of them use a common buffer.
 The code that simulates the sleep and wake mechanism as a solution to the
producer consumer problem is shown below.
#define N 100          // maximum slots in buffer
int count = 0;         // items currently in the buffer

void producer(void)
{
    int item;
    while (TRUE)
    {
        item = produce_item();   // producer produces an item
        if (count == N)          // if the buffer is full, the producer sleeps
            sleep();
        insert_item(item);       // the item is inserted into the buffer
        count = count + 1;
        if (count == 1)          // the producer wakes up the consumer
            wakeup(consumer);    // if there is at least 1 item in the buffer
    }
}

void consumer(void)
{
    int item;
    while (TRUE)
    {
        if (count == 0)          // the consumer sleeps if the buffer is empty
            sleep();
        item = remove_item();
        count = count - 1;
        if (count == N - 1)      // if a slot has just become free, the
            wakeup(producer);    // consumer wakes up the producer
        consume_item(item);      // the item is read by the consumer
    }
}
Race Condition
 A race condition is a situation that may occur inside a critical section. It happens
when the result of executing multiple threads in the critical section differs according to
the order in which the threads execute.
 Race conditions in critical sections can be avoided if the critical section is treated as an
atomic instruction.
 Also, proper thread synchronization using locks or atomic variables can prevent race
conditions.
Critical Section
 The critical section is a code segment where shared variables can be accessed.
 Atomic action is required in a critical section, i.e. only one process can execute in its critical
section at a time.
 All the other processes have to wait to execute in their critical sections.
 The general structure of a process with a critical section is as follows:
do {
    Entry Section
    Critical Section
    Exit Section
    Remainder Section
} while (TRUE);
 The entry section is where a process requests permission to enter its critical section.
 The exit section handles the exit from the critical section. It releases the resources and
also informs the other processes that the critical section is free.
 The critical section problem needs a solution to synchronise the different processes.
 The solution to the critical section problem must satisfy the following conditions –

Mutual Exclusion
 Mutual exclusion implies that only one process can be inside the critical section at any
time.
 If any other processes require the critical section, they must wait until it is free.
Progress
 Progress means that if a process is not using the critical section, then it should not stop
any other process from accessing it.
 In other words, any process can enter a critical section if it is free.
Bounded Waiting
 Bounded waiting means that each process must have a limited waiting time.
 It should not wait endlessly to access the critical section.
Dining Philosopher Problem Using Semaphores
 The Dining Philosophers Problem states that K philosophers are seated around a circular
table with one chopstick between each pair of philosophers.
 A philosopher may eat only if he can pick up the two chopsticks adjacent to him.
 A chopstick may be picked up by either of its two adjacent philosophers, but not by both
at once.
 This problem involves the allocation of limited resources to a group of processes in a
deadlock-free and starvation-free manner.
Readers and Writers Problem:
 Suppose that a database is to be shared among several concurrent processes.
 Some of these processes may want only to read the database, whereas others may want
to update (that is, to read and write) the database.
 We distinguish between these two types of processes by referring to the former as
readers and to the latter as writers.
 In OS terminology, this situation is called the readers-writers problem. Problem
parameters:
 One set of data is shared among a number of processes.
 Once a writer is ready, it performs its write. Only one writer may write at a time.
 If a process is writing, no other process can read the data.
 If at least one reader is reading, no other process can write.
 Readers may only read; they may not write.
