A Process Control Block

A Process Control Block (PCB) is a data structure within operating systems that contains information about a process. It stores details like process state, priority, registers, and more. Each active process has a corresponding PCB. PCBs are stored in memory as a linked list, with process IDs and pointers to PCBs maintained in a process table for the OS to reference. The PCB allows the OS to manage and switch between processes efficiently.

Uploaded by

rambabu mahato
Copyright
© © All Rights Reserved
Available Formats
Download as PDF, TXT or read online on Scribd
Download as pdf or txt
0% found this document useful (0 votes)
155 views26 pages

A Process Control Block

A Process Control Block (PCB) is a data structure within operating systems that contains information about a process. It stores details like process state, priority, registers, and more. Each active process has a corresponding PCB. PCBs are stored in memory as a linked list, with process IDs and pointers to PCBs maintained in a process table for the OS to reference. The PCB allows the OS to manage and switch between processes efficiently.

Uploaded by

rambabu mahato
Copyright
© © All Rights Reserved
Available Formats
Download as PDF, TXT or read online on Scribd
Download as pdf or txt
Download as pdf or txt
You are on page 1/ 26

Process control block (PCB)

A process control block (PCB) stores data about a process, such as its registers,
time quantum, priority, and so on. The process table is an array of PCBs, which
means that it logically contains a PCB for each of the system’s active processes.

What is the Process Control Block?


When the operating system creates a process, it also creates a data structure to
store the information of that process. This data structure is known as the Process
Control Block (PCB).
The PCB is stored in a region of memory specially reserved for the operating
system, known as kernel space.
The operating system requires the information in the PCB to schedule the process
and to manage it throughout its execution.
Each process has its own PCB that identifies it; the PCB is also referred to as the
context of the process.
During the process’s execution, the PCB keeps track of all the devices and files
the process has open.
Important Notes
 Each process’s PCB is stored in main memory.
 Each process has exactly one PCB associated with it.
 The PCBs of all processes are maintained in a linked list.
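The PCB and its linked-list organization described above can be sketched as a small record type. This is an illustrative model in Python, not any real kernel's layout; the field names (pid, state, priority, and so on) simply mirror the attributes listed in this section.

```python
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class PCB:
    """A toy Process Control Block with a link to the next PCB in the list."""
    pid: int
    state: str = "new"
    priority: int = 0
    program_counter: int = 0
    registers: dict = field(default_factory=dict)
    open_files: list = field(default_factory=list)
    next: Optional["PCB"] = None   # link to the next PCB in the linked list

def traverse(head: Optional[PCB]) -> list:
    """Walk the linked list of PCBs and collect the process IDs."""
    pids = []
    while head is not None:
        pids.append(head.pid)
        head = head.next
    return pids

# Build a three-element PCB list: 0 -> 1 -> 2
p2 = PCB(pid=2)
p1 = PCB(pid=1, next=p2)
p0 = PCB(pid=0, next=p1)
print(traverse(p0))   # [0, 1, 2]
```

Traversing the list is all the OS needs to visit every active process, which is why a linked list is a natural fit here.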

Role of Process Control Block


It is the job of the operating system to assign the CPU to a process, since a process
does not need the CPU all the time. Take I/O-bound processes as an example:
they use the CPU only in short bursts when triggered.
The process control block therefore serves as an identification card for each
process. The operating system cannot tell one process from another until it
consults the PCB of each process.
For example: MS Word processes, PDF processes, printing processes, and many
background processes may all be running at the same time. How will the OS
identify and manage each process without knowing its identity? This is where the
PCB comes into play, as a data structure that stores information about each
process.
Therefore, whenever a user triggers a process (such as a print command), a
process control block (PCB) is created for that process, and the operating system
uses it to execute and manage the process.

Structure of Process Control Block


The process control block contains many attributes, such as process ID, process
state, process priority, accounting information, program counter, CPU registers,
etc., for each process.
1. Process ID:
When a new process is created by the user, the operating system assigns it a
unique ID, i.e. a process ID. This ID distinguishes the process from the other
processes existing in the system.
The operating system has a limit on the maximum number of processes it is
capable of dealing with; let's say the OS can handle at most N processes at a time.
The process IDs will then take values from 0 to N-1:
 The first process is given ID 0.
 The second process is given ID 1.
 This continues up to ID N-1.
2. Process State:
A process, from its creation to its completion, goes through different states.
Generally, a process may be in one of the following 5 states during its execution:

 New: This state contains the processes which are ready to be loaded by
the operating system into the main memory.
 Ready: This state contains the processes which are ready to be executed
and are already present in the main memory of the system. The operating
system brings these processes from secondary memory (hard disk) into
main memory (RAM). As they wait in main memory to be assigned to the
CPU, their state is known as the Ready state.
 Running: This state contains the processes which are currently being
executed by the CPU in our system. If the system has a total of x CPUs,
then at most x processes can be running at any given time.
 Block or wait: A process may transition from its running state to a block
or wait state, either because of the scheduling algorithm or because of the
internal behavior of the process (the process explicitly wants to wait, e.g.
for I/O).
 Termination: A process that completes its execution comes to its
termination state. All the contents of that process (including its process
control block) are then deleted by the operating system.
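The five states and the legal moves between them can be captured in a small transition table. This is a minimal sketch assuming the rules described above (New → Ready → Running; Running → Ready, Wait, or Termination; Wait → Ready); real schedulers have more states and more transitions.

```python
# Legal state transitions, as described above. Illustrative model only.
TRANSITIONS = {
    "new":        {"ready"},                           # loaded into main memory
    "ready":      {"running"},                         # dispatched to the CPU
    "running":    {"ready", "waiting", "terminated"},  # preempted, blocked, or done
    "waiting":    {"ready"},                           # I/O or event completed
    "terminated": set(),                               # PCB is deleted; no way out
}

def move(state: str, new_state: str) -> str:
    """Transition a process, rejecting moves the state diagram forbids."""
    if new_state not in TRANSITIONS[state]:
        raise ValueError(f"illegal transition {state} -> {new_state}")
    return new_state

# A typical lifetime: created, scheduled, blocks on I/O, resumes, finishes.
s = "new"
for nxt in ["ready", "running", "waiting", "ready", "running", "terminated"]:
    s = move(s, nxt)
print(s)   # terminated
```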
3. Process Priority:
Process priority is a numeric value that represents the priority of each
process. The lower the value, the higher the priority of that process. This
priority is assigned when the PCB is created and may depend on many
factors, like the age of the process, the resources it consumes, and so on.
The user can also assign a priority to the process externally.
4. Process Accounting Information:
This attribute gives information about the resources used by the process
in its lifetime. For example: CPU time, connection time, etc.
5. Program Counter:
The program counter is a pointer to the next instruction in the program to
be executed. This attribute of the PCB holds the address of the next
instruction to be executed in the process.
6. CPU registers:
A CPU register is a small, quickly accessible storage location on the CPU.
When the process is not running, the contents of these registers are saved
in its PCB in main memory so that they can be restored later.
7. PCB pointer:
This field contains the address of the next PCB, e.g. the next process in
the ready state. It allows the operating system to chain PCBs into lists
(such as the ready queue) and to maintain the relationship between parent
processes and child processes.
8. List of open files:
As the name suggests, this field contains information about all the files
used by the process. It is important because it helps the operating system
close all of the process's open files at termination.
9. Process I/O information:
This field lists all the input/output devices required by the process during
its execution.

PCBs are stored in memory in the form of a linked list, as shown in the
figure.
Process table

The process table is a table that contains each process ID and a reference to the
corresponding PCB in memory. We can visualize the process table as a dictionary
listing all the running processes, keyed by process ID.

So, whenever a context switch occurs between processes, the operating system
refers to the process table to find the reference to the PCB with the help of the
corresponding process ID.
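The dictionary view of the process table can be sketched directly. This is a toy model: the PCB is reduced to a plain dict, and `create_process`/`lookup` are hypothetical helper names, not real OS calls.

```python
# Process table: process ID -> (a reference to) the process's PCB.
process_table = {}

def create_process(pid: int) -> dict:
    """Create a minimal PCB and record its reference in the process table."""
    pcb = {"pid": pid, "state": "ready", "pc": 0, "registers": {}}
    process_table[pid] = pcb
    return pcb

def lookup(pid: int) -> dict:
    """What the OS does on a context switch: map a PID to its PCB."""
    return process_table[pid]

create_process(7)
create_process(42)
print(lookup(42)["state"])   # ready
```

The point of the table is that the lookup returns a reference to the same PCB object, so the OS can mutate the process's state in place.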

Context Switching:
A context switch is the act of switching the CPU from one process or task
to another. It involves storing the state of a process so that the process can
be restored and resume execution at a later point. This allows multiple
processes to share a single CPU and is an essential feature of a
multitasking operating system.
So, whenever a context switch occurs, the current state of the running
process, held in the CPU registers, is saved into its PCB in main memory.
Keeping this state in main memory allows fast execution, since no time is
wasted saving and retrieving state information from secondary memory
(hard disk).

The state saved by a context switch includes the data that was stored in the
registers, the value of the program counter, and the stack pointer. Context
switching is necessary because if we passed control of the CPU directly to a new
process without saving the state of the old process, we could not later resume the
old process from where it stopped: we would not know what the last instruction
the old process executed was. Context switching overcomes this problem by
storing the state of the process.
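The save-and-restore step described above can be sketched with a toy CPU. The register set and the `context_switch` helper are illustrative assumptions; the point is only that the outgoing process's state goes into its PCB and the incoming process's saved state comes back out.

```python
# A toy CPU: a few registers, a program counter, and a stack pointer.
cpu = {"pc": 0, "sp": 0, "r0": 0, "r1": 0}

def fresh_pcb(pid):
    """A minimal PCB holding a saved copy of the CPU state."""
    return {"pid": pid, "saved": {"pc": 0, "sp": 0, "r0": 0, "r1": 0}}

def context_switch(old_pcb, new_pcb):
    """Save the CPU state into the old PCB, restore the new PCB's state."""
    old_pcb["saved"] = dict(cpu)       # save state of the outgoing process
    cpu.update(new_pcb["saved"])       # restore state of the incoming process

p0, p1 = fresh_pcb(0), fresh_pcb(1)

cpu.update({"pc": 100, "r0": 7})       # P0 runs for a while...
context_switch(p0, p1)                 # switch P0 -> P1
cpu.update({"pc": 555, "r0": 9})       # P1 runs...
context_switch(p1, p0)                 # switch back P1 -> P0
print(cpu["pc"], cpu["r0"])            # 100 7 -- P0 resumes where it left off
```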

Advantage of Context Switching


 The main advantage of context switching is that even if the system
contains only one CPU, it gives the user the illusion that the system has
multiple CPUs, with multiple processes being executed at once. Context
switching is so fast that the user does not even realize that the processes
are being switched back and forth.
The Disadvantage of Context Switching
 Though the context switching time is very short, the CPU remains idle
during that time and does no useful work.
 Also, context switching causes frequent flushes of the TLB (Translation
Lookaside Buffer) and cache.

Process Synchronization
There are two modes of execution of processes:
1. Serial mode
2. Parallel mode

Processes can be classified as:
 Cooperative processes: share resources such as variables, memory, code,
and devices (CPU, scanner, printer). The execution of one process affects
the other processes.
 Independent processes: do not share resources. The execution of one
process does not affect the other processes.
Critical section

A critical section refers to a segment of code that is executed by multiple
concurrent threads or processes and that accesses shared resources. These
resources may include shared memory, files, or other system resources that can
only be accessed by one thread or process at a time in order to avoid data
inconsistency or race conditions.
1. Once one thread or process has entered the critical section, all other
threads or processes must wait until the executing thread or process exits
it. The purpose of synchronization mechanisms is to ensure that only one
thread or process executes the critical section at a time.
2. The concept of a critical section is central to synchronization in computer
systems, as it is necessary to ensure that multiple threads or processes can
execute concurrently without interfering with each other.
3. Various synchronization mechanisms, such as semaphores, monitors, and
condition variables, are used to implement critical sections and ensure that
shared resources are accessed in a mutually exclusive manner.
The use of critical sections can improve the performance of concurrent systems,
as it allows multiple threads or processes to work together without interfering
with each other. However, care must be taken in designing and implementing
critical sections, as incorrect synchronization can lead to race conditions and
deadlocks.
Critical Section:
When more than one process tries to access the same code segment, that
segment is known as the critical section. The critical section contains shared
variables or resources which need to be synchronized to maintain the
consistency of data.
In simple terms, a critical section is a group of instructions/statements or a
region of code that needs to be executed atomically, such as code accessing a
shared resource (file, input or output port, global data, etc.).
The critical section problem needs a solution to synchronize the different
processes. Any solution to the critical section problem must satisfy the following
conditions −
 Mutual Exclusion
Mutual exclusion implies that only one process can be inside the critical section
at any time. If any other processes require the critical section, they must wait
until it is free.
 Progress
Progress means that if a process is not using the critical section, it should not
stop any other process from accessing it. In other words, any process can enter
a critical section if it is free.
 Bounded Waiting
Bounded waiting means that each process must have a limited waiting time. It
should not wait endlessly to access the critical section.

Race condition
Each process has some sharable resources and some non-sharable resources.
The sharable resources can be shared among cooperating processes; independent
processes do not need to share their resources.
When we synchronize processes and the synchronization is not done properly, a
race condition occurs. We can define a race condition as follows:
A race condition arises when multiple processes share data and access it
concurrently, and the output of the execution depends on the particular order in
which the processes access the shared data.
We can prevent a race condition by synchronizing the processes: we must
ensure that only one process can access the shared data at a time. This is the
main reason why we need to synchronize processes.
In short: if the output varies depending on the order of execution, that is a race
condition.

Example-

The following illustration shows how inconsistent results may be produced if
multiple processes execute concurrently without any synchronization.

Consider-
 Two processes P1 and P2 are executing concurrently.
 Both processes share a common variable named “count” with initial
value = 5.
 Process P1 tries to increment the value of count.
 Process P2 tries to decrement the value of count.

In assembly language, the instructions of each process may be written as three
steps (register names are illustrative):

P1: (1) Load count, R1        P2: (1) Load count, R2
    (2) INCR R1                   (2) DECR R2
    (3) Store R1, count           (3) Store R2, count

Now, when these processes execute concurrently without synchronization,
different results may be produced.

Case-01:

The execution order of the instructions may be-

P1 (1), P1 (2), P1 (3), P2 (1), P2 (2), P2 (3)
In this case, P1 completes first (count = 6) and then P2 completes (count = 5).
Final value of count = 5

Case-02:

The execution order of the instructions may be-

P2 (1), P2 (2), P2 (3), P1 (1), P1 (2), P1 (3)
In this case, P2 completes first (count = 4) and then P1 completes (count = 5).
Final value of count = 5

Case-03:

The execution order of the instructions may be-

P1 (1), P2 (1), P2 (2), P2 (3), P1 (2), P1 (3)
In this case, P1 reads count = 5, P2 then runs completely (count = 4), but P1
finally stores its stale incremented value, 6. P2's decrement is lost.
Final value of count = 6

Case-04:

The execution order of the instructions may be-

P2 (1), P1 (1), P1 (2), P1 (3), P2 (2), P2 (3)
In this case, P2 reads count = 5, P1 then runs completely (count = 6), but P2
finally stores its stale decremented value, 4. P1's increment is lost.
Final value of count = 4

Case-05:

The execution order of the instructions may be-

P1 (1), P1 (2), P2 (1), P2 (2), P1 (3), P2 (3)
In this case, both processes read count = 5 before either stores; P1 stores 6, and
P2 then overwrites it with 4. P1's increment is lost.
Final value of count = 4

It is clear from this that inconsistent results may be produced if multiple
processes execute concurrently without any synchronization.
Race Condition-

A race condition is a situation where-

 The final output produced depends on the execution order of the
instructions of the different processes.
 Several processes compete with each other for access to shared data.
The above example is a good illustration of a race condition.
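The five cases above can be replayed mechanically. The sketch below simulates the three-instruction load/modify/store sequences of P1 and P2 against a shared count starting at 5, and reproduces the five final values.

```python
def run(order, initial=5):
    """Execute an interleaving of P1's and P2's three instructions."""
    count = initial
    regs = {}                       # each process's private register
    step = {"P1": 0, "P2": 0}       # next instruction index per process
    delta = {"P1": +1, "P2": -1}    # P1 increments, P2 decrements
    for p in order:
        i = step[p]
        if i == 0:
            regs[p] = count         # (1) load count into the register
        elif i == 1:
            regs[p] += delta[p]     # (2) increment / decrement the register
        else:
            count = regs[p]         # (3) store the register back to count
        step[p] = i + 1
    return count

cases = [
    ["P1", "P1", "P1", "P2", "P2", "P2"],   # Case-01 -> 5
    ["P2", "P2", "P2", "P1", "P1", "P1"],   # Case-02 -> 5
    ["P1", "P2", "P2", "P2", "P1", "P1"],   # Case-03 -> 6
    ["P2", "P1", "P1", "P1", "P2", "P2"],   # Case-04 -> 4
    ["P1", "P1", "P2", "P2", "P1", "P2"],   # Case-05 -> 4
]
print([run(c) for c in cases])   # [5, 5, 6, 4, 4]
```

The lost updates appear exactly where a process stores a value computed from a stale read, which is the essence of the race condition.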

Lock Variable
This is the simplest synchronization mechanism. It is a software mechanism
implemented in user mode, and it is a busy waiting solution which can be used
for more than two processes.
In this mechanism, a lock variable called lock is used. The lock can take two
values, 0 or 1. Lock value 0 means that the critical section is vacant, while lock
value 1 means that it is occupied.
A process which wants to get into the critical section first checks the value of the
lock variable. If it is 0, the process sets the value of lock to 1 and enters the critical
section; otherwise, it waits.
The code of the mechanism looks like the following.
Entry Section →
1. while (lock != 0);
2. lock = 1;

Critical Section

Exit Section →
1. lock = 0;
If we look at the code, we find that there are three sections: the entry section, the
critical section, and the exit section.
Initially the value of the lock variable is 0. A process which needs to get into the
critical section enters the entry section and checks the condition in the while loop.
The process busy-waits in the while loop as long as the value of lock is 1. Since
at the very first attempt the critical section is vacant (lock = 0), the process enters
the critical section after setting the lock variable to 1.
When the process exits from the critical section, then in the exit section it resets
the value of lock to 0.
Every synchronization mechanism is judged on the basis of four conditions:
1. Mutual Exclusion
2. Progress
3. Bounded Waiting
4. Portability (architectural neutrality)
Of these four, Mutual Exclusion and Progress must be provided by any solution.
Let's analyze this mechanism on the basis of the above-mentioned conditions.
Mutual Exclusion
The lock variable mechanism does not provide mutual exclusion in some cases.
This is best seen by looking at the code from the operating system's point of
view, i.e. the assembly code of the program. Let's convert the code into assembly
language.
Entry section:
1. Load Lock, R0
2. CMP R0, #0
3. JNZ Step 1
4. Store #1, Lock
Exit section:
5. Store #0, Lock
Let us consider two processes P1 and P2. Process P1 wants to execute its critical
section, so it gets into the entry section and reads the lock variable. Since the
value of lock is 0, P1 intends to change it from 0 to 1 and enter the critical section.
Meanwhile, P1 is pre-empted by the CPU just after reading the lock but before
setting it to 1, and P2 gets scheduled. There is still no process in the critical section
and the value of the lock variable is still 0. P2 also wants to execute its critical
section, so it enters the critical section after setting the lock variable to 1.
Now, the CPU changes P1's state from waiting to running. P1 has already
checked the value of the lock variable and remembers that its value was 0 when
it previously checked it. Hence, it also sets the lock and enters the critical section
without checking the updated value of the lock variable.
Now we have two processes in the critical section. According to the condition of
mutual exclusion, more than one process must not be present in the critical
section at the same time. Hence, the lock variable mechanism does not guarantee
mutual exclusion.
The problem with the lock variable mechanism is that more than one process can
see the vacant tag at the same time, and therefore more than one process can enter
the critical section. Hence, the lock variable does not provide mutual exclusion,
and that is why it cannot be used in general.
Since this method fails at the most basic requirement, there is no need to discuss
whether the other conditions are fulfilled.
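The failing schedule can be replayed in a short simulation. Splitting the entry section into its two steps (read the lock, then test-and-set based on the value read) and preempting P1 between them shows both processes ending up in the critical section. The helper names are illustrative.

```python
lock = 0
saw = {}          # the value of lock each process read in its step 1
in_cs = []        # who is inside the critical section

def read_lock(p):
    """Step 1 of the entry section: load lock into a private register."""
    saw[p] = lock

def set_and_enter(p):
    """Step 2: test the previously read (possibly stale) value, then set."""
    global lock
    if saw[p] == 0:           # the test uses the *stale* register value...
        lock = 1              # ...so the process claims the critical section
        in_cs.append(p)

read_lock("P1")               # P1 reads lock = 0, then is preempted
read_lock("P2")               # P2 also reads lock = 0
set_and_enter("P2")           # P2 enters the critical section
set_and_enter("P1")           # P1 still believes lock was 0 -> also enters!
print(in_cs)                  # ['P2', 'P1'] -- mutual exclusion violated
```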
Test Set Lock Mechanism
Modification in the assembly code
In the lock variable mechanism, a process sometimes reads the old value of the
lock variable and enters the critical section. For this reason, more than one
process might get into the critical section. However, the code shown in part one
of the following section can be replaced with the code shown in part two. This
does not change the algorithm, but by doing this we can manage to provide
mutual exclusion to some extent, though not completely.
In the updated version of the code, the value of lock is loaded into the local
register R0 and then the value of lock is immediately set to 1.
In step 3, the previous value of lock (now stored in R0) is compared with 0. If it
is 0, the process simply enters the critical section; otherwise it waits by
executing the loop continuously.
The benefit of having the process set the lock to 1 immediately is that the
process which enters the critical section carries the updated value of the lock
variable, which is 1.
If it gets pre-empted and scheduled again, it will still not wrongly re-enter the
critical section, because it already holds the updated value of the lock variable.

Section 1 (original)          Section 2 (modified)

1. Load Lock, R0              1. Load Lock, R0
2. CMP R0, #0                 2. Store #1, Lock
3. JNZ Step 1                 3. CMP R0, #0
4. Store #1, Lock             4. JNZ Step 1

TSL Instruction
The solution in the above segment provides mutual exclusion to some extent, but
it does not make sure that mutual exclusion is always preserved. There is still a
possibility of having more than one process in the critical section.
What if a process gets pre-empted just after executing the first instruction of the
assembly code written in Section 2? In that case, it carries the old value of the
lock variable with it, and it will enter the critical section regardless of the current
value of the lock variable. This may put two processes in the critical section at
the same time.
To get rid of this problem, we have to make sure that pre-emption cannot take
place between loading the previous value of the lock variable and setting it to 1.
The problem is solved if we can merge the first two instructions into one.
To address this, the hardware provides a special instruction called the Test and
Set Lock (TSL) instruction, which loads the value of the lock variable into the
local register R0 and sets the lock to 1 in a single, indivisible step.
The process which executes TSL first will enter the critical section, and no other
process can enter after that until the first process comes out. No other process can
execute the critical section even if the first process is pre-empted.
The assembly code of the solution looks like the following.
1. TSL Lock, R0
2. CMP R0, #0
3. JNZ Step 1
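The effect of making the load and the store one indivisible step can be sketched as follows. The `tsl()` function below models the instruction by performing both actions in a single simulated step; this is a sketch of the semantics, not real atomic hardware.

```python
lock = 0

def tsl():
    """Model of TSL: atomically load the lock's old value and set it to 1."""
    global lock
    old, lock = lock, 1      # one indivisible step in this simulation
    return old

def try_enter():
    """Entry section: enter only if TSL returned 0 (the lock was free)."""
    return tsl() == 0

# Two processes race for the critical section, one after the other.
entered = [p for p in ("P1", "P2") if try_enter()]
print(entered)               # ['P1'] -- only the first process gets in

lock = 0                     # exit section: P1 releases the lock
print(try_enter())           # True -- now another process can enter
```

Because no preemption point exists between reading the old value and writing the new one, at most one caller can ever observe the lock as 0.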
Let's examine TSL on the basis of the four conditions.
 Mutual Exclusion
Mutual exclusion is guaranteed in the TSL mechanism, since a process can never
be pre-empted between reading the lock variable and setting it. Only one process
can see the lock variable as 0 at any particular time, and that is why mutual
exclusion is guaranteed.
 Progress
According to the definition of progress, a process which doesn't want to enter
the critical section should not stop other processes from getting into it. In the
TSL mechanism, a process executes the TSL instruction only when it wants to
get into the critical section. The value of the lock is always 0 if no process wants
to enter the critical section, hence progress is always guaranteed in TSL.
 Bounded Waiting
This synchronization mechanism does not guarantee bounded waiting.
 This synchronization mechanism may cause a process to starve for the
CPU.
 There might exist an unlucky process which, when it arrives to execute
the critical section, finds it busy.
 So, it keeps waiting in the while loop and eventually gets pre-empted.
 When it gets rescheduled and comes back to execute the critical section,
it finds another process executing the critical section.
 So, again, it keeps waiting in the while loop and eventually gets
pre-empted.
 This may happen several times, causing that unlucky process to starve
for the CPU.
 Architectural Neutrality
TSL does not provide architectural neutrality. It depends on the hardware
platform: the TSL instruction must be provided by the processor's instruction
set, and some platforms might not provide it. Hence it is not architecturally
neutral.
Turn Variable or Strict Alternation Approach
Turn Variable-
 The turn variable is a synchronization mechanism that provides
synchronization between two processes.
 It uses a single shared variable, turn, to provide the synchronization.
It is implemented as follows (this is the standard form of the mechanism):

P0:                        P1:
while (turn != 0);         while (turn != 1);
critical section           critical section
turn = 1;                  turn = 0;

Initially, the turn value is set to 0.

 Turn value = 0 means it is the turn of process P0 to enter the critical section.
 Turn value = 1 means it is the turn of process P1 to enter the critical section.
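The strict alternation behavior can be traced in a small simulation. Each call below models one attempt at the entry section (check the turn, run the critical section, flip the turn); the helper name is illustrative.

```python
turn = 0
history = []   # order in which processes complete their critical sections

def try_critical_section(process):
    """One attempt: entry check, critical section, exit section."""
    global turn
    if turn != process:        # the busy-wait condition: while (turn != process);
        return False           # a real process would spin here
    history.append(process)    # critical section
    turn = 1 - process         # exit section: hand the turn to the other process
    return True

print(try_critical_section(1))   # False -- P1 must wait, it is P0's turn
print(try_critical_section(0))   # True  -- P0 enters, then sets turn = 1
print(try_critical_section(0))   # False -- P0 cannot go twice in a row
print(try_critical_section(1))   # True  -- now P1 enters
print(history)                   # [0, 1] -- strictly alternating
```

The third call is the crux of the approach: even though the critical section is free, P0 is denied entry because it is not its turn, which is why progress is not guaranteed.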

Working-
This synchronization mechanism works as explained in the following scenes-
Scene-01:

 Process P0 arrives.
 It executes the while (turn != 0) instruction.
 Since the turn value is set to 0, the condition is false and the while loop
breaks.
 Process P0 enters the critical section and executes.
 Now, even if process P0 gets pre-empted in the middle, process P1 cannot
enter the critical section.
 Process P1 cannot enter unless process P0 completes and sets the turn
value to 1.
Scene-02:
 Process P1 arrives.
 It executes the while (turn != 1) instruction.
 Since the turn value is still 0, the condition is true.
 Process P1 is trapped inside the while loop.
 The while loop keeps process P1 busy until the turn value becomes 1 and
the condition breaks.
Scene-03:
 Process P0 comes out of the critical section and sets the turn value to 1.
 The while loop condition of process P1 breaks.
 Now the process P1, which was waiting for the critical section, enters the
critical section and executes.
 Now, even if process P1 gets pre-empted in the middle, process P0 cannot
enter the critical section.
 Process P0 cannot enter unless process P1 completes and sets the turn
value to 0.
Characteristics-

The characteristics of this synchronization mechanism are-


 It ensures mutual exclusion.
 It follows the strict alternation approach.
Strict Alternation Approach

In the strict alternation approach,

 Processes must enter the critical section strictly alternately, whether they
want to or not.
 This is because if one process does not enter the critical section, the other
process will never get a chance to execute again.
 It does not guarantee progress, since it follows the strict alternation
approach.
 It ensures bounded waiting, since processes execute turn-wise, one by
one, and each process is guaranteed to get a chance.
 It ensures that processes do not starve for the CPU.
 It is architecturally neutral, since it does not require any special support
from the hardware.
 It is a busy waiting solution which keeps the CPU busy while the process
is actually waiting.
Peterson Solution/Method
This is a software mechanism implemented in user mode. It is a busy waiting
solution that can be implemented for only two processes. It uses two variables:
the turn variable and the interested variable.
Until now, each of our solutions has been affected by one problem or another.
The Peterson solution, however, provides all the necessary requirements: Mutual
Exclusion, Progress, Bounded Waiting, and Portability.
Analysis of Peterson Solution
Entry Section (int process)
{
1. int other;
2. other = 1 - process;
3. interested[process] = TRUE;
4. turn = process;
5. while (interested[other] == TRUE && turn == process);
}

Critical Section

Exit Section (int process)
{
6. interested[process] = FALSE;
}

This is a two-process solution. Let us consider two cooperative processes, P0 and
P1. The entry section and exit section are shown above. Initially, the values of
the interested variables and the turn variable are 0.
First, process P0 arrives and wants to enter the critical section. It sets its
interested variable to TRUE (instruction line 3) and also sets turn to 0 (line 4).
Since the other process is not interested, the while condition in line 5 is false, so
P0 enters the critical section.
1. P0 → 1 2 3 4 5 CS
Meanwhile, process P0 is pre-empted and process P1 is scheduled. P1 also wants
to enter the critical section and executes instructions 1, 2, 3, and 4 of the entry
section. On instruction 5 it gets stuck, since the while condition holds (the other
process's interested variable is still TRUE, and P1 was the last to set turn).
Therefore it goes into busy waiting.
1. P1 → 1 2 3 4 5
P0 is scheduled again and finishes its critical section, executing instruction 6 of
the exit section (setting its interested variable to FALSE). Now when P1
re-checks the condition, it fails, since the other process's interested variable has
become FALSE. P1 therefore also enters the critical section.
1. P0 → 6
2. P1 → 5 CS
Either process may enter the critical section any number of times, and the
procedure continues in this cyclic order.
Mutual Exclusion
This method definitely provides mutual exclusion. In the entry section, the while
condition involves both variables, so a process cannot enter the critical section
while the other process is interested and it was itself the last one to update the
turn variable. Since at most one process can be the last writer of turn, at most one
process can be inside the critical section at a time.
Progress
An uninterested process will never stop an interested process from entering the
critical section. Only if the other process is also interested will a process have to
wait.
Bounded Waiting
A mechanism based on interested variables alone would fail because it does not
provide bounded waiting. In the Peterson solution, however, a deadlock can
never occur, because the process which set the turn variable first will enter the
critical section for sure. Therefore, if a process is pre-empted after executing line
4 of the entry section, it will definitely get into the critical section on its next
chance.
Portability
This is a purely software solution, and therefore it is portable to any hardware.
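A runnable sketch of the Peterson solution with two Python threads is shown below. One caveat: on real hardware this algorithm would additionally need memory barriers, since modern CPUs reorder loads and stores; this sketch relies on the Python interpreter executing statements in program order. The `time.sleep(0)` in the busy-wait loop only yields the interpreter so the demo finishes quickly.

```python
import threading
import time

interested = [False, False]   # interested[i]: process i wants the critical section
turn = 0
count = 0                     # shared variable protected by the algorithm
N = 2000                      # increments per process

def worker(process):
    global turn, count
    other = 1 - process
    for _ in range(N):
        # entry section (lines 1-5 of the pseudocode above)
        interested[process] = True
        turn = process
        while interested[other] and turn == process:
            time.sleep(0)     # busy wait; yield so the other thread can run
        # critical section: an unprotected read-modify-write could lose updates
        count = count + 1
        # exit section (line 6)
        interested[process] = False

threads = [threading.Thread(target=worker, args=(i,)) for i in (0, 1)]
for t in threads: t.start()
for t in threads: t.join()
print(count)                  # 4000 -- no increment was lost
```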
Sleep and Wake

(Producer Consumer problem)

Let's examine the basic model of sleep and wake. Assume that we have two
system calls, sleep and wake-up. A process which calls sleep gets blocked, while
a blocked process is resumed when some other process calls wake-up on it.

There is a popular example, called the producer consumer problem, which is the
most common problem used to illustrate the sleep and wake mechanism.

The concept of sleep and wake is very simple. If the buffer is not in the required
state, the process goes to sleep. It is woken up by the other process, which is
currently executing, so that it can get its turn.

In the producer consumer problem, let us say there are two processes: one
process writes something while the other process reads it. The process which
writes is called the producer, while the process which reads is called the
consumer.

In order to read and write, both of them use a shared buffer. The code that
simulates the sleep and wake mechanism as a solution to the producer consumer
problem is shown below.

producer()
{
    int item;
    while (TRUE)
    {
        item = produce_item();      // the producer produces an item
        if (count == N)             // if the buffer is full, the producer sleeps
            sleep();
        insert_item(item);          // the item is inserted into the buffer
        count = count + 1;
        if (count == 1)             // the producer wakes up the consumer
            wakeup(consumer);       // if there is at least 1 item in the buffer
    }
}

consumer()
{
    int item;
    while (TRUE)
    {
        if (count == 0)             // the consumer sleeps if the buffer is empty
            sleep();
        item = remove_item();
        count = count - 1;
        if (count == N - 1)         // if at least one slot has become free,
            wakeup(producer);       // the consumer wakes up the producer
        consume_item(item);         // the item is read by the consumer
    }
}

The producer produces an item and inserts it into the buffer. The value of the
global variable count is incremented at each insertion. If the buffer is completely
full and no slot is available, the producer sleeps; otherwise it keeps inserting.

On the consumer's end, the value of count is decremented by 1 at each
consumption. If the buffer is empty at any point, the consumer sleeps; otherwise
it keeps consuming items and decrementing the value of count by 1.

The consumer is woken up by the producer when there is at least 1 item in the
buffer available to consume. The producer is woken up by the consumer when
there is at least one free slot in the buffer to write into.

The problem arises when the consumer is pre-empted just before it was about to
sleep, i.e. after it has seen count == 0 but before it calls sleep. Now the consumer
is neither sleeping nor consuming. Since the producer is not aware that the
consumer is not actually sleeping, it issues a wake-up while the consumer is not
sleeping, and the wake-up call is lost.
When the consumer is scheduled again, it goes to sleep, because it was about to
sleep when it was pre-empted.

The producer keeps writing into the buffer, which fills up after some time. The
producer then also goes to sleep, expecting the consumer to wake him up when
a slot becomes available in the buffer.

But the consumer is also sleeping, unaware that the producer is waiting to be
woken up.

This is a kind of deadlock, where neither the producer nor the consumer is
active, and each is waiting for the other to wake it up. This is a serious problem
which needs to be addressed.

Using a flag bit to get rid of this problem

A flag (wake-up waiting) bit can be used to get rid of this problem. The producer
sets the bit when its wake-up call would otherwise be lost. When the consumer
is scheduled again, it checks the bit.

The consumer then knows that the producer tried to wake it, so instead of
sleeping it clears the bit and goes into the ready state to consume whatever the
producer has produced.

This solution works only for one pair of producer and consumer. What if there
are n producers and n consumers? In that case, we need to maintain an integer
which records how many wake-up calls have been made and how many
consumers need not sleep. This integer variable is called a semaphore. We will
discuss semaphores in more detail later.
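As a preview of that semaphore-based fix, here is a sketch using Python's `threading.Semaphore`: `empty` counts free slots, `full` counts filled slots, and a mutex protects the buffer itself. Wake-ups can no longer be lost, because a `release()` is remembered by the semaphore's counter even if no one is waiting yet. The variable names mirror the classic solution but are otherwise illustrative.

```python
import threading

N = 5                                   # buffer capacity
buf = []
empty = threading.Semaphore(N)          # counts free slots
full = threading.Semaphore(0)           # counts filled slots
mutex = threading.Lock()                # protects the buffer itself
consumed = []
ITEMS = 100

def producer():
    for item in range(ITEMS):
        empty.acquire()                 # sleep if the buffer is full
        with mutex:
            buf.append(item)
        full.release()                  # wake a sleeping consumer, if any

def consumer():
    for _ in range(ITEMS):
        full.acquire()                  # sleep if the buffer is empty
        with mutex:
            consumed.append(buf.pop(0))
        empty.release()                 # wake a sleeping producer, if any

p = threading.Thread(target=producer)
c = threading.Thread(target=consumer)
p.start(); c.start()
p.join(); c.join()
print(len(consumed), consumed == list(range(ITEMS)))   # 100 True
```

Note that the lost-wake-up window disappears: checking the count and blocking happen atomically inside `acquire()`, which is exactly what the sleep-and-wake scheme above could not guarantee.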

You might also like