A Process Control Block
A process control block (PCB) stores data about the process, such as registers,
quantum, priority, and so on. The process table refers to an array of PCBs, which
means that it logically contains a PCB for each of the system’s active processes.
New: This state contains the processes that are waiting to be loaded by the
operating system into main memory.
Ready: This state contains the processes that are ready to execute and are
already present in the main memory of the system. The operating system
brings processes from secondary memory (hard disk) into main memory
(RAM); because these processes are in main memory and are waiting to be
assigned to the CPU, their state is known as the Ready state.
Running: This state contains the processes that are currently being executed
by the CPU. If the system has a total of x CPUs, then at most x processes can
be in the Running state at any given time.
Block or wait: A process may transition from the Running state to the Block
or Wait state because of the scheduling algorithm or because of the internal
behavior of the process (the process explicitly wants to wait, for example for I/O).
Termination: A process that completes its execution enters the Termination
state. The operating system then deletes all the contents of that process,
including its process control block.
3. Process Priority:
Process priority is a numeric value that represents the priority of each
process. The lower the value, the higher the priority of that process. This
priority is assigned when the PCB is created and may depend on many
factors, such as the age of the process, the resources it has consumed, and
so on. The user can also assign a priority to the process externally.
4. Process Accounting Information:
This attribute records the resources used by the process during its lifetime,
for example CPU time, connection time, etc.
5. Program Counter:
The program counter is a pointer that points to the next instruction in the
program to be executed. This attribute of PCB contains the address of the
next instruction to be executed in the process.
6. CPU registers:
A CPU register is a small, quickly accessible storage location inside the
CPU. The PCB keeps a copy of the register contents so that they can be
restored when the process resumes execution.
7. PCB pointer:
This field contains the address of the next PCB, for example the next PCB
in the Ready state. It lets the operating system link PCBs together and move
easily from one process to the next, such as from a parent process to its
child processes.
8. List of open files:
As the name suggests, it contains information on all the files that are used
by that process. This field is important because it helps the operating system
close all the opened files when the process terminates.
9. Process I/O information:
This field lists all the input/output devices required by the process during
its execution.
PCBs are stored in memory in the form of a linked list.
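As a rough illustration (the field names here are hypothetical, not taken from any particular kernel), a PCB could be represented in C by a structure such as the following, with the next pointer forming the linked list:

#define MAX_OPEN_FILES 16

enum proc_state { NEW, READY, RUNNING, BLOCKED, TERMINATED };

struct pcb {
    int              pid;                        /* process ID                        */
    enum proc_state  state;                      /* current process state             */
    int              priority;                   /* lower value = higher priority     */
    unsigned long    cpu_time_used;              /* process accounting information    */
    unsigned long    program_counter;            /* address of the next instruction   */
    unsigned long    registers[16];              /* saved CPU register values         */
    int              open_files[MAX_OPEN_FILES]; /* list of open file descriptors     */
    /* I/O information, memory-management information, etc. would also go here */
    struct pcb      *next;                       /* PCB pointer: next PCB in the list */
};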
Process table
The process table is a table that contains the Process ID and a reference to the
corresponding PCB in memory. We can visualize the process table as a dictionary
listing all the processes running on the system.
So, whenever a context switch occurs between processes, the operating system
refers to the process table to find the reference to the PCB using the
corresponding Process ID.
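A minimal sketch of such a lookup, assuming the struct pcb sketched above and a simple fixed-size table (a real operating system would use a faster structure):

#include <stddef.h>

#define MAX_PROCESSES 256

struct proc_table_entry {
    int         pid;   /* process ID                     */
    struct pcb *pcb;   /* reference to the PCB in memory */
};

struct proc_table_entry process_table[MAX_PROCESSES];

/* Find the PCB of a process given its PID; returns NULL if not found. */
struct pcb *lookup_pcb(int pid)
{
    for (int i = 0; i < MAX_PROCESSES; i++)
        if (process_table[i].pid == pid)
            return process_table[i].pcb;
    return NULL;
}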
Context Switching:
Context switching is the procedure of switching the CPU from one
process or task to another. It means storing the state of a process
so that it can be restored and resume execution at a later point. This allows
multiple processes to share a single CPU and is an essential feature of a
multitasking operating system.
So, whenever a context switch occurs, the current state of the process, that is,
the contents of the CPU registers, is saved into its PCB in main memory. This
keeps execution fast, because no time is wasted saving and retrieving state
information from secondary memory (hard disk).
Context switching stores the state of the current process, which includes the
data held in the registers, the value of the program counter, and the stack
pointer. Context switching is necessary because, if we passed control of the CPU
to a new process without saving the state of the old process, we could not later
resume the old process from where it stopped, since we would not know which
instruction the old process executed last. Context switching overcomes this
problem by storing the state of the process.
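A simplified sketch of a context switch in terms of the PCB fields above; the helper functions read_program_counter, save_registers, load_registers and jump_to are hypothetical, and real kernels perform this step in assembly with hardware support:

void context_switch(struct pcb *old, struct pcb *next)
{
    /* 1. Save the state of the currently running process into its PCB. */
    old->program_counter = read_program_counter();
    save_registers(old->registers);
    old->state = READY;

    /* 2. Restore the state of the process selected to run next. */
    next->state = RUNNING;
    load_registers(next->registers);
    jump_to(next->program_counter);   /* resume where it left off */
}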
Process Synchronization
Two modes of execution of a process:
1. Serial mode
2. Parallel mode
Process
Independent process: one process's execution does not affect any other process.
Critical section
Race condition
Each process has some shareable resources and some non-shareable resources.
The shareable resources can be shared among cooperating processes; independent
(non-cooperating) processes do not need to share resources.
When cooperating processes are not synchronized properly, a race condition
occurs. We can define the race condition as follows:
A race condition is a situation in which several processes access shared data
concurrently and the output of the execution depends on the particular order in
which the accesses take place.
We can easily synchronize the processes to prevent the race condition. To prevent
the race condition, we need to ensure that only one process can access the shared
data at a time. This is the main reason why we need to synchronize the processes.
In short, if the output varies depending on the order in which the processes
execute, we have a race condition.
Example-
Consider-
Two processes P1 and P2 are executing concurrently.
Both the processes share a common variable named “count” having initial
value = 5.
Process P1 tries to increment the value of count.
Process P2 tries to decrement the value of count.
Depending on how the two updates interleave, the final value of count can differ.
If P1 and P2 run one after the other, both the increment and the decrement take
effect and count ends up with the expected value 5. But if both processes read
count = 5 before either writes its result back, one update is lost and count ends
up as 4 or 6, depending on which process writes last. Because the result depends
on the order of execution, this is a race condition.
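A small C program using POSIX threads illustrates the same race; the loops run many times so that lost updates become visible, and the printed value may differ from run to run:

#include <stdio.h>
#include <pthread.h>

int count = 5;                  /* shared variable with initial value 5 */

void *increment(void *arg)      /* plays the role of P1 */
{
    for (int i = 0; i < 100000; i++)
        count = count + 1;      /* read, add, write back: not atomic */
    return NULL;
}

void *decrement(void *arg)      /* plays the role of P2 */
{
    for (int i = 0; i < 100000; i++)
        count = count - 1;      /* interleaves with the increments */
    return NULL;
}

int main(void)
{
    pthread_t p1, p2;
    pthread_create(&p1, NULL, increment, NULL);
    pthread_create(&p2, NULL, decrement, NULL);
    pthread_join(p1, NULL);
    pthread_join(p2, NULL);
    printf("count = %d\n", count);   /* expected 5, but often differs */
    return 0;
}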
Lock Variable
This is the simplest synchronization mechanism. This is a Software Mechanism
implemented in User mode. This is a busy waiting solution which can be used for
more than two processes.
In this mechanism, a Lock variable lock is used. Two values of lock can be
possible, either 0 or 1. Lock value 0 means that the critical section is vacant while
the lock value 1 means that it is occupied.
A process which wants to get into the critical section first checks the value of the
lock variable. If it is 0 then it sets the value of lock as 1 and enters into the critical
section, otherwise it waits.
The code of the mechanism looks like the following.
Entry Section →
1. while (lock != 0);
2. lock = 1;
Critical Section
Exit Section →
1. lock = 0;
If we look at the code, we find that there are three sections in it: the entry
section, the critical section and the exit section.
Initially, the value of the lock variable is 0. A process that needs to get into
the critical section enters the entry section and checks the condition
given in the while loop.
The process keeps waiting in the while loop as long as the value of lock is 1.
Since the critical section is vacant the very first time, the process enters the
critical section after setting the lock variable to 1.
When the process exits from the critical section, then in the exit section, it
reassigns the value of lock as 0.
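Written out in C-like form, the same mechanism looks as follows; the comment marks the window in which pre-emption breaks mutual exclusion:

int lock = 0;              /* 0 = critical section vacant, 1 = occupied */

void entry_section(void)
{
    while (lock != 0)
        ;                  /* busy wait while the section is occupied */
    /* if the process is pre-empted right here, another process can also
       see lock == 0, and both will end up inside the critical section  */
    lock = 1;
}

void exit_section(void)
{
    lock = 0;              /* mark the critical section vacant again */
}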
Every Synchronization mechanism is judged on the basis of four conditions.
1. Mutual Exclusion
2. Progress
3. Bounded Waiting
4. Portability
Out of the four parameters, Mutual Exclusion and Progress must be provided by
any solution. Let’s analyze this mechanism on the basis of the above mentioned
conditions.
Mutual Exclusion
The lock variable mechanism doesn't provide mutual exclusion in some cases.
This can be seen by looking at the code from the operating system's point of
view, i.e. the assembly code of the program. Let's convert the code into
assembly language.
1. Load Lock, R0
2. CMP R0, #0
3. JNZ Step 1
4. Store #1, Lock
5. Store #0, Lock
Let us consider that we have two processes P1 and P2. The process P1 wants to
execute its critical section. P1 gets into the entry section; since the value of lock
is 0, P1 is about to change it from 0 to 1 and enter the critical section.
At this very moment P1 is pre-empted and P2 gets scheduled. There is still no
process in the critical section and the value of the lock variable is still 0. P2 also
wants to execute its critical section. It enters the critical section by setting the
lock variable to 1.
Now the CPU changes P1's state from ready to running. P1 is yet to finish its
entry section. P1 has already checked the value of the lock variable and remembers
that its value was 0 when it previously checked it. Hence, it also enters the
critical section without checking the updated value of the lock variable.
Now we have two processes in the critical section at the same time. According to
the condition of mutual exclusion, more than one process must never be present
in the critical section simultaneously. Hence, the lock variable mechanism doesn't
guarantee mutual exclusion.
The problem with the lock variable mechanism is that more than one process can
see the vacant tag at the same time, and therefore more than one process can enter
the critical section. Hence, the lock variable doesn't provide mutual exclusion,
which is why it cannot be used in general.
Since this method fails at the most basic requirement, there is no need to discuss
whether the other conditions are fulfilled.
Test Set Lock Mechanism
Modification in the assembly code
In the lock variable mechanism, a process sometimes reads the old value of the
lock variable and enters the critical section; for this reason, more than one process
might get into the critical section. However, the code shown in Section 1 of the
following listing can be replaced with the code shown in Section 2. This doesn't
change the algorithm, but by doing this we can provide mutual exclusion to some
extent, though not completely.
In the updated version of the code, the value of Lock is first loaded into the local
register R0 and then the value of Lock is set to 1.
In step 3, the previous value of Lock (which is now stored in R0) is compared
with 0. If it is 0, the process simply enters the critical section; otherwise it waits
by executing the loop continuously.
The benefit of the process setting the lock to 1 immediately is that the process
which enters the critical section now carries the updated value of the lock
variable, which is 1.
If it gets pre-empted and scheduled again, it will still not wrongly enter the
critical section, because it already knows what the updated value of the lock
variable is, regardless of the value the variable currently holds.
Section 1 (original code):
1. Load Lock, R0
2. CMP R0, #0
3. JNZ Step 1
4. Store #1, Lock

Section 2 (modified code):
1. Load Lock, R0
2. Store #1, Lock
3. CMP R0, #0
4. JNZ Step 1
TSL Instruction
However, the solution provided in the above segment provides mutual exclusion
only to some extent; it doesn't make sure that mutual exclusion will always hold.
There is still a possibility of having more than one process in the critical section.
What if a process gets pre-empted just after executing the first instruction of the
assembly code written in Section 2? In that case, it will carry the old value of the
lock variable with it and will enter the critical section without knowing the
current value of the lock variable. This can again put two processes in the
critical section at the same time.
To get rid of this problem, we have to make sure that pre-emption cannot take
place just after loading the previous value of the lock variable and before setting
it to 1. The problem can be solved if we are able to merge the first two
instructions.
To address the problem, the hardware provides a special instruction called Test
and Set Lock (TSL), which loads the value of the lock variable into the local
register R0 and sets it to 1 in a single, indivisible step.
The process which executes the TSL first will enter into the critical section and
no other process after that can enter until the first process comes out. No process
can execute the critical section even in the case of pre-emption of the first process.
The assembly code of the solution will look like following.
1. TSL Lock, R0
2. CMP R0, #0
3. JNZ step 1
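As an illustration, the effect of TSL can be expressed in C using the C11 atomic_exchange operation, which likewise reads the old value and writes the new value in a single atomic step:

#include <stdatomic.h>

atomic_int lock = 0;            /* 0 = vacant, 1 = occupied */

void acquire(void)
{
    /* atomically read the old value of lock and set it to 1,
       which is exactly what the TSL instruction does           */
    while (atomic_exchange(&lock, 1) != 0)
        ;                       /* busy wait while the old value was 1 */
}

void release(void)
{
    atomic_store(&lock, 0);     /* mark the critical section vacant */
}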
Let's examine TSL on the basis of the four conditions.
Mutual Exclusion
Mutual exclusion is guaranteed in the TSL mechanism, since a process can never
be pre-empted between reading the lock variable and setting it to 1. Only one
process can see the lock variable as 0 at a particular time, and that is why mutual
exclusion is guaranteed.
Progress
According to the definition of progress, a process which doesn't want to enter
the critical section should not stop other processes from getting into it. In the TSL
mechanism, a process executes the TSL instruction only when it wants to get
into the critical section. The value of the lock will always be 0 if no process
wants to enter the critical section; hence progress is always guaranteed in TSL.
Bounded Waiting
This synchronization mechanism does not guarantee bounded waiting.
This synchronization mechanism may cause a process to starve for the
CPU.
There might exist an unlucky process which, whenever it arrives to execute the
critical section, finds it busy.
So, it keeps waiting in the while loop and eventually gets preempted.
When it gets rescheduled and comes to execute the critical section, it finds
another process executing the critical section.
So, again, it keeps waiting in the while loop and eventually gets preempted.
This may happen several times which causes that unlucky process to starve
for the CPU.
Architectural Neutrality
TSL doesn't provide architectural neutrality. It depends on the hardware
platform: the TSL instruction must be provided by the hardware, and some
platforms do not provide it. Hence it is not architecturally neutral.
Turn Variable or Strict Alternation Approach
Turn Variable-
Turn variable is a synchronization mechanism that provides
synchronization among two processes.
It uses a turn variable to provide the synchronization.
It is implemented as-
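A minimal sketch of the implementation, consistent with the scenes described below (assuming a shared variable turn, initialized to 0):

int turn = 0;            /* shared variable; initially it is P0's turn */

/* Process P0 */
while (turn != 0)
    ;                    /* busy wait until it is P0's turn */
/* critical section */
turn = 1;                /* hand the turn over to P1 */

/* Process P1 */
while (turn != 1)
    ;                    /* busy wait until it is P1's turn */
/* critical section */
turn = 0;                /* hand the turn over to P0 */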
Working-
This synchronization mechanism works as explained in the following scenes-
Scene-01:
Process P0 arrives.
It executes the while (turn != 0) instruction.
Since the turn value is 0, the condition turn != 0 evaluates to false (0).
The while loop condition breaks.
Process P0 enters the critical section and executes.
Now, even if process P0 gets pre-empted in the middle, process P1 cannot
enter the critical section.
Process P1 cannot enter unless process P0 completes and sets the turn value
to 1.
Scene-02:
Process P1 arrives.
It executes the while (turn != 1) instruction.
Since the turn value is still 0, the condition turn != 1 evaluates to true (1).
This value does not break the while loop condition.
The process P1 is trapped inside an infinite while loop.
The while loop keeps the process P1 busy until the turn value becomes 1
and its condition breaks.
Scene-03:
Process P0 comes out of the critical section and sets the turn value to 1.
The while loop condition of process P1 breaks.
Now, the process P1 waiting for the critical section enters the critical
section and executes.
Now, even if process P1 gets pre-empted in the middle, process P0 cannot
enter the critical section.
Process P0 cannot enter unless process P1 completes and sets the turn value
to 0.
Peterson's Solution
This is a two process solution. Let us consider two cooperative processes P0 and
P1. The entry section and exit section are shown below. Initially, the value of
interested variables and turn variable is 0.
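A sketch of one common formulation of the entry and exit sections, consistent with the instruction numbers used in the discussion below (line 3 sets the interested flag, line 4 sets turn, line 5 is the busy wait, and line 6 is the exit section):

#define TRUE  1
#define FALSE 0

int interested[2] = { FALSE, FALSE };   /* shared interested variables */
int turn = 0;                           /* shared turn variable        */

void entry_section(int process)         /* process is 0 or 1 */
{
    int other;                                             /* 1 */
    other = 1 - process;                                   /* 2 */
    interested[process] = TRUE;                            /* 3 */
    turn = process;                                        /* 4 */
    while (interested[other] == TRUE && turn == process)   /* 5 */
        ;                               /* busy wait */
}

void exit_section(int process)
{
    interested[process] = FALSE;                           /* 6 */
}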
Initially, process P0 arrives and wants to enter the critical section. It sets its
interested variable to True (instruction line 3) and also sets turn to 0 (line number
4). Since the condition checked in line number 5 lets P0 proceed, it enters the
critical section.
P0 → 1 2 3 4 5 CS
Meanwhile, process P0 gets pre-empted and process P1 gets scheduled. P1 also
wants to enter the critical section and executes instructions 1, 2, 3 and 4 of the
entry section. On instruction 5 it gets stuck, since the condition does not let it
proceed (the other process's interested variable is still true). Therefore it goes into
busy waiting.
P1 → 1 2 3 4 5
P0 gets scheduled again and finishes its critical section by executing instruction
no. 6 (setting its interested variable to false). Now when P1 checks the condition
again, it is allowed to proceed, since the other process's interested variable has
become false. P1 then also enters the critical section.
P0 → 6
P1 → 5 CS
Either process may enter the critical section multiple times; the procedure
repeats in this cyclic order.
Mutual Exclusion
The method certainly provides mutual exclusion. In the entry section, the while
condition involves both variables; therefore a process cannot enter the critical
section while the other process is interested and it was itself the last one to
update the turn variable.
Progress
An uninterested process will never stop the other interested process from entering
in the critical section. If the other process is also interested then the process will
wait.
Bounded waiting
The interested-variable mechanism alone fails because it does not provide
bounded waiting. However, in Peterson's solution a deadlock can never happen,
because the process which first sets the turn variable will enter the critical
section for sure. Therefore, if a process is pre-empted after executing line number
4 of the entry section, it will definitely get into the critical section at its next
chance.
Portability
This is a completely software-based solution and is therefore portable to every
hardware platform.
Sleep and Wake
Let's examine the basic model, that is, sleep and wake. Assume that we have two
system calls, sleep and wake. A process that calls sleep gets blocked, while a
process on which wake is called gets woken up.
The concept of sleep and wake is very simple. If the critical section is not vacant,
the process goes to sleep. It is woken up by the other process, which is currently
executing inside the critical section, so that it can then get inside the critical
section.
In the producer-consumer problem, let us say there are two processes: one
process writes something while the other process reads it. The process that
writes is called the producer, while the process that reads is called the consumer.
In order to read and write, both of them are using a buffer. The code that simulates
the sleep and wake mechanism in terms of providing the solution to producer
consumer problem is shown below.
void producer()
{
    int item;
    while (true)
    {
        item = produce_item();      // the producer produces an item
        if (count == N)             // if the buffer is full, the producer sleeps
            sleep();
        insert_item(item);          // the item is inserted into the buffer
        count = count + 1;
        if (count == 1)             // the producer wakes up the consumer
            wakeup(consumer);       // if there is at least 1 item in the buffer
    }
}

void consumer()
{
    int item;
    while (true)
    {
        if (count == 0)             // the consumer sleeps if the buffer is empty
            sleep();
        item = remove_item();
        count = count - 1;
        if (count == N - 1)         // if there is at least one free slot,
            wakeup(producer);       // the consumer wakes up the producer
        consume_item(item);         // the item is read by the consumer
    }
}
The producer produces an item and inserts it into the buffer. The value of the
global variable count is increased at each insertion. If the buffer is completely
full and no slot is available, then the producer goes to sleep; otherwise it keeps
inserting.
The consumer is woken up by the producer when there is at least 1 item
available in the buffer to be consumed. The producer is woken up by the
consumer when there is at least one slot available in the buffer for the producer
to write into.
The problem arises when the consumer gets pre-empted just before it is about to
sleep, that is, after it has seen count == 0 but before it calls sleep. Now the
consumer is neither sleeping nor consuming. Since the producer is not aware
that the consumer is not actually sleeping, it keeps sending wake-up calls to the
consumer, and these calls have no effect because the consumer is not sleeping.
This leads to wasted wake-up calls. When the consumer gets scheduled again, it
goes to sleep, because it was about to sleep when it was pre-empted.
The producer keeps writing into the buffer, and after some time the buffer becomes
full. The producer then also goes to sleep, expecting that the consumer will wake
it up when a slot becomes available in the buffer.
The consumer is also sleeping, unaware that the producer is waiting to be woken
up.
This is a kind of deadlock: neither the producer nor the consumer is active, and
each is waiting for the other to wake it up. This is a serious problem that needs
to be addressed.
A flag bit (a wake-up waiting bit) can be used to get rid of this problem. The
producer sets the bit when it calls wake-up the first time. When the consumer
gets scheduled, it checks the bit.
The consumer thus gets to know that the producer tried to wake it up; therefore
it does not sleep and stays in the ready state to consume whatever has been
produced by the producer.
This solution works only for one pair of producer and consumer. What if there
are n producers and n consumers? In that case, we need to maintain an integer
that records how many wake-up calls have been made, that is, how many
processes need not sleep. This integer variable is called a semaphore. We will
discuss semaphores in more detail later.
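As a preview, a sketch of the producer-consumer solution using counting semaphores (assuming the usual wait and signal operations, which are discussed later, and the same produce_item, insert_item, remove_item and consume_item helpers as above):

semaphore empty = N;    /* counts the empty slots in the buffer  */
semaphore full  = 0;    /* counts the filled slots in the buffer */
semaphore mutex = 1;    /* protects the buffer itself            */

void producer()
{
    while (true)
    {
        int item = produce_item();
        wait(&empty);           /* sleep if there is no empty slot   */
        wait(&mutex);
        insert_item(item);
        signal(&mutex);
        signal(&full);          /* wake a consumer: one more item    */
    }
}

void consumer()
{
    while (true)
    {
        wait(&full);            /* sleep if the buffer is empty      */
        wait(&mutex);
        int item = remove_item();
        signal(&mutex);
        signal(&empty);         /* wake a producer: one more slot    */
        consume_item(item);
    }
}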