Unit 2: Modified OS
Stack:
The process Stack contains the temporary data such as method/function parameters, return
address and local variables.
Heap
This is dynamically allocated memory to a process during its run time.
Text
This includes the current activity represented by the value of Program Counter and the
contents of the processor's registers.
Data
This section contains the global and static variables.
Program
A program is a piece of code which may be a single line or millions of lines.
A computer program is usually written by a computer programmer in a programming
language.
#include <stdio.h>
int main() {
printf("Hello, World! \n");
return 0;
}
A computer program is a collection of instructions that performs a specific task when
executed by a computer.
A part of a computer program that performs a well-defined task is known as an algorithm.
A collection of computer programs, libraries and related data is referred to as software.
Process Life Cycle
When a process executes, it passes through different states. These stages may differ in
different operating systems, and the names of these states are also not standardized.
Start
This is the initial state when a process is first started/created.
Ready
The process is waiting to be assigned to a processor.
Ready processes are waiting to have the processor allocated to them by the operating system
so that they can run.
A process may come into this state after the Start state, or while running it may be
interrupted by the scheduler so that the CPU can be assigned to some other process.
Running
Once the process has been assigned to a processor by the OS scheduler, the process state
is set to running and the processor executes its instructions.
Waiting
Process moves into the waiting state if it needs to wait for a resource, such as waiting for
user input, or waiting for a file to become available.
Terminated or Exit
Once the process finishes its execution, or it is terminated by the operating system, it is
moved to the terminated state where it waits to be removed from main memory.
Process Control Block (PCB)
A Process Control Block is a data structure maintained by the Operating System for every
process.
The PCB is identified by an integer process ID (PID).
Process State
The current state of the process i.e., whether it is ready, running, waiting, or whatever.
Process privileges
This is required to allow/disallow access to system resources.
Process ID
Unique identification for each process in the operating system.
Pointer
A pointer to the parent process.
Program Counter
Program Counter is a pointer to the address of the next instruction to be executed for this
process.
CPU registers
The various CPU registers whose contents must be saved when the process leaves the running
state, so that it can resume execution later.
CPU Scheduling Information
Process priority and other scheduling information which is required to schedule the
process.
Memory management information
This includes information about the page table, memory limits, and segment table, depending
on the memory management scheme used by the operating system.
Accounting information
This includes the amount of CPU used for process execution, time limits, execution ID
etc.
IO status information
This includes a list of I/O devices allocated to the process.
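As an illustration of how these fields fit together, a simplified PCB could be sketched in C as below; the field names, types, and sizes are assumptions for teaching purposes, not the layout of any real operating system.

/* A simplified, hypothetical Process Control Block (illustrative only). */
typedef enum { NEW, READY, RUNNING, WAITING, TERMINATED } process_state_t;

struct pcb {
    int              pid;              /* unique process ID (PID)        */
    process_state_t  state;            /* current process state          */
    int              priority;         /* CPU scheduling information     */
    unsigned long    program_counter;  /* address of next instruction    */
    unsigned long    registers[16];    /* saved CPU registers            */
    void            *page_table;       /* memory-management information  */
    unsigned long    cpu_time_used;    /* accounting information         */
    int              open_files[16];   /* I/O status information         */
    struct pcb      *parent;           /* pointer to the parent process  */
};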
Process Scheduling
The process scheduling is the activity of the process manager that handles the removal of
the running process from the CPU and the selection of another process on the basis of a
particular strategy.
Process scheduling is an essential part of a Multiprogramming operating systems. Such
operating systems allow more than one process to be loaded into the executable memory
at a time and the loaded process shares the CPU using time multiplexing.
Categories of Scheduling
There are two categories of scheduling:
1. Non-preemptive:
Here the resource can’t be taken from a process until the process completes execution.
The CPU is switched to another process only when the running process terminates or moves to a
waiting state.
2. Preemptive:
Here the OS allocates the resources to a process for a fixed amount of time.
During resource allocation, the process switches from running state to ready state or from
waiting state to ready state.
This switching occurs because the CPU may be given to another process, so a higher-priority
process can replace the currently running process.
Process Scheduling Queues
The Operating System maintains the following important process scheduling queues −
Job queue − This queue keeps all the processes in the system.
Ready queue − This queue keeps a set of all processes residing in main memory, ready
and waiting to execute. A new process is always put in this queue.
Device queues − The processes which are blocked due to unavailability of an I/O device
constitute this queue.
Two-State Process Model
Two-state process model refers to running and non-running states which are described below
−
Running
When a new process is created, it enters the system in the running state.
Not Running
Processes that are not running are kept in queue, waiting for their turn to execute.
Each entry in the queue is a pointer to a particular process.
The queue is implemented by using a linked list.
When a process is interrupted, it is transferred to the waiting queue.
If the process has completed or aborted, the process is discarded.
In either case, the dispatcher then selects a process from the queue to execute.
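The not-running queue described above is commonly kept as a linked list of pointers to PCBs. A minimal sketch in C, reusing the hypothetical struct pcb shown earlier (allocation-failure handling omitted):

#include <stdlib.h>

struct queue_node {
    struct pcb        *process;   /* each entry points to a particular process */
    struct queue_node *next;
};

static struct queue_node *head = NULL, *tail = NULL;

/* An interrupted process is placed at the tail of the waiting queue. */
void enqueue(struct pcb *p) {
    struct queue_node *n = malloc(sizeof *n);
    n->process = p;
    n->next = NULL;
    if (tail) tail->next = n; else head = n;
    tail = n;
}

/* The dispatcher removes the process at the head of the queue to execute it. */
struct pcb *dispatch(void) {
    if (head == NULL) return NULL;
    struct queue_node *n = head;
    struct pcb *p = n->process;
    head = n->next;
    if (head == NULL) tail = NULL;
    free(n);
    return p;
}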
Schedulers
Schedulers are special system software which handle process scheduling in various ways.
Their main task is to select the jobs to be submitted into the system and to decide which
process to run.
Schedulers are of three types −
Long-Term Scheduler
Short-Term Scheduler
Medium-Term Scheduler
Long Term Scheduler
It is also called a job scheduler.
A long-term scheduler determines which programs are admitted to the system for
processing.
It selects processes from the queue and loads them into memory for execution.
The selected processes are loaded into memory so that they can be scheduled on the CPU.
The primary objective of the job scheduler is to provide a balanced mix of jobs, such as
I/O-bound and CPU-bound jobs.
Short Term Scheduler
It is also called the CPU scheduler.
Its main objective is to increase system performance in accordance with the chosen set of
criteria.
Medium Term Scheduler
Medium-term scheduling is a part of swapping.
It removes the processes from the memory.
It reduces the degree of multiprogramming.
Operations on processes
There are many operations that can be performed on processes. Some of these are
process creation, process preemption, process blocking, and process termination. These
are given in detail as follows −
Process Creation
Processes need to be created in the system for different operations. This can be done by the
following events −
User request for process creation
System initialization
Execution of a process creation system call by a running process
Batch job initialization
A process may be created by another process using fork(). The creating process is called
the parent process and the created process is the child process.
A child process can have only one parent but a parent process may have many children.
Both the parent and child processes have the same memory image, open files, and
environment strings. However, they have distinct address spaces.
A diagram that demonstrates process creation using fork() is as follows –
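In addition to the diagram, a minimal C sketch of fork() on a POSIX system is shown below (illustrative only; error handling kept to a minimum):

#include <stdio.h>
#include <sys/types.h>
#include <unistd.h>

int main(void) {
    pid_t pid = fork();                /* create a child process          */
    if (pid < 0) {
        perror("fork");                /* fork failed                     */
        return 1;
    } else if (pid == 0) {
        /* child: same memory image as the parent, but a distinct copy    */
        printf("child:  pid=%d, parent=%d\n", (int)getpid(), (int)getppid());
    } else {
        /* parent: fork() returns the child's PID                         */
        printf("parent: pid=%d, child=%d\n", (int)getpid(), (int)pid);
    }
    return 0;
}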
Process Preemption
An interrupt mechanism is used in preemption that suspends the process executing
currently and the next process to execute is determined by the short-term scheduler.
Preemption makes sure that all processes get some CPU time for execution.
A diagram that demonstrates process preemption is as follows –
Process Blocking
The process is blocked if it is waiting for some event to occur.
This event may be I/O, since I/O operations are carried out by devices and do not
require the processor.
After the event is complete, the process again goes to the ready state.
A diagram that demonstrates process blocking is as follows –
Process Termination
After the process has completed the execution of its last instruction, it is terminated.
The resources held by a process are released after it is terminated.
A child process can be terminated by its parent process if its task is no longer relevant.
The child process sends its status information to the parent process before it terminates.
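A short POSIX-style sketch (an illustration, not the only mechanism) of a parent collecting a child's exit status before the child is fully removed:

#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>
#include <sys/wait.h>

int main(void) {
    pid_t pid = fork();
    if (pid == 0) {
        exit(7);                             /* child terminates with status 7 */
    } else if (pid > 0) {
        int status;
        waitpid(pid, &status, 0);            /* parent receives the status     */
        if (WIFEXITED(status))
            printf("child %d exited with status %d\n",
                   (int)pid, WEXITSTATUS(status));
    }
    return 0;
}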
CPU scheduling Terminologies
Burst Time/Execution Time:
It is the time required by the process to complete execution. It is also called running time.
Arrival Time:
The time at which a process enters the ready state.
Finish Time:
The time at which a process completes execution and exits the system.
Multiprogramming:
The number of programs that can be present in memory at the same time.
Jobs: A type of program that runs without any user interaction.
User: A type of program that involves user interaction.
Process: The term used to refer to both jobs and user programs.
CPU/IO burst cycle: Characterizes process execution, which alternates between CPU
activity and I/O activity. CPU bursts are usually shorter than I/O bursts.
CPU Scheduling Criteria
A CPU scheduling algorithm tries to maximize or minimize the following criteria:
Maximize
CPU utilization:
The operating system needs to make sure that the CPU remains as busy as possible.
CPU utilization can range from 0 to 100 percent; in a real system it should range from about
40 percent (for a lightly loaded system) to about 90 percent (for a heavily loaded system).
Throughput:
The number of processes that finish their execution per unit time is known as throughput.
So, when the CPU is busy executing processes, work is being done, and the work completed
per unit time is called throughput.
Minimize
Waiting time:
Waiting time is the amount of time a specific process spends waiting in the ready queue.
Response time:
It is the amount of time from when a request is submitted until the first response is
produced.
Turnaround Time:
Turnaround time is the amount of time taken to execute a specific process.
It is the total of the time spent waiting to get into memory, waiting in the ready queue,
and executing on the CPU. The period from the time of process submission to the time of
completion is the turnaround time.
Interval Timer
Timer interruption is a method that is closely related to preemption. When a certain process
gets the CPU allocation, a timer may be set to a specified interval. Both timer interruption
and preemption force a process to return the CPU before its CPU burst is complete.
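A hedged user-level sketch of the idea, using the POSIX setitimer() call to deliver SIGALRM at a fixed interval (the one-second interval is an arbitrary choice):

#include <signal.h>
#include <sys/time.h>
#include <unistd.h>

static void on_timer(int sig) {
    (void)sig;
    /* In an OS, this is the point where the scheduler could preempt the process. */
    write(STDOUT_FILENO, "timer fired\n", 12);
}

int main(void) {
    struct itimerval tv = {0};
    signal(SIGALRM, on_timer);
    tv.it_value.tv_sec = 1;        /* first expiry after 1 second  */
    tv.it_interval.tv_sec = 1;     /* then every 1 second          */
    setitimer(ITIMER_REAL, &tv, NULL);
    for (;;)
        pause();                   /* wait for timer interrupts    */
}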
Types of CPU scheduling Algorithm
There are mainly six types of process scheduling algorithms
1. First Come First Serve (FCFS)
2. Shortest-Job-First (SJF) Scheduling
3. Shortest Remaining Time
4. Priority Scheduling
5. Round Robin Scheduling
6. Multilevel Queue Scheduling
EXAMPLE 1:
Ready Queue: P1 P2 P3 P4 P5
Gantt Chart: P1 (0-3), P2 (3-4), P3 (4-9), P4 (9-11), P5 (11-15)
C.T = completion time of each process, read from the Gantt chart
T.A.T = C.T - A.T
Avg T.A.T = (3+3+7+8+11)/5 = 32/5 = 6.4
W.T = T.A.T - B.T
Avg W.T = (0+2+2+6+7)/5 = 17/5 = 3.4
R.T = (time at which the process first gets the CPU, read from the Gantt chart) - A.T
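A short C sketch of the calculation in the example above. Since the original input table is not reproduced here, the arrival and burst times (A.T = 0,1,2,3,4; B.T = 3,1,5,2,4) are inferred from the completion and turnaround times shown, and processes are assumed to be served in ready-queue (first-come, first-served) order:

#include <stdio.h>

int main(void) {
    int at[] = {0, 1, 2, 3, 4};            /* arrival times (inferred)            */
    int bt[] = {3, 1, 5, 2, 4};            /* burst times (inferred)              */
    int n = 5, time = 0;
    float tat_sum = 0, wt_sum = 0;

    for (int i = 0; i < n; i++) {
        if (time < at[i]) time = at[i];    /* CPU idles until the process arrives */
        time += bt[i];                     /* completion time C.T                 */
        int tat = time - at[i];            /* T.A.T = C.T - A.T                   */
        int wt  = tat - bt[i];             /* W.T  = T.A.T - B.T                  */
        tat_sum += tat;
        wt_sum  += wt;
        printf("P%d: C.T=%d T.A.T=%d W.T=%d\n", i + 1, time, tat, wt);
    }
    printf("Avg T.A.T = %.1f, Avg W.T = %.1f\n", tat_sum / n, wt_sum / n);
    return 0;
}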
EXAMPLE 2:
Ready Queue: P0 P1 P2 P3 P4
Gantt Chart: P0 (0-3), P1 (3-9), P2 (9-13), P3 (13-18), P4 (18-20)
C.T = completion time of each process, read from the Gantt chart
T.A.T = C.T - A.T
Avg T.A.T = (3+7+9+12+12)/5 = 43/5 = 8.6
W.T = T.A.T - B.T
Avg W.T = (0+1+5+7+10)/5 = 23/5 = 4.6
R.T = (time at which the process first gets the CPU, read from the Gantt chart) - A.T
EXAMPLE 3:
Ready Queue: P0 P1 P2 P3
Gantt Chart: P0 (0-5), P1 (5-8), P2 (8-16), P3 (16-22)
C.T = completion time of each process, read from the Gantt chart
T.A.T = C.T - A.T
Avg T.A.T = (5+7+14+19)/4 = 45/4 = 11.25
W.T = T.A.T - B.T
Avg W.T = (0+4+6+13)/4 = 23/4 = 5.75
R.T = (time at which the process first gets the CPU, read from the Gantt chart) - A.T
EXAMPLE 1:
Ready Queue: P1 P2 P3 P4 P5
Gantt Chart: P1 (0-6), P3 (6-7), P4 (7-10), P2 (10-14), P5 (14-21)

Ready Queue: P1 P2 P3 P4 P5
Gantt Chart: P0 (0-3), P1 (3-9), P4 (9-11), P2 (11-15), P3 (15-20)

Ready Queue: P1 P2 P3 P4 P5 P6
Gantt Chart: P1 (0-4), P2 (4-9), P5 (9-16), P3 (16-18), P6 (18-21), P4 (21-24)

Ready Queue: P1 P2 P3 P4 P5
Gantt Chart: P2 (0-1), P5 (1-6), P1 (6-16), P3 (16-18), P4 (18-19)

Ready Queue: P1 P2 P3 P4 P5 P6
Gantt Chart: P1 (0-1), P2 (1-2), P3 (2-3), P2 (3-4), P2 (4-6), P5 (6-8), P4 (8-11), P6 (11-15), P1 (15-20)

EXAMPLE 2:
Ready Queue: P0 P1 P2 P3 P4 P5
Gantt Chart: P0 (0-1), P1 (1-2), P2 (2-3), P3 (3-4), P2 (4-5), P2 (5-6), P5 (6-7), P4 (7-9), P1 (9-13), P0 (13-17)

EXAMPLE 3:
Ready Queue: P0 P1 P2 P3
Gantt Chart: P1 (0-1), P2 (1-5), P4 (5-10), P1 (10-17), P3 (17-26)
P.NO  A.T  B.T  B.T Remaining  C.T  T.A.T  W.T  R.T
P1    0    4    2/0            8    8      4    0
P2    1    3    1/0            12   11     8    1
P3    2    5    3/1/0          19   17     12   2
P4    3    1    0              9    6      5    5
P5    4    2    0              11   7      5    5
P6    6    4    2/0            18   12     6    6
P.NO  A.T  B.T  B.T Remaining  C.T  T.A.T  W.T  R.T
P1    0    4    2/0            8    8      4    0
P2    1    5    3/1/0          18   17     12   1
P3    2    2    0              6    4      2    2
P4    3    1    0              9    6      5    5
P5    4    6    4/2/0          21   17     11   5
P6    6    3    1/0            19   13     10   7
Pipe:-
The pipe is a type of data channel that is unidirectional in nature. It means that the data in
this type of data channel can be moved in only a single direction at a time.
The two different types of pipes are ordinary pipes and named pipes.
o Ordinary pipes only allow one-way communication.
o For two-way communication, two pipes are required. Ordinary pipes have a parent-child
relationship between the processes, as the pipes can only be accessed by processes that
created or inherited them.
o Named pipes are more powerful than ordinary pipes and allow two-way communication.
o These pipes exist even after the processes using them have terminated. They need to be
explicitly deleted when no longer required.
A diagram that demonstrates pipes is given as follows.
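In addition to the diagram, a minimal POSIX sketch of an ordinary (one-way) pipe between a parent and its child is shown below (error handling omitted for brevity):

#include <stdio.h>
#include <unistd.h>

int main(void) {
    int fd[2];
    char buf[32];

    pipe(fd);                         /* fd[0] = read end, fd[1] = write end */
    if (fork() == 0) {
        close(fd[1]);                 /* child only reads                    */
        read(fd[0], buf, sizeof buf);
        printf("child received: %s\n", buf);
    } else {
        close(fd[0]);                 /* parent only writes                  */
        write(fd[1], "hello", 6);     /* 6 bytes include the terminator      */
    }
    return 0;
}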
Shared Memory:-
It can be referred to as a type of memory that can be used or accessed by multiple
processes simultaneously.
It is primarily used so that the processes can communicate with each other.
Shared memory is therefore supported by almost all POSIX and Windows operating systems.
Message Queue:
In general, several different processes are allowed to read and write data to the
message queue.
In the message queue, messages are stored or stay in the queue until their recipients
retrieve them.
In short, we can say that the message queue is very helpful for inter-process
communication and is used by all operating systems.
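A hedged sketch using the POSIX message-queue API; the queue name /demo_queue and the size limits are arbitrary choices for illustration (on older Linux systems, link with -lrt):

#include <stdio.h>
#include <fcntl.h>
#include <mqueue.h>

int main(void) {
    struct mq_attr attr = { .mq_maxmsg = 10, .mq_msgsize = 64 };
    char buf[64];

    /* messages stay in the queue until the recipient retrieves them */
    mqd_t q = mq_open("/demo_queue", O_CREAT | O_RDWR, 0600, &attr);
    mq_send(q, "hello", 6, 0);                 /* write a message  */
    mq_receive(q, buf, sizeof buf, NULL);      /* read it back     */
    printf("received: %s\n", buf);

    mq_close(q);
    mq_unlink("/demo_queue");
    return 0;
}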
Message Passing:-
It is a type of mechanism that allows processes to synchronize and communicate with
each other.
However, by using message passing, the processes can communicate with each other
without resorting to shared variables.
Usually, the inter-process communication mechanism provides two operations that are as
follows:
1. send (message)
2. receive (message)
Direct Communication:-
In this type of communication process, usually, a link is created or established between
two communicating processes.
However, in every pair of communicating processes, only one link can exist.
Indirect Communication
Indirect communication can only be established when processes share a common mailbox,
and each pair of processes may share several communication links.
These shared links can be unidirectional or bidirectional.
i) Shared Memory Method
Shared memory is the memory that can be simultaneously accessed by multiple
processes.
This is done so that the processes can communicate with each other. All POSIX
systems, as well as Windows operating systems use shared memory.
Advantage of Shared Memory Model
Memory communication is faster on the shared memory model as compared to the message
passing model on the same machine.
Disadvantages of Shared Memory Model
i) All the processes that use the shared memory model need to make sure that they are
not writing to the same memory location.
ii) Shared memory model may create problems such as synchronization and memory
protection that need to be addressed.
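A minimal POSIX shared-memory sketch using shm_open() and mmap(); the object name /demo_shm and the size are arbitrary, and in a real program two separate processes would map the same name to communicate:

#include <stdio.h>
#include <string.h>
#include <fcntl.h>
#include <unistd.h>
#include <sys/mman.h>

int main(void) {
    const char *name = "/demo_shm";
    const size_t size = 4096;

    int fd = shm_open(name, O_CREAT | O_RDWR, 0600);   /* create/open the object */
    ftruncate(fd, size);                               /* set its size           */

    /* map the object into this process's address space */
    char *mem = mmap(NULL, size, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);

    strcpy(mem, "hello from shared memory");           /* any process mapping    */
    printf("%s\n", mem);                               /* the same name sees it  */

    munmap(mem, size);
    close(fd);
    shm_unlink(name);
    return 0;
}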
ii) Messaging Passing Method
Establish a communication link (if a link already exists, no need to establish it again.)
Start exchanging messages using basic primitives.
We need at least two primitives:
– send(message, destination) or send(message)
– receive(message, host) or receive(message)
Types of Semaphore
Counting Semaphore:
The semaphore S value is initialized to the number of resources present in the system.
Whenever a process wants to access the resource, it performs the wait()operation on the
semaphore and decrements the semaphore value by one.
When it releases the resource, it performs the signal() operation on the semaphore
and increments the semaphore value by one.
When the semaphore count goes to 0, all the resources are occupied by processes.
If a process needs to use a resource when the semaphore count is 0, it executes the wait()
operation and is blocked until the semaphore value becomes greater than 0.
Binary semaphore:
The value of a binary semaphore ranges between 0 and 1.
It is similar to mutex lock, but mutex is a locking mechanism, whereas the semaphore is a
signaling mechanism.
In a binary semaphore, if a process wants to access the resource, it performs the
wait() operation on the semaphore and decrements the value of the semaphore from 1 to 0.
When it releases the resource, it performs a signal() operation on the semaphore and
increments its value to 1.
Suppose the value of the semaphore is 0 and a process wants to access the resource. In
that case, it performs the wait() operation and blocks itself until the process currently
using the resource releases it.
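A small sketch of these operations using the POSIX semaphore API, where sem_wait() plays the role of wait() and sem_post() the role of signal(); the initial value 3 is an arbitrary resource count:

#include <stdio.h>
#include <semaphore.h>

int main(void) {
    sem_t s;
    sem_init(&s, 0, 3);    /* counting semaphore: 3 resources available    */

    sem_wait(&s);          /* wait(): acquire a resource, value 3 -> 2     */
    /* ... use the resource here ...                                       */
    sem_post(&s);          /* signal(): release the resource, value 2 -> 3 */

    int value;
    sem_getvalue(&s, &value);
    printf("semaphore value: %d\n", value);

    sem_destroy(&s);
    return 0;
}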
Advantages of Semaphore
Here are the following advantages of semaphore, such as:
It allows more than one thread to access the critical section.
Semaphores are machine-independent.
Semaphores are implemented in the machine-independent code of the microkernel.
They do not allow multiple processes to enter the critical section.
As there is no busy waiting in semaphores (blocked processes are put to sleep), process
time and resources are not wasted.
They allow flexible management of resources.
Disadvantage of Semaphores
Semaphores also have some disadvantages, such as:
One of the biggest limitations of a semaphore is priority inversion.
The operating system has to keep track of all calls to wait and signal on the semaphore.
Their use is never enforced; it is by convention only.
The wait and signal operations must be executed in the correct order to avoid deadlocks.
Semaphore programming is a complex method, so there is a chance of not achieving mutual
exclusion.
It is also not a practical method for large-scale use, as its use leads to loss of
modularity.
Semaphores are more prone to programmer error and may cause deadlock or violation of
mutual exclusion due to programmer error.
Producer Consumer problem
Let's examine the basic model, that is, sleep and wake. Assume that we have two system
calls: sleep and wake.
The process which calls sleep will get blocked, while the process which calls wake will
get woken up.
There is a popular example called producer consumer problem which is the most
popular problem simulating sleep and wake mechanism.
The concept of sleep and wake is very simple. If the critical section is not empty, then the
process will go to sleep.
It will be woken up by the other process which is currently executing inside the critical
section, so that the waiting process can then get inside the critical section.
In producer consumer problem, let us say there are two processes, one process writes
something while the other process reads that.
The process which is writing something is called producer while the process which is
reading is called consumer.
In order to read and write, both of them are using a buffer.
The code that simulates the sleep and wake mechanism in terms of providing the solution
to producer consumer problem is shown below.
#define N 100                      // maximum slots in buffer
#define TRUE 1
int count = 0;                     // items currently in the buffer

// produce_item(), insert_item(), remove_item(), consume_item(),
// sleep() and wakeup() are assumed primitives of the sleep/wake model.

void producer(void)
{
    int item;
    while (TRUE)
    {
        item = produce_item();     // producer produces an item
        if (count == N)            // if the buffer is full, the producer sleeps
            sleep();
        insert_item(item);         // the item is inserted into the buffer
        count = count + 1;
        if (count == 1)            // wake up the consumer if there is
            wakeup(consumer);      // at least 1 item in the buffer
    }
}

void consumer(void)
{
    int item;
    while (TRUE)
    {
        if (count == 0)            // the consumer sleeps if the buffer is empty
            sleep();
        item = remove_item();
        count = count - 1;
        if (count == N - 1)        // if at least one slot is now free,
            wakeup(producer);      // wake up the producer
        consume_item(item);        // the item is read by the consumer
    }
}
Race Condition
A race condition is a situation that may occur inside a critical section. This happens
when the result of multiple thread execution in critical section differs according to the
order in which the threads execute.
Race conditions in critical sections can be avoided if the critical section is treated as an
atomic instruction.
Also, proper thread synchronization using locks or atomic variables can prevent race
conditions.
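An illustrative pthread sketch of a race condition on a shared counter, and of how a mutex lock removes it; the USE_LOCK toggle is introduced here purely for demonstration:

#include <stdio.h>
#include <pthread.h>

#define USE_LOCK 1                        /* set to 0 to observe the race   */

static long counter = 0;
static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;

static void *worker(void *arg) {
    (void)arg;
    for (int i = 0; i < 1000000; i++) {
        if (USE_LOCK) pthread_mutex_lock(&lock);
        counter++;                        /* the critical section           */
        if (USE_LOCK) pthread_mutex_unlock(&lock);
    }
    return NULL;
}

int main(void) {
    pthread_t t1, t2;
    pthread_create(&t1, NULL, worker, NULL);
    pthread_create(&t2, NULL, worker, NULL);
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);
    /* without the lock the final value is usually less than 2000000 */
    printf("counter = %ld\n", counter);
    return 0;
}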
Critical Section
The critical section is a code segment where the shared variables can be accessed.
Atomic action is required in a critical section i.e. only one process can execute in its critical
section at a time.
All the other processes have to wait to execute in their critical sections.
The critical section is given as follows
do {
    Entry Section
    Critical Section
    Exit Section
    Remainder Section
} while (TRUE);
The exit section handles the exit from the critical section.
It releases the resources and also informs the other processes that critical section is free.
The critical section problem needs a solution to synchronise the different processes.
The solution to the critical section problem must satisfy the following conditions –
Mutual Exclusion
Mutual exclusion implies that only one process can be inside the critical section at any
time.
If any other processes require the critical section, they must wait until it is free.
Progress
Progress means that if a process is not using the critical section, then it should not stop
any other process from accessing it.
In other words, any process can enter a critical section if it is free.
Bounded Waiting
Bounded waiting means that each process must have a limited waiting time.
It should not wait endlessly to access the critical section.
Dining Philosopher Problem Using Semaphores
The Dining Philosopher Problem states that K philosophers are seated around a circular
table with one chopstick between each pair of philosophers.
A philosopher may eat if he can pick up the two chopsticks adjacent to him.
A chopstick may be picked up by either one of its adjacent philosophers, but not by both at once.
This problem involves the allocation of limited resources to a group of processes in a
deadlock-free and starvation-free manner.
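A compact sketch of one common semaphore-based approach, with one semaphore per chopstick initialized to 1; K = 5 is an assumed number of philosophers. To stay deadlock-free, the last philosopher picks up the chopsticks in the opposite order, which breaks the circular wait (other fixes exist as well):

#include <stdio.h>
#include <pthread.h>
#include <semaphore.h>

#define K 5                                  /* number of philosophers (assumed) */

static sem_t chopstick[K];                   /* one semaphore per chopstick      */

static void *philosopher(void *arg) {
    int i = *(int *)arg;
    int left = i, right = (i + 1) % K;

    /* the last philosopher reverses the order, breaking the circular wait */
    int first  = (i == K - 1) ? right : left;
    int second = (i == K - 1) ? left  : right;

    sem_wait(&chopstick[first]);             /* pick up the first chopstick  */
    sem_wait(&chopstick[second]);            /* pick up the second chopstick */
    printf("philosopher %d is eating\n", i);
    sem_post(&chopstick[second]);            /* put the chopsticks down      */
    sem_post(&chopstick[first]);
    return NULL;
}

int main(void) {
    pthread_t t[K];
    int id[K];
    for (int i = 0; i < K; i++) sem_init(&chopstick[i], 0, 1);
    for (int i = 0; i < K; i++) { id[i] = i; pthread_create(&t[i], NULL, philosopher, &id[i]); }
    for (int i = 0; i < K; i++) pthread_join(t[i], NULL);
    return 0;
}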