Operating Systems

2. Process Management
PROCESS CONCEPTS:

The process concept includes the following:


1. Process
2. Process state
3. Process Control Block
4. Threads

Process: A process is a program in execution.


A process is not the same as its program code, but much more than that. A process is an 'active'
entity, as opposed to a program, which is considered a 'passive' entity. Attributes held by a
process include its hardware state, memory, CPU, etc. A process will need certain resources such
as CPU time, memory, files and I/O devices to accomplish its task. These resources are
allocated to the process either when it is created or while it is executing.

The figure below shows the structure of a process in memory:

Process memory is divided into four sections for efficient working:

 The Text section is made up of the compiled program code, read in from non-volatile storage when the program is launched.
 The Data section is made up of the global and static variables, allocated and initialized prior to executing main.
 The Heap is used for dynamic memory allocation, and is managed via calls to new, delete, malloc, free, etc.
 The Stack is used for local variables. Space on the stack is reserved for local variables when they are declared. (See the sketch after this list.)
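
The following minimal C sketch (an illustration, not part of the original notes) prints an address from each of the four sections. The exact values are platform-dependent; the point is that the four regions are distinct.

#include <stdio.h>
#include <stdlib.h>

int global_var = 42;                          /* Data section: initialized global */

int main(void) {
    int local_var = 7;                        /* Stack: local variable */
    int *heap_var = malloc(sizeof *heap_var); /* Heap: dynamic allocation */

    printf("text  (code)  : %p\n", (void *)main);
    printf("data  (global): %p\n", (void *)&global_var);
    printf("heap  (malloc): %p\n", (void *)heap_var);
    printf("stack (local) : %p\n", (void *)&local_var);

    free(heap_var);
    return 0;
}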
Process State:
As a process executes, it changes state. The process state defines the current activity of that
process.
A process may be in one of the following states:
 New: The process is being created.
 Ready: The process is waiting to be assigned to a processor.
 Running: Instructions are being executed.
 Waiting: The process is waiting for some event to occur such as an I/O completion
or reception of a signal.
 Terminated: The process has finished execution.
Note: Only one process can be running on any processor at any instant of time.

New  Ready: The operating system creates a process and prepares it for execution; it then moves the process into the Ready queue.

Ready  Running: When it is time to select a process to run, the operating system selects one of the jobs from the ready queue and moves it from the ready state to the running state.

Running  Terminated: When a process has completed its execution, the operating system terminates it from the running state.

Running  Ready: When the time slice of the process expires, or the processor receives an interrupt signal, the operating system moves the running process to the ready state. For example, while process P1 is being executed by the processor, process P2 generates an interrupt signal. The processor compares the priorities of P1 and P2; if P1's priority is higher, the processor continues with P1. Otherwise the processor switches to P2, and P1 is moved to the ready state.

Running  Waiting: A process is put into the waiting state if it needs an event to occur or requires an I/O device. If the operating system cannot provide the I/O or event immediately, it moves the process to the waiting state.

Waiting  Ready: A process in the blocked state is moved to ready state when the event for
which it has been waiting occurs.
Process Control Block:
Each process is represented in the operating system by a Process Control Block (PCB). It is
also called a Task Control Block.

PCB serves as the repository for any information that may vary from process to process.

Fig. Process Control Block

The PCB contains information related to the process, such as:


 Process state: The state may be new, ready, running, waiting, or terminated.
 Process ID and the parent process ID.
 Program counter: The counter indicates the address of the next instruction to be
executed for this process.

 CPU registers: The registers vary in number and type, depending on the computer
architecture. They include accumulators, index registers, stack pointers and general
purpose registers etc. Along with the program counter, this state information must be
saved when an interrupt occurs, to allow the process to be continued correctly
afterward.
 CPU-scheduling information: This information includes a process priority, pointers to
scheduling queues and any other scheduling parameters.
 Memory-management information: This information includes the base and limit
registers values, the page tables or the segment tables depending on the memory
system used by the operating system.
 Accounting information: This information includes the amount of CPU and real time
used, time limits, account numbers, job or process numbers and so on.

 I/O status information: This information includes the list of I/O devices allocated to the process, a list of open files, and so on. (A simplified sketch of a PCB follows.)
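
As an illustration only, a much-simplified PCB could be declared in C as below. This is a hypothetical sketch of the fields listed above, not any real kernel's layout (Linux's actual PCB is struct task_struct, which is far larger).

enum proc_state { NEW, READY, RUNNING, WAITING, TERMINATED };

struct pcb {
    int pid;                        /* process ID */
    int ppid;                       /* parent process ID */
    enum proc_state state;          /* process state */
    unsigned long program_counter;  /* address of the next instruction */
    unsigned long registers[16];    /* saved CPU registers */
    int priority;                   /* CPU-scheduling information */
    unsigned long base, limit;      /* memory-management registers */
    unsigned long cpu_time_used;    /* accounting information */
    int open_files[16];             /* I/O status: open file descriptors */
    struct pcb *next;               /* link to the next PCB in a queue */
};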

Operations on Process:
We now discuss the two major operations: Process Creation and Process Termination.
Process Creation:
 During its lifetime, a process may create several new processes.
 The creating process is called a parent process, and the new processes are called child processes.
 Each of these new processes may create other processes forming a tree of processes
also known as process hierarchy.
 The operating system identifies processes according to a process identifier (pid).
 A pid is a unique integer number for each process in the system.

The figure below shows the process tree for the Linux OS, giving the name and pid of each process. In Linux, a process is called a task.

 The init process always has a pid of 1. The init process serves as the root parent process
for all user processes.
 Once the system has booted, the init process can also create various user processes,
such as a web or print server, an ssh server etc. kthreadd and sshd are child processes
of init.
 The kthreadd process is responsible for creating additional processes that perform tasks
on behalf of the kernel.
 The sshd process is responsible for managing clients that connect to the system by using
secure shell (ssh).

 When a process creates a child process, that child process will need certain resources
such as CPU time, memory, files, I/O devices to accomplish its task.
 A child process may be able to obtain its resources directly from the operating system
or it may be constrained to a subset of the resources of the parent process.
 The parent may have to partition its resources among its children or it may be able to
share some resources such as memory or files among several of its children.
When a process creates a new process there exist two possibilities for execution:

1. The parent continues to execute concurrently with its children.


2. The parent waits until some or all of its children have terminated.

There are also two address-space possibilities for the new process:

1. The child process is a duplicate of the parent process (i.e., it has the same program and data as the parent).
2. The child process has a new program loaded into it (both possibilities appear in the sketch below).
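
On UNIX-like systems these ideas map onto the fork(), exec(), and wait() system calls. The sketch below assumes a POSIX environment: the child begins as a duplicate of the parent (possibility 1), then loads a new program with execlp() (possibility 2), while the parent waits for it to terminate.

#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>
#include <sys/wait.h>

int main(void) {
    pid_t pid = fork();                 /* create a child process */

    if (pid < 0) {                      /* fork failed */
        perror("fork");
        exit(1);
    } else if (pid == 0) {              /* child: a duplicate of the parent */
        execlp("ls", "ls", "-l", (char *)NULL); /* load a new program */
        perror("execlp");               /* reached only if exec fails */
        exit(1);
    } else {                            /* parent: wait for the child */
        wait(NULL);
        printf("child %d terminated\n", (int)pid);
    }
    return 0;
}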

Process Termination: exit( )

 A process terminates when it finishes executing its final statement and asks the
operating system to delete it by using the exit( ) system call.
 The process may return a status value to its parent, which the parent receives via the wait( ) system call.
 All the resources of the process, including physical and virtual memory, open files and I/O buffers, are deallocated by the operating system.
 A parent may terminate the execution of one of its children for a variety of reasons
such as:
1. The child has exceeded its usage of some of the resources that it has been
allocated.
2. The task assigned to the child is no longer required.
3. The parent is exiting and the operating system does not allow a child to continue if
its parent terminates.

Cascading Termination: If a parent process terminates, either normally or abnormally, then all its children must also be terminated; this is referred to as Cascading Termination. It is normally initiated by the operating system.
Cooperating Processes:
A process is said to be a cooperating process if it can affect or be affected by the other processes executing in the system.
There are several reasons for providing an environment that allows process cooperation:
 Information sharing: Since several users may be interested in the same piece of
information (for instance, a shared file), we must provide an environment to allow
concurrent access to these types of resources.
 Computation speedup: If we want a particular task to run faster, we must break it into
subtasks, each of which will be executing in parallel with the others.
 Modularity: We may want to construct the system in a modular fashion, dividing the
system functions into separate processes.
 Convenience: Even an individual user may have many tasks to work on at one time. For
instance, a user may be editing, printing, and compiling in parallel.

Inter Process Communication (IPC):


Cooperating processes require an inter process communication (IPC) mechanism that will allow them to exchange data and information. There are two fundamental models of inter process communication: shared memory and message passing.

Shared memory model: In the shared-memory model, a region of memory that is shared
by cooperating processes is established. Processes can then exchange information by reading
and writing data to the shared region.

Message-passing model: In the message-passing model, communication takes place by means of messages exchanged between the cooperating processes.

The two communication models are contrasted in the following figure.


Shared-Memory Systems:
Inter process communication using shared memory requires communicating processes to
establish a region of shared memory. To illustrate the concept of cooperating processes, let’s
consider the producer – consumer problem, which is a common paradigm for cooperating
processes.

A producer process produces information that is consumed by a consumer process.


This illustrates inter process communication using shared memory. The following variables
reside in a region of memory shared by the producer and consumer processes.

The code for the producer process is shown in Figure 3.13, and the code for the consumer process is shown in Figure 3.14 (reproduced in sketch form below). The producer process has a local variable next_produced in which the newly produced item is stored. The consumer process has a local variable next_consumed in which the item to be consumed is stored.
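
Since the figures themselves are not reproduced in these notes, the sketch below follows the standard bounded-buffer formulation. The busy-wait loops are kept for brevity; a real implementation would synchronize access to the buffer (e.g., with semaphores).

#define BUFFER_SIZE 10

typedef int item;              /* whatever kind of item is being produced */

/* These variables reside in the region shared by both processes. */
item buffer[BUFFER_SIZE];
int in = 0;                    /* next free slot */
int out = 0;                   /* next full slot */

void producer(void) {
    item next_produced;
    for (;;) {
        next_produced = 0;     /* ... produce an item ... */
        while (((in + 1) % BUFFER_SIZE) == out)
            ;                  /* buffer full: do nothing */
        buffer[in] = next_produced;
        in = (in + 1) % BUFFER_SIZE;
    }
}

void consumer(void) {
    item next_consumed;
    for (;;) {
        while (in == out)
            ;                  /* buffer empty: do nothing */
        next_consumed = buffer[out];
        out = (out + 1) % BUFFER_SIZE;
        /* ... consume the item in next_consumed ... */
    }
}

Note that with this scheme the buffer can hold at most BUFFER_SIZE - 1 items, since in == out must unambiguously mean "empty".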
Message-Passing Systems:
 Message passing provides a mechanism to allow processes to communicate and to
synchronize their actions without sharing the same address space.
 A message-passing facility provides at least two operations: send(message) and receive(message).
 If processes P and Q want to communicate, they must send messages to and receive
messages from each other: a communication link must exist between them.
 There are several methods for logically implementing a link and the send()/receive() operations (a pipe-based sketch follows this discussion):

1. Direct or indirect communication


2. Synchronous or asynchronous communication
3. Automatic or explicit buffering

Under direct communication, each process that wants to communicate must explicitly
name the recipient or sender of the communication.

In this scheme, the send() and receive() primitives are defined as:
1. send(P, message)—Send a message to process P.
2. receive(Q, message)—Receive a message from process Q

With indirect communication, the messages are sent to and received from mailboxes, or ports.
The send() and receive() primitives are defined as follows:
1. send(A, message)—Send a message to mailbox A.
2. receive(A, message)—Receive a message from mailbox A
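
As one concrete illustration, a POSIX pipe provides a simple communication link between a parent and a child process. The sketch below (an assumed POSIX environment, not code from the notes) has the parent send a message that the child receives.

#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <sys/wait.h>

int main(void) {
    int fd[2];
    char buf[64];

    if (pipe(fd) == -1) { perror("pipe"); return 1; }

    if (fork() == 0) {                     /* child: the receiver */
        close(fd[1]);                      /* close the unused write end */
        ssize_t n = read(fd[0], buf, sizeof buf - 1);
        if (n > 0) { buf[n] = '\0'; printf("received: %s\n", buf); }
        close(fd[0]);
        return 0;
    }
    close(fd[0]);                          /* parent: the sender */
    write(fd[1], "hello", strlen("hello"));
    close(fd[1]);
    wait(NULL);                            /* wait for the child to finish */
    return 0;
}

Here read() is blocking: the child waits until a message is available, matching the blocking receive described next.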

Synchronization
Message passing may be either blocking or non-blocking, also known as synchronous and asynchronous.
1. Blocking send: The sending process is blocked until the message is received by the receiving process or by the mailbox.
2. Non-blocking send: The sending process sends the message and resumes operation.
3. Blocking receive: The receiver blocks until a message is available.
4. Non-blocking receive: The receiver retrieves either a valid message or a null.

Buffering:
1. Zero capacity: The queue has a maximum length of zero, so the sender must block until the recipient receives the message.
2. Bounded capacity: The queue has finite length n, so the sender must block if the queue is full.
3. Unbounded capacity: The queue's length is potentially infinite, so the sender never blocks.
THREADS:
 A thread, sometimes called a lightweight process (LWP), is a basic unit of CPU utilization; it comprises a thread ID, a program counter, a register set, and a stack.
 It shares with other threads belonging to the same process its code section, data
section, and other operating-system resources, such as open files and signals.
 A traditional (or heavyweight) process has a single thread of control.
 If a process has multiple threads of control, it can perform more than one
task at a time.

Thread States:
 Born State : A thread is just created.
 Ready state : The thread is waiting for CPU.
 Running : System assigns the processor to the thread.
 Sleep : A sleeping thread becomes ready after the designated sleep time expires.
 Dead : The Execution of the thread finished.

Multithreading:
 A process is divided into a number of smaller tasks; each task is called a Thread.
 Executing a number of threads within a process at the same time is called Multithreading.
 If a program is multithreaded, then even when some portion of it is blocked, the whole program is not blocked.
 The rest of the program continues working if multiple CPUs are available.
 Multithreading therefore gives its best performance when multiple CPUs are available.
 With only a single thread, no performance benefit is achieved no matter how many CPUs are available.
 Process creation is heavy-weight, while thread creation is light-weight.
 In a word processor, for example, typing, formatting, spell checking and saving can each run as a separate thread (see the sketch below).
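
A minimal POSIX threads (pthreads) sketch of this idea, with two tasks of a hypothetical word processor running as threads inside one process (compile with -lpthread):

#include <stdio.h>
#include <pthread.h>

void *task(void *name) {
    /* every thread shares the process's code, data, and open files */
    printf("%s thread running\n", (const char *)name);
    return NULL;
}

int main(void) {
    pthread_t t1, t2;
    pthread_create(&t1, NULL, task, "spell-check");
    pthread_create(&t2, NULL, task, "autosave");
    pthread_join(t1, NULL);     /* wait for both threads to finish */
    pthread_join(t2, NULL);
    return 0;
}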

Types of threads:

User Threads :
 Thread creation, scheduling and management happen in user space, carried out by a thread library.
 User threads are faster to create and manage.
 If a user thread performs a blocking system call, all the other threads in that process are also blocked: the whole process is blocked.

Advantages:
 Thread switching does not require kernel-mode privileges. User-level threads can run on any operating system.
 User level threads are fast to create and manage.

Disadvantages:
 In a typical operating system, most system calls are blocking.
 A multithreaded application cannot take advantage of multiprocessing.

Kernel Threads:
 The kernel creates, schedules and manages these threads.
 These threads are slower to create and manage.
 If one thread in a process is blocked, the whole process need not be blocked.

Advantages
 Kernel can simultaneously schedule multiple threads from the same process on multiple processors.
 If one thread in a process is blocked, the Kernel can schedule another thread of the
same process.
 Kernel routines themselves can be multithreaded.
Multithreading Models:
A relationship must exist between user threads and kernel threads. Three common
ways of establishing such a relationship are:

1. Many-to-One model
2. One-to-One model
3. Many-to-Many model.
Many-to-One model
 The many-to-one model maps many user-level threads to one kernel thread.
 Thread management is done by the thread library in user space, so it is efficient.
 The entire process will block if a thread makes a blocking system call.
 Only one thread can access the kernel at a time, so multiple threads are unable to run in parallel on multicore systems. Hence very few systems continue to use this model, because of its inability to take advantage of multiple processing cores.

One-to-One Model
 The one-to-one model maps each user thread to a kernel thread.
 It provides more concurrency than the many-to-one model by allowing another thread
to run when a thread makes a blocking system call.
 It also allows multiple threads to run in parallel on multiprocessors.
 The only drawback to this model is that creating a user thread requires creating the
corresponding kernel thread.
 Because the overhead of creating kernel threads can burden the performance of an
application, most implementations of this model restrict the number of threads
supported by the system.
 Linux, along with the family of Windows operating systems, implements the one-to-one model.
Many-to-Many Model
The many-to-many model multiplexes many user-level threads to a smaller or equal
number of kernel threads. The number of kernel threads may be specific to either a particular
application or a particular machine.

Fig. Many-to-One Model, One-to-One Model, and Many-to-Many Model

Process Scheduling:
The act of determining which process is in the ready state, and should be moved to the
running state is known as Process Scheduling.
The objective of multiprogramming is to have some process running at all times, to
maximize CPU utilization. The objective of time sharing is to switch the CPU among processes
so frequently that users can interact with each program while it is running.
To meet these objectives, the Process Scheduler selects an available process for
program execution on the CPU.

Process scheduling involves three things:


1. Scheduling Queues
2. Schedulers
3. Context Switch
1. Scheduling Queues:
Several queues are implemented in the operating system, such as the Job Queue, Ready Queue, and Device Queues.
 Job Queue: It consists of all processes in the system. As processes enter the system,
they are put into a job queue.
 Ready Queue: The processes that reside in main memory, ready and waiting to execute, are kept on a list called the Ready Queue. The ready queue is generally
stored as a linked list. A ready-queue header contains pointers to the first and final PCBs
in the list. Each PCB includes a pointer field that points to the next PCB in the ready
queue.

 Device Queue: Each device has its own device queue. It contains the list of processes
waiting for a particular I/O device.
Consider the above Queuing Diagram:
 Two types of queues are present: the Ready Queue and a set of Device Queues. CPU
and I/O are the resources that serve the queues.
 A new process is initially put in the ready queue. It waits there until it is selected for
execution or dispatched.
Once the process is allocated the CPU and is executing, one of several events could occur:
 The process could issue an I/O request and then be placed in an I/O queue.
 The process could create a new child process and wait for the child’s termination.
 The process could be removed forcibly from the CPU, as a result of an interrupt and be
put back in the ready queue.

2. Schedulers:
A process migrates among the various scheduling queues throughout its lifetime. For
scheduling purpose, the operating system must select processes from these queues. The
selection process is carried out by the Scheduler.
Three types of schedulers are used:
1. Long Term Scheduler
2. Short Term Scheduler
3. Medium Term Scheduler
Long Term Scheduler (New to ready state):
 Initially, processes are spooled to a mass-storage device (i.e., hard disk), where they are kept for later execution.
 The long-term scheduler, or job scheduler, selects processes from this pool and loads them into main memory for execution (i.e., from hard disk to main memory).
 The long-term scheduler executes much less frequently; minutes may elapse between the creation of one new process and the next.
 The long-term scheduler controls the degree of multiprogramming (the number of
processes in memory).
The processes can be described as two types:
1. An I/O-bound process is one that spends more of its time doing I/O than doing computations.
2. A CPU-bound process spends more of its time doing computations and generates I/O requests infrequently.

The long-term scheduler selects a good process mix of I/O-bound and CPU-bound processes.
 If all processes are I/O bound, the ready queue will almost always be empty and the CPU will remain idle for long periods, because I/O processing takes a lot of time.
 If all processes are CPU bound, the I/O waiting queue will almost always be empty. I/O
devices will be idle and CPU is busy for most of the time.
 Thus, if the system maintains a combination of CPU-bound and I/O-bound processes, system performance will be increased.
Short Term Scheduler (Ready to Running):
 Short-term scheduler or CPU scheduler selects from among the processes that are
ready to execute and allocates the CPU to one of them. (i.e. a process that resides in
main memory will be taken by CPU for execution).
 The short-term scheduler must select a new process for the CPU frequently.
 The short term scheduler must be very fast because of the short time between
executions of processes.
Medium Term Scheduler:
Medium Term Scheduler does two tasks:

1. Swapping: Medium-term scheduler removes a process from main memory and stores it into
the secondary storage. After some time, the process can be reintroduced into main memory
and its execution can be continued where it left off. This procedure is called Swapping.
2. Medium Term Scheduler moves a process from CPU to I/O waiting queue and I/O queue to
ready queue.

Fig. Addition of Medium term scheduler for queuing diagram


3. Context Switching:
 Switching the CPU to another process requires saving the state of the old process
and loading the saved state for the new process. This task is known as a Context
Switch.
 The context is represented in the PCB of the process. It includes the value of the CPU
registers, the process state and memory-management information.
 When a context switch occurs, the kernel saves the context of the old process in its PCB
and loads the saved context of the new process scheduled to run.
 Context-switch time is pure overhead, because the system does no useful work while switching. Context-switch time may be a few milliseconds.

CPU SCHEDULING:
 CPU scheduling is the basis of Multi-programmed operating systems. By switching the
CPU among processes, the operating system can make the computer more productive.
 In a single-processor system, only one process can run at a time. Others must wait until the CPU is free and can be rescheduled.
 Without scheduling, the CPU would sit idle waiting for a process's I/O operation to complete; only when the I/O operation completed would the CPU resume executing the process. A lot of CPU time is wasted this way.
 The objective of multiprogramming is to have some process running at all times to
maximize CPU utilization.
 When several processes are in main memory and one process is waiting for I/O, the operating system takes the CPU away from that process and gives it to another process. Hence there is no wastage of CPU time.

Concepts of CPU Scheduling:


1. CPU–I/O Burst Cycle
2. CPU Scheduler
3. Pre-emptive Scheduling
4. Dispatcher
1. CPU–I/O Burst Cycle:
 Process execution consists of a cycle of CPU execution and I/O wait.
 Process execution begins with a CPU burst. That is followed by an I/O burst.
Processes alternate between these two states.
 The final CPU burst ends with a system request to terminate execution.
 Hence the first and last bursts of a process's execution must be CPU bursts.
2. CPU Scheduler:
Whenever the CPU becomes idle, the operating system must select one of the processes in
the ready queue to be executed. The selection process is carried out by the Short-Term
Scheduler or CPU scheduler.

3. Pre-emptive Scheduling:
In preemptive scheduling, the CPU is allocated to the processes for a limited time whereas,
in Non-preemptive scheduling, the CPU is allocated to the process till it terminates or switches
to the waiting state.

CPU-scheduling decisions may take place under the following four cases:
1. When a process switches from the running state to the waiting state.
Example: as the result of an I/O request or an invocation of wait( ) for the termination
of a child process.
2. When a process switches from the running state to the ready state.
Example: when an interrupt occurs
3. When a process switches from the waiting state to the ready state.
Example: at completion of I/O.
4. When a process terminates.
Situations 2 and 3 are considered pre-emptive scheduling situations; when scheduling takes place only under situations 1 and 4, the scheme is non-preemptive.

4. Dispatcher:
The dispatcher is the module that gives control of the CPU to the process selected by the
short-term scheduler. Dispatcher function involves:
1. Switching context
2. Switching to user mode
3. Jumping to the proper location in the user program to restart that program.
The dispatcher should be as fast as possible, since it is invoked during every process switch. The
time it takes for the dispatcher to stop one process and start another process running is known
as the Dispatch Latency.
SCHEDULING CRITERIA:
Different CPU-scheduling algorithms have different properties and the choice of a
particular algorithm may favor one class of processes over another.

Many criteria have been suggested for comparing CPU-scheduling algorithms:


 CPU utilization: CPU must be kept as busy as possible. CPU utilization can range from 0
to 100 percent. In a real system, it should range from 40 to 90 percent.
 Throughput: The number of processes that are completed per time unit.
 Turn-Around Time: It is the interval from the time of submission of a process to the
time of completion. Turnaround time is the sum of the periods spent waiting to get into
memory, waiting in the ready queue, executing on the CPU and doing I/O.

Turn Around Time = Exit Time – Arrival Time

 Waiting time: It is the amount of time that a process spends waiting in the ready queue.

Waiting Time = Turn Around Time – Burst Time

 Response time: It is the time from the submission of a request until the first response is produced. Interactive systems use response time as their measure.

Note: It is desirable to maximize CPU utilization and Throughput and to minimize Turn- Around
Time, Waiting time and Response time.

CPU SCHEDULING ALGORITHMS:


CPU scheduling deals with the problem of deciding which of the processes in the ready queue is
to be allocated the CPU.

1.First-Come, First-Served Scheduling (FCFS):


In FCFS, the process that requests the CPU first is allocated the CPU first.
 FCFS scheduling algorithm is Non-preemptive.
 Once the CPU has been allocated to a process, that process keeps the CPU until it releases it.
 FCFS can be implemented by using FIFO queues.
 When a process enters the ready queue, its PCB is linked onto the tail of the queue.
 When the CPU is free, it is allocated to the process at the head of the queue.
 The running process is then removed from the queue.
Example 1: Consider the following set of processes that arrive at time 0. The processes arrive in the order P1, P2, P3, with the length of the CPU burst given in milliseconds:

Process Burst Time

P1 24
P2 3
P3 3

 Gantt Chart for FCFS:

P1 P2 P3

0 24 27 30

The average waiting time under the FCFS policy is often quite long.
 The waiting time is 0 milliseconds for process P1, 24 milliseconds for process P2 and 27
milliseconds for process P3.
 Thus, the average waiting time is (0 + 24 + 27)/3 = 17 milliseconds.

Convoy Effect in FCFS:


Convoy effect means that when a big process is executing on the CPU, all the smaller processes must wait until the big process completes its execution. This affects the performance of the system.

Example 2: Consider the same example as above, but with the processes arriving in the order P2, P3, P1.

P2 P3 P1

0 3 6 30

With the processes arriving in the order P2, P3, P1, the average waiting time is (6 + 0 + 3)/3 = 3 milliseconds, whereas with the order P1, P2, P3 the average waiting time is 17 milliseconds.

Disadvantage of FCFS:
The FCFS scheduling algorithm is non-preemptive; it allows one process to keep the CPU for a long time. Hence it is not suitable for time-sharing systems.
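
For illustration, the short C sketch below computes the FCFS waiting times for Example 1. It assumes all processes arrive at time 0, so each process's waiting time is simply the sum of the bursts ahead of it.

#include <stdio.h>

int main(void) {
    int burst[] = {24, 3, 3};             /* P1, P2, P3 from Example 1 */
    int n = 3, start = 0, total_wait = 0;

    for (int i = 0; i < n; i++) {
        printf("P%d waits %d ms\n", i + 1, start); /* waiting time = start time */
        total_wait += start;
        start += burst[i];                /* next process starts after this one */
    }
    printf("average waiting time = %.2f ms\n", (double)total_wait / n);
    return 0;
}

Run on the order P1, P2, P3 it prints waits of 0, 24, and 27 ms and an average of 17.00 ms, matching the calculation above.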
Shortest-Job-First Scheduling (SJF)
The SJF algorithm is defined as follows: “when the CPU is available, it is assigned to the process that has the smallest next CPU burst”. If the next CPU bursts of two processes are the same, FCFS scheduling is used to break the tie between them.
SJF is also called as Shortest-Next CPU-Burst algorithm, because scheduling depends on the
length of the next CPU burst of a process, rather than its total length.

Example: Consider the following processes and CPU bursts in milliseconds:

Process Burst Time

P1 6
P2 8
P3 7
P4 3

Gantt Chart of the SJF algorithm:

P4 P1 P3 P2

0 3 9 16 24

The waiting time is 3 milliseconds for process P1, 16 milliseconds for process P2, 9 milliseconds for process P3, and 0 milliseconds for process P4.

Thus, the average waiting time is (3 + 16 + 9 + 0)/4 = 7 milliseconds.

Shortest Remaining Time First Scheduling (SRTF)

 SRTF is the pre-emptive version of the SJF algorithm.
 A new process may arrive at the ready queue while a previous process is still executing.
 The next CPU burst of the newly arrived process may be shorter than what is left of the currently executing process.
 If so, SRTF preempts the currently executing process and executes the shortest job.

Consider the four processes with arrival times and burst times in milliseconds:

Process Arrival Time Burst Time

P1 0 8
P2 1 4
P3 2 9
P4 3 5
Gantt Chart for SRTF:

P1 P2 P4 P1 P3

0 1 5 10 17 26

 Process P1 is started at time 0, since it is the only process in the queue.


 Process P2 arrives at time 1. The remaining time for process P1 (7 milliseconds) is larger than the time required by process P2 (4 milliseconds), so process P1 is preempted and process P2 is scheduled.
 The average waiting time = [(10 − 1) + (1 − 1) + (17 − 2) + (5 − 3)]/4 = 26/4 = 6.5 milliseconds.

Round-Robin Scheduling (RR):


Round-Robin (RR) scheduling algorithm is designed especially for Timesharing systems.
 RR is similar to FCFS scheduling, but preemption is added to enable the system to switch
between processes.
 A small unit of time called a Time Quantum or Time Slice is defined. A time quantum is
generally from 10 to 100 milliseconds in length.
 The ready queue is treated as a Circular queue. New processes are added to the tail of
the ready queue.
 The CPU scheduler goes around the ready queue by allocating the CPU to each process
for a time interval of up to 1 time quantum and dispatches the process.
 If a process's CPU burst exceeds 1 time quantum, that process is preempted and put back in the ready queue.

In RR scheduling one of two things will then happen.


1. The process may have a CPU burst of less than 1 time quantum. The process itself will
release the CPU voluntarily. The scheduler will then proceed to the next process in the
ready queue.
2. If the CPU burst of the currently running process is longer than 1 time quantum, the
timer will go off and will cause an interrupt to the operating system. A context switch
will be executed and the process will be put at the tail of the ready queue. The CPU
scheduler will then select the next process in the ready queue.

Example:
Consider the following set of processes that arrive at time 0 in the order P1, P2, P3, with Time Quantum = 4.

Process Burst Time


P1 24
P2 3
P3 3
Gantt chart of Round Robin Scheduling:

P1 P2 P3 P1 P1 P1 P1 P1

0 4 7 10 14 18 22 26 30

If we use a time quantum of 4 milliseconds, then process P1 gets the first 4 milliseconds.
 Since it requires another 20 milliseconds, it is preempted after the first time quantum
and the CPU is given to the next process in the queue, process P2.
 The CPU burst of process P2 is 3, so it does not need 4 milliseconds; it quits before its time quantum expires. The CPU is then given to the next process, P3.
 Once each process has received 1 time quantum, the CPU is returned to process P1 for
an additional time quantum.

The average waiting time under the RR policy is often long.


 P1 waits for 6 milliseconds (10 − 4), P2 waits for 4 milliseconds, and P3 waits for 7 milliseconds. Thus, the average waiting time is 17/3 = 5.66 milliseconds (reproduced by the sketch below).
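
The following C sketch simulates this Round-Robin run (quantum = 4, all processes arriving at time 0) and reproduces the waiting times above. It is a simplified illustration: because everything arrives at time 0, a plain circular scan of the processes matches the real requeueing behaviour.

#include <stdio.h>

int main(void) {
    int burst[]     = {24, 3, 3};      /* P1, P2, P3 */
    int remaining[] = {24, 3, 3};      /* burst time left per process */
    int finish[3]   = {0};
    int n = 3, quantum = 4, time = 0, left = n;

    while (left > 0) {
        for (int i = 0; i < n; i++) {  /* walk the circular ready queue */
            if (remaining[i] == 0) continue;
            int slice = remaining[i] < quantum ? remaining[i] : quantum;
            time += slice;             /* run for up to one quantum */
            remaining[i] -= slice;
            if (remaining[i] == 0) { finish[i] = time; left--; }
        }
    }
    for (int i = 0; i < n; i++)        /* waiting = turnaround - burst */
        printf("P%d waits %d ms\n", i + 1, finish[i] - burst[i]);
    return 0;
}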

Priority Scheduling:
In this, each process is associated with a priority and CPU is allocated to a process
which is having higher priority. If two processes priorities are equal, the algorithm selects
the first arrived process out of these two processes ( i.e., FCFS ). The priorities are
ranging from 0 to a maximum number. The lowest number represents highest priority.

Ex: Consider the following set of processes, assumed to have arrived at time 0 in the order P1, P2, …, P5, with the length of the CPU-burst time given in milliseconds:
Process Burst Time(in m.sec) Priority
P1 10 2
P2 1 0
P3 2 4
P4 1 5
P5 5 1
Gantt Chart:

P2 P5 P1 P3 P4

0 1 6 16 18 19
Waiting time for P1 = 6 m.sec
Waiting time for P2 = 0 m.sec
Waiting time for P3 = 16 m.sec
Waiting time for P4 = 18 m.sec
Waiting time for P5 = 1 m.sec
Average waiting time: (6 + 0 + 16 + 18 + 1)/5 = 41/5 = 8.2 m.sec
Turnaround time for P1 = 16 m.sec
Turnaround time for P2 = 1 m.sec
Turnaround time for P3 = 18 m.sec
Turnaround time for P4 = 19 m.sec
Turnaround time for P5 = 6 m.sec
Average Turnaround time = (16 + 1 + 18 + 19 + 6)/5 = 60/5 = 12 m.sec

Priority Scheduling may be either preemptive or nonpreemptive. The above example is nonpreemptive. In the preemptive version, a process may come into the ready queue with some priority while another process is running. The operating system compares the priority of the newly arrived process with the priority of the currently executing process. If the new process has a higher priority, the currently executing process is preempted and placed in the ready queue, and the CPU is allotted to the newly arrived process.

Example for Preemptive Priority Scheduling:

Process Burst Time(in m.sec) Priority Arrival Time


P1 10 3 2
P2 1 1 0
P3 2 3 3
P4 1 2 4
P5 1 2 1

P2 P5 P1 P4 P1 P3

0 1 2 4 5 13 15
Waiting time for P1 = 1 m.sec
Waiting time for P2 = 0 m.sec
Waiting time for P3 = 10 m.sec
Waiting time for P4 = 0 m.sec
Waiting time for P5 = 0 m.sec
Average waiting time: (1 + 0 + 10 + 0 + 0)/5 = 11/5 = 2.2 m.sec
Turnaround time for P1 = 11 m.sec
Turnaround time for P2 = 1 m.sec
Turnaround time for P3 = 12 m.sec
Turnaround time for P4 = 1 m.sec
Turnaround time for P5 = 1 m.sec
Average Turnaround time = (11 + 1 + 12 + 1 + 1)/5 = 26/5 = 5.2 m.sec
Starvation: A process that remains in the ready queue waiting for the CPU for a long period of time is said to be Indefinitely Blocked, or starved. In the priority scheduling algorithm, a process with the lowest priority may never be served; it sits in the ready queue waiting for the CPU indefinitely.
Aging: Aging is a solution to the starvation problem. This technique gradually increases the
priority of processes that wait in the system for a long period of time.

Multilevel Queue Scheduling:


This scheduling algorithm is used when processes are easily categorized into different groups, such as foreground (interactive) processes and background (batch) processes, which have different response-time and other requirements. We divide our processes into different categories and place them in different queues. Each queue then contains processes of a similar type and has its own scheduling algorithm, and all the processes in it are executed according to that algorithm. Each queue also has its own priority, and the queues are served according to priority.
A lower-priority queue is executed only if all the higher-priority queues are empty. If, while a process from a lower-priority queue is executing, a new process belonging to a higher-priority queue enters the system, preemption occurs: the currently executing lower-priority process is preempted, and the new process in the higher-priority queue gets a chance.
An alternative to fixed priority among the queues is time slicing between the queues: a time slice is defined, and each queue is executed for its time slice and then preempted. The following diagram shows multilevel queue scheduling:
Multilevel Feedback Queue Scheduling:
This scheme allows a process to move between queues. The idea is to separate processes with different CPU-burst characteristics. If a process uses too much CPU time, it is moved to a lower-priority queue. This scheme leaves I/O-bound and interactive processes in the higher-priority queues. Similarly, a process that waits too long in a lower-priority queue may be moved to a higher-priority queue. This form of aging prevents starvation.
For example, consider a multilevel feedback queue scheduler with three queues,
numbered from 0 to 2.

The scheduler first executes all processes in queue 0. Only when queue 0 is empty will it execute processes in queue 1. Similarly, processes in queue 2 will only be executed if queues 0 and 1 are empty. A process that arrives for queue 1 will preempt a process in queue 2; a process in queue 1 will in turn be preempted by a process arriving for queue 0.

Scheduling

 A new job enters queue Q0, which is served FCFS. When it gains the CPU, the job receives 8 milliseconds. If it does not finish in 8 milliseconds, the job is moved to queue Q1.
 At Q1 the job is again served FCFS and receives 16 additional milliseconds. If it still does not complete, it is preempted and moved to queue Q2.

A multilevel-feedback-queue scheduler is defined by the following parameters:


 Number of queues
 Scheduling algorithms for each queue
 Method used to determine when to upgrade a process
 Method used to determine which queue a process will enter when that process needs service (see the sketch below)
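
As a sketch of how these parameters might be represented, the hypothetical C configuration below encodes the three-queue example above; the names and layout are illustrative assumptions, not any real scheduler's interface.

struct mlfq_level {
    int quantum_ms;   /* CPU time allowed before demotion; 0 = run to completion */
};

struct mlfq_config {
    int num_queues;               /* number of queues */
    struct mlfq_level level[3];   /* per-queue scheduling parameters */
    /* upgrade/demotion rules decide when a process moves between queues */
};

static const struct mlfq_config example = {
    .num_queues = 3,
    .level = {
        { 8 },   /* Q0: 8 ms, then demote to Q1 */
        { 16 },  /* Q1: 16 additional ms, then demote to Q2 */
        { 0 },   /* Q2: FCFS, runs only when Q0 and Q1 are empty */
    },
};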
