Scheduling Algorithms and Algorithm Evaluation


 Maximum CPU utilization is obtained with multiprogramming.

 Scheduling :- A method of assigning the CPU to a process.

 Scheduling is the basis of a multiprogrammed OS.


• Maximum CPU utilization obtained with multiprogramming
• CPU–I/O Burst Cycle – Process execution consists of a cycle of CPU execution and I/O wait
• CPU burst followed by I/O burst
• CPU burst distribution is of main concern

CPU burst

CPU burst: the amount of time the process uses the processor before it is no longer ready.

Types of CPU bursts:
long bursts -- the process is CPU bound
short bursts -- the process is I/O bound
 Dispatcher :- A module that selects a process and assigns the CPU to it.

 It involves 2 steps :-
 Switching context.
 Jumping to the proper location in the program to restart the program.

 Dispatch Latency :- The time it takes for the dispatcher to stop one process and start another running.
 Non-Preemptive Scheduling
Once a process is allocated the CPU, it does not leave the CPU unless :-
 It has to wait for an I/O request.
 It terminates.

 Preemptive Scheduling
The OS can force (preempt) a process off the CPU at any time, for example :-
 To allocate the CPU to another, higher-priority process.
 Due to the end of its time slice.
Types of schedulers:
1. long-term scheduler:
• selects process and loads it into memory for execution
• decides which process to start based on order and priority
• not used in timesharing systems
2. medium-term scheduler:
• schedule processes based on resources they require (memory, I/O)
• suspend processes for which adequate resources are not currently
available
• commonly, main memory is the limiting resource and the memory
manager acts as the medium term scheduler
3. short-term scheduler (CPU scheduler):
• shares the processor among the ready (runnable) processes
• crucial that the short-term scheduler be very fast -- a fast decision is more
important than an excellent decision
• if a process requires a resource (or input) that it does not have, it is
removed from the ready list (and enters the WAITING state)
• uses a data structure called a ready list to identify ready processes
• started in response to a clock interrupt or when a process is suspended or
exits
CPU-scheduling decisions may take place when :-

1. A process switches from the running state to the waiting state (due to an I/O request).

2. A process switches from the running state to the ready state (due to the end of its time slice).

3. A process switches from the waiting state to the ready state (at completion of I/O).

4. A process terminates.


 CPU Utilization – Keep the CPU as busy as possible.

 Throughput – Number of processes that complete their execution per time unit.

 Turnaround Time – Amount of time to execute a particular process.

 Waiting Time – Amount of time a process has been waiting in the ready queue.

 Response Time – Amount of time from when a request was submitted until the first response is produced, not the time it takes to output that response (for time-sharing environments). A small sketch of how these measures are computed follows.
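The sketch below shows how turnaround and waiting time fall out of these definitions, assuming the arrival time, total CPU burst, and completion time of a process are known; the function names are illustrative and not from the slides.

```python
# Minimal sketch (illustrative names, not from the slides): turnaround and
# waiting time for a single process, given its arrival time, total CPU burst,
# and completion time.

def turnaround_time(arrival, completion):
    # Turnaround time = time from submission to completion.
    return completion - arrival

def waiting_time(arrival, burst, completion):
    # Waiting time = turnaround time minus the CPU time actually used.
    return turnaround_time(arrival, completion) - burst

# Example: a process arriving at t = 0 that needs 24 ms of CPU and finishes
# at t = 30 spent 6 ms waiting in the ready queue.
print(turnaround_time(0, 30))      # 30
print(waiting_time(0, 24, 30))     # 6
```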
First-Come-First-Served Scheduling
Round-Robin Scheduling
Priority Scheduling
Shortest-Job-First Scheduling
First-Come-First-Served (FCFS) Scheduling

Simplest algorithm. Non-preemptive.
Processes are assigned the CPU in the order they request it. A single queue of ready processes is kept, with a FIFO queue structure.
When a process joins the ready queue, it is linked to the tail of the FIFO queue.
When the CPU is idle, the process at the head of the FIFO queue is allocated the CPU and deleted from the queue.
Process    Burst Time (ms)
P1         24
P2         3
P3         3

Chart 1:
| P1                          | P2  | P3  |
0                             24    27    30

Waiting time: P1 = 0; P2 = 24; P3 = 27
Average waiting time = (0 + 24 + 27) / 3 = 17 ms
Average turnaround time = (24 + 27 + 30) / 3 = 27 ms
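As a cross-check, a minimal FCFS sketch for this example (P1 = 24 ms, P2 = 3 ms, P3 = 3 ms, all arriving at time 0) reproduces the averages above; the variable names are illustrative.

```python
# Minimal FCFS sketch for the example above: processes run in FIFO order,
# so each one waits for the total burst time of everything ahead of it.

bursts = [("P1", 24), ("P2", 3), ("P3", 3)]   # already in arrival (FIFO) order

clock = 0
waiting, turnaround = [], []
for name, burst in bursts:
    waiting.append(clock)        # time spent in the ready queue
    clock += burst
    turnaround.append(clock)     # completion time (all arrivals are at t = 0)

print(sum(waiting) / len(waiting))         # 17.0 ms average waiting time
print(sum(turnaround) / len(turnaround))   # 27.0 ms average turnaround time
```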
ADVANTAGES
Easy to understand.
Easy to program.
A single queue keeps track of all ready processes.
Picking a process to run just requires removing one from the head of the queue.
Adding a new or unblocked process just requires attaching it to the tail of the queue.
DISADVANTAGES
The average waiting time is often quite long.
The average waiting time varies widely if the CPU burst times vary greatly.
Small processes wait behind one big process.
Not suited for time-sharing systems.
Round-Robin (RR) Scheduling

Designed for time-sharing systems. Preemptive.
Each process is assigned a time interval, called the quantum (time slice).
The CPU scheduler allocates the CPU to each process in the ready queue for one time slice at a time.
A FIFO queue structure is followed.
A process allocated the CPU may have a current CPU burst that is :-
1. equal to the time slice
2. smaller than the time slice
3. greater than the time slice

In the first two cases, the process releases the CPU on its own.
In the third case, the current process is preempted when the time slice expires.
Process    Burst Time (ms)
P1         24
P2         3
P3         3

Duration of time slice = 4 ms

| P1  | P2  | P3  | P1  | P1  | P1  | P1  | P1  |
0     4     7     10    14    18    22    26    30

Waiting time: P1 = 0 + (10 – 4) = 6; P2 = 4; P3 = 7
Average waiting time = (6 + 4 + 7) / 3 = 17/3 = 5.66 ms
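A minimal round-robin sketch for this example (same three processes, 4 ms time slice) reproduces those waiting times; the names are illustrative.

```python
from collections import deque

# Minimal round-robin sketch for the example above (P1 = 24, P2 = 3, P3 = 3,
# all at t = 0, time slice = 4 ms). A preempted process goes back to the tail.

quantum = 4
bursts = {"P1": 24, "P2": 3, "P3": 3}
remaining = dict(bursts)
ready = deque(["P1", "P2", "P3"])
finish = {}
clock = 0

while ready:
    name = ready.popleft()
    run = min(quantum, remaining[name])
    clock += run
    remaining[name] -= run
    if remaining[name] == 0:
        finish[name] = clock       # this process is done
    else:
        ready.append(name)         # preempted: rejoin at the tail

# Waiting time = completion time - burst time (all arrivals are at t = 0).
waits = {p: finish[p] - bursts[p] for p in bursts}
print(waits)                          # {'P1': 6, 'P2': 4, 'P3': 7}
print(sum(waits.values()) / 3)        # 5.666... ms
```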
ADVANTAGES
Simple and easy to implement.
Each process gets an equal chance to execute.
All processes are handled without priority.
Starvation free.

DISADVANTAGES
Performance depends on the length of the time slice.
If the time slice is indefinitely large, it behaves the same as FCFS.
If the time slice is very small, performance deteriorates due to frequent context switching.
Priority Scheduling

► A priority number (integer) is associated with each process.

► Smallest integer ≡ Highest priority.

► The CPU is allocated to the process with the highest priority.

► Can be preemptive or non-preemptive.

► Equal-priority processes are scheduled FCFS.

Process    Priority    Burst Time (ms)
P1         3           10
P2         1           1
P3         3           2
P4         4           1
P5         2           5

Chart 1:
| P2 | P5      | P1              | P3  | P4 |
0    1         6                 16    18   19

Average waiting time = (6 + 0 + 16 + 18 + 1) / 5 = 41/5 = 8.2 ms
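A minimal non-preemptive priority sketch for this example (all processes arriving at time 0) reproduces the 8.2 ms average; the names are illustrative.

```python
# Minimal non-preemptive priority sketch for the example above.
# Smaller priority number = higher priority; all processes arrive at t = 0.

procs = [("P1", 3, 10), ("P2", 1, 1), ("P3", 3, 2), ("P4", 4, 1), ("P5", 2, 5)]

# Sort by priority; Python's sort is stable, so equal priorities keep FCFS order.
order = sorted(procs, key=lambda p: p[1])

clock = 0
waits = {}
for name, priority, burst in order:
    waits[name] = clock            # everything run so far is this process's wait
    clock += burst

print(waits)                              # {'P2': 0, 'P5': 1, 'P1': 6, 'P3': 16, 'P4': 18}
print(sum(waits.values()) / len(waits))   # 8.2 ms
```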
DISADVANTAGES
► If the system eventually crashes, all low-priority processes that never ran are lost.
► Indefinite blocking or starvation of low-priority processes.
► Aging is the usual remedy :- as waiting time increases, the priority of the process is gradually increased.

ADVANTAGES
► Simplicity.
► Suitable for applications with varying time and resource requirements.
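A minimal aging sketch (an illustrative policy and numbers, not prescribed by the slides): the longer a process has waited, the smaller (i.e. more urgent) its priority number becomes, so it cannot starve forever.

```python
# Minimal aging sketch (illustrative policy and numbers, not from the slides):
# smaller number = higher priority, and the priority number is reduced by one
# step for every second a process has been waiting.

def aged_priority(priority, waited_seconds, step=1, highest=0):
    # Never go past the highest possible priority (0 here).
    return max(highest, priority - step * waited_seconds)

print(aged_priority(127, 10))    # 117 -- after 10 s in the ready queue
print(aged_priority(127, 200))   # 0   -- eventually reaches the top priority
```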
Shortest-Job-First (SJF) Scheduling

► The length of the next CPU burst of each process is considered.

► The process with the smallest CPU burst is executed first.

► In case of a tie between processes, FCFS is used.

► SJF is optimal :- it gives the minimum average waiting time for a given set of processes.
Process    Arrival Time    Burst Time
P1         0.0             6
P2         2.0             8
P3         4.0             7
P4         5.0             3

SJF scheduling chart (arrival times ignored; all processes treated as available at time 0):

| P4   | P1      | P3        | P2         |
0      3         9           16           24

Average waiting time = (3 + 16 + 9 + 0) / 4 = 7 ms


Process    Arrival Time (ms)    Burst Time (ms)
P1         0                    8
P2         1                    4
P3         2                    9
P4         3                    5

 SJF preemptive scheduling:

Chart 1:
| P1 | P2    | P4     | P1        | P3          |
0    1       5        10          17            26

Average waiting time = [(10 – 1) + (1 – 1) + (17 – 2) + (5 – 3)] / 4 = 26/4 = 6.5 ms

 SJF non-preemptive scheduling:

Chart 2:
| P1         | P2    | P4     | P3          |
0            8       12       17            26

Average waiting time = [0 + (8 – 1) + (17 – 2) + (12 – 3)] / 4 = 31/4 = 7.75 ms
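A minimal preemptive-SJF sketch for this example steps the clock one millisecond at a time and always runs the arrived process with the least remaining burst, reproducing Chart 1's 6.5 ms average; the names are illustrative.

```python
# Minimal preemptive SJF sketch for the example above: at each millisecond the
# arrived, unfinished process with the least remaining burst gets the CPU.
# (This workload keeps the CPU busy from t = 0, so the ready list is never empty.)

procs = {"P1": (0, 8), "P2": (1, 4), "P3": (2, 9), "P4": (3, 5)}   # name: (arrival, burst)
remaining = {name: burst for name, (_, burst) in procs.items()}
finish = {}
clock = 0

while remaining:
    ready = [p for p in remaining if procs[p][0] <= clock]
    current = min(ready, key=lambda p: remaining[p])   # shortest remaining time
    remaining[current] -= 1
    clock += 1
    if remaining[current] == 0:
        finish[current] = clock
        del remaining[current]

# Waiting time = completion - arrival - burst.
waits = {p: finish[p] - procs[p][0] - procs[p][1] for p in procs}
print(waits)                              # {'P1': 9, 'P2': 0, 'P3': 15, 'P4': 2}
print(sum(waits.values()) / len(waits))   # 6.5 ms
```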
ADVANTAGES
► Produces the minimum average turnaround time.
► Reduces the average waiting time.

DISADVANTAGES
► The exact length of the next CPU burst is not known in advance.
► Some risk of starvation for longer processes.
Multilevel Queue

 The ready queue is partitioned into separate queues, e.g.:
• foreground (interactive)
• background (batch)
 Processes are permanently assigned to a given queue.
 Each queue has its own scheduling algorithm:
• foreground – RR
• background – FCFS
 Scheduling must also be done between the queues:
• Fixed-priority scheduling; i.e., serve all from foreground, then from background. Possibility of starvation (see the sketch below).
• Time slice – each queue gets a certain amount of CPU time which it can schedule amongst its processes; e.g., 80% to foreground in RR and 20% to background in FCFS.
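A minimal sketch of the fixed-priority case between two queues (illustrative names, not from the slides): the background queue is served only while the foreground queue is empty, which is exactly why background processes can starve.

```python
from collections import deque

# Minimal sketch of fixed-priority scheduling between two queues (illustrative):
# background (batch) work runs only while the foreground (interactive) queue is
# empty, so a steady stream of interactive processes can starve the batch queue.

foreground = deque()   # interactive processes, scheduled RR among themselves
background = deque()   # batch processes, scheduled FCFS among themselves

def pick_next():
    if foreground:
        return foreground.popleft()
    if background:
        return background.popleft()
    return None        # nothing to run: CPU idle

# Example: the batch job is only picked once both interactive jobs are gone.
foreground.extend([("edit", 5), ("shell", 2)])
background.append(("batch-report", 40))
print(pick_next())   # ('edit', 5)
print(pick_next())   # ('shell', 2)
print(pick_next())   # ('batch-report', 40)
```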
Multilevel Feedback Queue

 A process can move between the various queues; aging can be implemented this way.
 A multilevel-feedback-queue scheduler is defined by the following parameters:
• number of queues
• scheduling algorithms for each queue
• method used to determine when to upgrade a process
• method used to determine when to demote a process
• method used to determine which queue a process will enter when that process needs service
Example of Multilevel Feedback Queue

 Three queues:
• Q0 – RR with time quantum 8 milliseconds
• Q1 – RR with time quantum 16 milliseconds
• Q2 – FCFS

 Scheduling (a sketch follows this example):
• A new job enters queue Q0, which is served FCFS.
 When it gains the CPU, the job receives 8 milliseconds.
 If it does not finish in 8 milliseconds, the job is moved to queue Q1.
• At Q1 the job is again served FCFS and receives 16 additional milliseconds.
 If it still does not complete, it is preempted and moved to queue Q2.
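A minimal sketch of this three-queue scheme (illustrative code, not from the slides): a job enters Q0, gets an 8 ms slice, drops to Q1 if unfinished, gets 16 more ms there, and finally runs to completion from Q2.

```python
from collections import deque

# Minimal sketch of the three-queue example: Q0 (RR, 8 ms), Q1 (RR, 16 ms),
# Q2 (FCFS, runs to completion). An unfinished job is demoted one level.

queues = [deque(), deque(), deque()]      # Q0, Q1, Q2, highest priority first
quantum = [8, 16, None]                   # None = FCFS, no time slice

def run_one_step():
    for level, q in enumerate(queues):    # always serve the highest non-empty queue
        if q:
            name, remaining = q.popleft()
            slice_ms = remaining if quantum[level] is None else min(quantum[level], remaining)
            remaining -= slice_ms
            if remaining > 0:             # did not finish: demote (or stay in Q2)
                queues[min(level + 1, 2)].append((name, remaining))
            return name, slice_ms
    return None                           # all queues empty

# Example: a 30 ms job gets 8 ms in Q0, 16 ms in Q1, and its last 6 ms in Q2.
queues[0].append(("job", 30))
print(run_one_step())   # ('job', 8)
print(run_one_step())   # ('job', 16)
print(run_one_step())   # ('job', 6)
```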
Comparison of Scheduling Algorithms
Scheduling algorithm          CPU overhead    Throughput    Turnaround time    Response time
First In First Out            Low             Low           High               Low
Shortest Job First            Medium          High          Medium             Medium
Priority-based scheduling     Medium          Low           High               High
Round-robin scheduling        High            Medium        Medium             High
Multilevel queue scheduling   High            High          Medium             Medium
Multiprocessor Scheduling

On a multiprocessor, scheduling is two-dimensional: the scheduler has to decide which process to run and which CPU to run it on. This extra dimension greatly complicates scheduling on multiprocessors.
Algorithm Evaluation
• How do we select a CPU-scheduling algorithm for an OS?
• Determine the criteria, then evaluate the algorithms.
• Deterministic modeling
 A type of analytic evaluation
 Takes a particular predetermined workload and defines the performance of each algorithm for that workload
• Consider 5 processes arriving at time 0 (the workload used in the evaluation below).
Deterministic Evaluation
 For each algorithm, calculate the minimum average waiting time.
 Simple and fast, but requires exact numbers for input and applies only to those inputs.
 FCFS gives an average waiting time of 28 ms.
 Non-preemptive SJF gives 13 ms.
 RR gives 23 ms.
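The original slide's process table and Gantt charts are not reproduced above, so the sketch below assumes the usual textbook workload (burst times 10, 29, 3, 7, 12 ms, all arriving at time 0, RR quantum 10 ms); under those assumptions it reproduces the quoted 28 ms, 13 ms, and 23 ms averages.

```python
from collections import deque

# Deterministic-modeling sketch. The burst times and the RR quantum below are
# assumptions (the slide's own table is not shown here): P1=10, P2=29, P3=3,
# P4=7, P5=12, all arriving at t = 0, quantum = 10 ms.

bursts = {"P1": 10, "P2": 29, "P3": 3, "P4": 7, "P5": 12}

def avg_wait_in_order(order):
    # Non-preemptive run in the given order (used for both FCFS and SJF).
    clock, total = 0, 0
    for p in order:
        total += clock
        clock += bursts[p]
    return total / len(order)

def avg_wait_rr(order, quantum=10):
    remaining, ready, finish, clock = dict(bursts), deque(order), {}, 0
    while ready:
        p = ready.popleft()
        run = min(quantum, remaining[p])
        clock += run
        remaining[p] -= run
        if remaining[p]:
            ready.append(p)            # preempted: back to the tail
        else:
            finish[p] = clock
    return sum(finish[p] - bursts[p] for p in order) / len(order)

print(avg_wait_in_order(["P1", "P2", "P3", "P4", "P5"]))   # 28.0  (FCFS)
print(avg_wait_in_order(sorted(bursts, key=bursts.get)))   # 13.0  (non-preemptive SJF)
print(avg_wait_rr(["P1", "P2", "P3", "P4", "P5"]))         # 23.0  (RR, quantum 10)
```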
Queuing Models
• Describe the arrival of processes, and CPU and I/O bursts, probabilistically
 Commonly exponential, and described by the mean
 Compute average throughput, utilization, waiting time, etc.
• The computer system is described as a network of servers, each with a queue of waiting processes
 Knowing the arrival rates and service rates, we can compute utilization, average queue length, average wait time, etc. (see the sketch below)
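As an illustration of that last point, the sketch below uses the simplest single-server model (an M/M/1 queue, which is an assumption beyond what the slides state): from the arrival rate and service rate it computes utilization, the average number of processes in the system, and the average time spent in the system.

```python
# Illustrative M/M/1 sketch (the single-server exponential model is an
# assumption; the slides only say arrivals and bursts are exponential):
# standard formulas for utilization, average number in the system, and
# average time in the system, given arrival rate and service rate.

def mm1(arrival_rate, service_rate):
    rho = arrival_rate / service_rate         # utilization; must be < 1 for a steady state
    n = rho / (1 - rho)                       # average number of processes in the system
    w = 1 / (service_rate - arrival_rate)     # average time a process spends in the system
    return rho, n, w

# Example: 8 arrivals/s served at 10 processes/s -> 80% utilization,
# 4 processes in the system on average, 0.5 s average time in the system.
print(mm1(arrival_rate=8, service_rate=10))   # (0.8, 4.0, 0.5)
```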
Little’s Formula
 n = average queue length
 W = average waiting time in queue
 λ = average arrival rate into queue
 Little’s law – in steady state, the rate at which processes leave the queue must equal the rate at which they arrive, thus:
n = λ × W
 Valid for any scheduling algorithm and arrival distribution
 For example, if on average 7 processes arrive per second, and there are normally 14 processes in the queue, then the average wait time per process = 2 seconds
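A one-line check of that example, rearranging Little's law as W = n / λ (variable names illustrative):

```python
# Little's law rearranged for the example above: W = n / lambda.
n = 14              # average number of processes in the queue
arrival_rate = 7    # processes arriving per second
print(n / arrival_rate)   # 2.0 seconds average wait per process
```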
Simulations
 Queueing models limited
 Simulations more accurate
• Programmed model of computer system
• Clock is a variable
• Gather statistics indicating algorithm performance
• Data to drive simulation gathered via
 Random number generator according to probabilities
 Distributions defined mathematically or empirically
 Trace tapes record sequences of real events in real systems
Evaluation of CPU Schedulers by Simulation
Implementation
 Even simulations have limited accuracy.
 Just implement the new scheduler and test it in real systems.
• High cost, high risk
• Environments vary
 Most flexible schedulers can be modified per site or per system.
 Or provide APIs to modify priorities.
 But again, environments vary.
