Scheduling Algorithms


Process Scheduling

Multiprogramming is managed by a special component known as the process scheduler. It is responsible for ensuring:
Maximum CPU utilization
A higher degree of multiprogramming

What is CPU Utilization


It is the fraction of time during which the CPU is busy executing processes. The goal is to keep the CPU busy at all times.

What is Degree of Multiprogramming

The maximum number of programs running at a time is known as the degree of multiprogramming.

Types of Scheduling
There are two types of process scheduling policies:
Non-Preemptive Scheduling
Preemptive Scheduling

Non Preemptive Scheduling


Non-preemptive algorithms are designed so that once a process enters the running state, it is not removed from the processor until it has completed its service time. context_switch() is called only when the process terminates or blocks.

Preemptive Scheduling
Preemptive algorithms are driven by the notion of prioritized computation. The process with the highest priority should always be the one currently using the processor. If a process is currently using the processor and a new process with a higher priority enters the ready list, the process on the processor is removed and returned to the ready list until it is once again the highest-priority process in the system.

Preemptive Scheduling
context_switch() may be called while the process is still running, usually triggered by a timer interrupt or by the arrival of a higher-priority process.

Advantages/Disadvantages of Non Preemptive Scheduling


Advantages of Non-Preemptive Scheduling:
More control over how the CPU is used
Simple to implement

Advantages of Preemptive scheduling:


More robust: one process cannot monopolize the CPU
Fairness: the OS makes sure the CPU is shared fairly among all running processes

Goals for Scheduling


Make sure your scheduling strategy is good with respect to the following criteria:
Utilization/Efficiency: keep the CPU busy 100% of the time with useful work
Throughput: maximize the number of jobs processed per unit of time
Turnaround time: amount of time to execute a particular process

Goals for Scheduling


Waiting time: sum of the time spent in the ready queue; minimize this
Response time: time from submission until the first response is produced; minimize response time for interactive users
Fairness: make sure each process gets a fair share of the CPU

Scheduling Algorithm Optimization Criteria

Max CPU utilization
Max throughput
Min turnaround time
Min waiting time
Min response time

Dispatcher
The dispatcher module shifts a process from one state to another:
by switching context
by jumping to the proper location in the user program to restart that program

Dispatch latency: the time it takes for the dispatcher to stop one process and start another running

Process Scheduling Queues


Every device (CPU, block devices, character devices, files) has a scheduling queue of waiting processes

Process Scheduling Queues


Job queue: set of all processes in the system
Ready queue: set of all processes residing in main memory, ready and waiting to execute
Device queues: sets of processes waiting for an I/O device
Processes migrate among the various queues

Queuing Diagram


Representation of Process Scheduling

Schedulers
Scheduling is a fundamental O.S. function. Schedulers affect the performance of the system.
Types of Scheduler
Short-term scheduler
Medium-term scheduler
Long-term scheduler

Schedulers
Short-term scheduler: selects a process from the ready queue for CPU allocation.
Medium-term scheduler (swapper): determines which processes to swap out of memory to disk and back in.
Long-term scheduler: determines which processes are admitted into the system (into the ready queue).

Scheduling

Schedulers (Cont.)
Short-term scheduler is invoked very frequently (milliseconds) (must be fast)
Long-term scheduler is invoked very infrequently (seconds, minutes) (may be slow)

Schedulers (Cont.)
The long-term scheduler controls the degree of multiprogramming

Nature of process
Processes can be described as either:
I/O-bound process: spends more time doing I/O than computations; many short CPU bursts
CPU-bound process: spends more time doing computations; few very long CPU bursts

Alternating Sequence of CPU And I/O Bursts

Nature of process
Cooperating processes
Competitive processes

Cooperating Processes
Independent process: cannot affect or be affected by the execution of another process
Cooperating process: can affect or be affected by the execution of another process
Advantages of process cooperation:
Information sharing
Computation speed-up
Modularity
Convenience

Competitive Process
Sharing of resources is important.
Deadlock may occur. Proper mutual exclusion is required.
Critical Section

Interprocess Communication (IPC)


Mechanism for processes to communicate and to synchronize their actions
Types of IPC:
Direct communication
Indirect communication

Interprocess Communication (IPC)


IPC facility provides two operations:
send(message): message size may be fixed or variable
receive(message)

If P and Q wish to communicate, they need to:
establish a communication link between them
exchange messages via send/receive

Implementation of communication link


physical (e.g., shared memory, hardware bus)
logical (e.g., logical properties)

Direct Communication
Processes must name each other explicitly:
send(P, message): send a message to process P
receive(Q, message): receive a message from process Q
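A minimal sketch of direct communication in Python, using one multiprocessing queue as each process's inbox; the send/receive helpers and process names are illustrative, not a real OS API:

from multiprocessing import Process, Queue

def send(inbox, message):
    # send(P, message): drop the message into P's inbox
    inbox.put(message)

def receive(inbox):
    # receive(...): block until a message is available in our inbox
    return inbox.get()

def p_body(p_inbox, q_inbox):
    send(q_inbox, "hello from P")
    print("P received:", receive(p_inbox))

def q_body(p_inbox, q_inbox):
    msg = receive(q_inbox)
    send(p_inbox, "ack: " + msg)

if __name__ == "__main__":
    p_inbox, q_inbox = Queue(), Queue()
    P = Process(target=p_body, args=(p_inbox, q_inbox))
    Q = Process(target=q_body, args=(p_inbox, q_inbox))
    P.start(); Q.start()
    P.join(); Q.join()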

Properties of communication link


Links are established automatically
A link is associated with exactly one pair of communicating processes
Between each pair there exists exactly one link
The link may be unidirectional, but is usually bidirectional

Indirect Communication
Messages are sent to and received from mailboxes (also referred to as ports)
Each mailbox has a unique id
Processes can communicate only if they share a mailbox

Types of Scheduling Algorithms


What the scheduler should optimize for is not the same in all systems and environments:
Batch
Interactive
Real time


Goals of various scheduling algorithms


For all systems:
Fairness: giving each process a fair share of the CPU
Policy enforcement: seeing that the stated policy is carried out
Balance: keeping all parts of the system busy

BATCH goals
These goals can incorrectly influence the designers of interactive systems because they used to be good goals:
Throughput: maximize the number of jobs per hour
Turnaround time: minimize the time between submission and termination
CPU utilization: keep the CPU busy all the time


Interactive Environments Scheduling Needs


Preemption is essential to keep one process from monopolizing the CPU.
Such monopolizing may not be intentional; it could happen because of a bug.


Goals of various scheduling algorithms


Interactive systems:
Response time: respond to requests quickly
Proportionality: meet users' expectations

Real Time Constraints


Preemption may NOT be needed and may even defeat the goals.
Processes know they can't run for long periods of time: they do their work and block (suspend themselves).
How is this different from interactive systems?
Interactive systems are general purpose and run arbitrary programs.


Real Time Systems scheduling goals


Characteristics:
Deadlines that must be met
Deadlines that usually should be met

Foremost goal: meet deadlines


What's a Good Scheduling Algorithm?


Environment dependent (batch, interactive, real time)
Others are always true:
Fairness
Policy enforcement
Balance


What's a Good Scheduling Algorithm?


Or are they?
Fairness: comparable processes get comparable service
Policy enforcement: policies vary with equivalence classes of processes
Balance: all parts of the system are busy when possible, but not so busy as to hurt throughput

Three Level Scheduling


Batch systems allow scheduling at 3 levels


First-Come, First-Served (FCFS) Scheduling


Process   Burst Time
P1        24
P2        3
P3        3

Suppose that the processes arrive in the order P1, P2, P3, all at time 0. The Gantt chart for the schedule is:

P1: 0-24 | P2: 24-27 | P3: 27-30

Waiting time for P1 = 0; P2 = 24; P3 = 27
Average waiting time: (0 + 24 + 27)/3 = 17
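As an illustration, a short Python sketch that reproduces these FCFS waiting times; the helper name is made up, and all processes are assumed to arrive at time 0 in the given order:

def fcfs_waiting_times(bursts):
    # Each process waits until all earlier arrivals have finished.
    waits, clock = [], 0
    for burst in bursts:
        waits.append(clock)
        clock += burst
    return waits

waits = fcfs_waiting_times([24, 3, 3])      # P1, P2, P3
print(waits, sum(waits) / len(waits))       # [0, 24, 27] 17.0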

FCFS Scheduling (Cont)


Suppose that the processes arrive in the order P2, P3, P1. The Gantt chart for the schedule is:

P2: 0-3 | P3: 3-6 | P1: 6-30

Waiting time for P1 = 6; P2 = 0; P3 = 3
Average waiting time: (6 + 0 + 3)/3 = 3
Much better than the previous case.

Shortest-Job-First (SJF) Scheduling


Associate with each process the length of its next CPU burst. Use these lengths to schedule the process with the shortest time.
SJF is optimal: it gives the minimum average waiting time for a given set of processes.

Shortest-Job-First (SJF) Scheduling


Two schemes:
Nonpreemptive: once the CPU is given to the process, it cannot be preempted until it completes its CPU burst.
Preemptive: if a new process arrives with a CPU burst length less than the remaining time of the currently executing process, preempt. This scheme is known as Shortest-Remaining-Time-First (SRTF).

Example of Non-Preemptive SJF


Process   Arrival Time   Burst Time
P1        0.0            7
P2        2.0            4
P3        4.0            1
P4        5.0            4

SJF (non-preemptive) Gantt chart:

P1: 0-7 | P3: 7-8 | P2: 8-12 | P4: 12-16

Average waiting time = (0 + 6 + 3 + 7)/4 = 4
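A small Python sketch of non-preemptive SJF that reproduces this result; the function name and the dictionary format for the jobs are illustrative choices, not part of any standard API:

def sjf_nonpreemptive(jobs):                  # jobs: {name: (arrival, burst)}
    done, clock, waits = set(), 0, {}
    while len(done) < len(jobs):
        ready = [(b, a, n) for n, (a, b) in jobs.items()
                 if n not in done and a <= clock]
        if not ready:                         # CPU idle until the next arrival
            clock = min(a for n, (a, b) in jobs.items() if n not in done)
            continue
        burst, arrival, name = min(ready)     # shortest burst first
        waits[name] = clock - arrival
        clock += burst
        done.add(name)
    return waits

waits = sjf_nonpreemptive({"P1": (0, 7), "P2": (2, 4), "P3": (4, 1), "P4": (5, 4)})
print(waits, sum(waits.values()) / 4)         # average = 4.0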

Example of Preemptive SJF


Process   Arrival Time   Burst Time
P1        0.0            7
P2        2.0            4
P3        4.0            1
P4        5.0            4

SJF (preemptive) Gantt chart:

P1: 0-2 | P2: 2-4 | P3: 4-5 | P2: 5-7 | P4: 7-11 | P1: 11-16

Average waiting time = (9 + 1 + 0 + 2)/4 = 3
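A sketch of SRTF simulated one time unit at a time in Python; the function name and job format are again illustrative, and a one-unit tick is assumed for simplicity:

def srtf(jobs):                                # jobs: {name: (arrival, burst)}
    remaining = {n: b for n, (a, b) in jobs.items()}
    finish, clock = {}, 0
    while remaining:
        ready = [(remaining[n], n) for n in remaining if jobs[n][0] <= clock]
        if not ready:
            clock += 1
            continue
        _, name = min(ready)                   # least remaining time runs this tick
        remaining[name] -= 1
        clock += 1
        if remaining[name] == 0:
            del remaining[name]
            finish[name] = clock
    # waiting time = finish - arrival - burst
    return {n: finish[n] - a - b for n, (a, b) in jobs.items()}

waits = srtf({"P1": (0, 7), "P2": (2, 4), "P3": (4, 1), "P4": (5, 4)})
print(waits, sum(waits.values()) / 4)          # average = 3.0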

Priority Scheduling
A priority number (integer) is associated with each process.
The CPU is allocated to the process with the highest priority (smallest integer = highest priority).
Priority scheduling may be preemptive or nonpreemptive.

Priority Scheduling
SJF is priority scheduling where the priority is the predicted next CPU burst time.
Problem: starvation; low-priority processes may never execute.
Solution: aging; as time progresses, increase the priority of waiting processes.
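A minimal sketch of aging in Python, assuming smaller numbers mean higher priority as above; the aging step and job names are made-up illustrations:

AGING_STEP = 1          # illustrative constant, not from any particular OS

def age_ready_queue(ready):                 # ready: {name: priority}
    # Every tick, each waiting process gets a small priority boost.
    for name in ready:
        ready[name] = max(0, ready[name] - AGING_STEP)

ready = {"low_prio_job": 120, "new_job": 10}
for tick in range(50):
    age_ready_queue(ready)                  # called once per clock tick
print(ready)                                # low_prio_job's number has dropped from 120 to 70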

Round Robin (RR)


Each process gets a small unit of CPU time (time quantum), usually 10-100 milliseconds. After this time has elapsed, the process is preempted and added to the end of the ready queue.

Example of RR with Time Quantum = 20


Process   Burst Time
P1        53
P2        17
P3        68
P4        24

The Gantt chart is:

P1: 0-20 | P2: 20-37 | P3: 37-57 | P4: 57-77 | P1: 77-97 | P3: 97-117 | P4: 117-121 | P1: 121-134 | P3: 134-154 | P3: 154-162

Typically, higher average turnaround than SJF, but better response.
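A short Python sketch of round robin that reproduces the slices in this Gantt chart; the function name is illustrative and all jobs are assumed to arrive at time 0:

from collections import deque

def round_robin(jobs, quantum):             # jobs: list of (name, burst)
    queue, clock, slices = deque(jobs), 0, []
    while queue:
        name, burst = queue.popleft()
        run = min(quantum, burst)
        slices.append((name, clock, clock + run))
        clock += run
        if burst > run:                     # unfinished: back to the tail of the queue
            queue.append((name, burst - run))
    return slices

for name, start, end in round_robin([("P1", 53), ("P2", 17), ("P3", 68), ("P4", 24)], 20):
    print(name, start, end)                 # P1 0 20, P2 20 37, ..., P3 154 162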

Round-robin -- interactive
Length of the quantum?
context switching takes time


Time Quantum and Context Switch Time

Turnaround Time Varies With The Time Quantum

Multilevel Queue
The ready queue is partitioned into two types of separate queues:
foreground (interactive)
background (batch)
Each queue has its own scheduling algorithm:
foreground: RR
background: FCFS

Multiple Level Queues


Examples:
Priority-based system with different queues for each priority level.
Put all CPU-bound jobs in one queue and all I/O-bound jobs in another. Alternately select jobs from each queue to keep the system balanced.
Put batch jobs in a background queue and interactive jobs in a foreground queue; treat the foreground queue more favorably than the background queue.

Multilevel Queue
Scheduling must be done between the queues:
Fixed priority scheduling (i.e., serve all from foreground, then from background); possibility of starvation.
Time slice: each queue gets a certain amount of CPU time which it can schedule among its processes, e.g., 80% to foreground in RR and 20% to background in FCFS.

Multilevel Queue Scheduling

Multilevel Feedback Queue


A process can move between the various queues; aging can be implemented this way.
A multilevel-feedback-queue scheduler is defined by the following parameters:
number of queues
scheduling algorithms for each queue

Multilevel Feedback Queue


This is a variation of MLQ, where processes (jobs) are not permanently assigned to a queue when they enter the system. In this approach, if a process exhausts its time quantum (i.e., it is CPU-bound), it is moved to another queue with a longer time quantum and a lower priority. The last level usually uses the FCFS algorithm in this scheme.

Multilevel Feedback Queue


method used to determine when to upgrade a process
method used to determine when to demote a process
method used to determine which queue a process will enter when that process needs service

Example of Multilevel Feedback Queue


Three queues:
Q0: RR with time quantum 8 milliseconds
Q1: RR with time quantum 16 milliseconds
Q2: FCFS

Example of Multilevel Feedback Queue


Scheduling
A new job enters queue Q0, which is served RR. When it gains the CPU, the job receives 8 milliseconds. If it does not finish in 8 milliseconds, the job is moved to queue Q1. At Q1 the job is again served RR and receives 16 additional milliseconds. If it still does not complete, it is preempted and moved to queue Q2 for FCFS.
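A sketch of this three-queue scheme in Python; the job names and burst times are made up, and jobs are assumed to arrive at time 0 (so lower queues are never preempted by new arrivals):

from collections import deque

QUANTA = [8, 16, None]                       # None = FCFS (run to completion)

def mlfq(jobs):                              # jobs: list of (name, burst)
    queues = [deque(jobs), deque(), deque()]
    clock, trace = 0, []
    while any(queues):
        level = next(i for i, q in enumerate(queues) if q)   # highest non-empty queue
        name, burst = queues[level].popleft()
        quantum = QUANTA[level]
        run = burst if quantum is None else min(quantum, burst)
        trace.append((name, level, clock, clock + run))
        clock += run
        if burst > run:                      # used its full quantum: demote one level
            queues[level + 1].append((name, burst - run))
    return trace

print(mlfq([("A", 30), ("B", 6)]))           # A: 8 ms in Q0, 16 ms in Q1, the rest in Q2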

Multilevel Feedback Queues

Multiple-Processor Scheduling
CPU scheduling is more complex when multiple CPUs are available.
Homogeneous processors within a multiprocessor.
Load sharing.
Asymmetric multiprocessing: only one processor accesses the system data structures, alleviating the need for data sharing.

Multiple-Processor Scheduling

Different rules may be used for different processors.
Load sharing is the distribution of work such that all processors have an equal amount to do.
Each processor can schedule from a common ready queue, or a master-slave arrangement can be used.

Multiple-Processor Scheduling
Master/slave assignment: kernel functions always run on a particular processor; the other processors execute user processes.
Peer assignment: the OS can execute on any processor; each processor does its own scheduling from the pool of available processes.


Guaranteed scheduling -- interactive


A different approach: make real promises about performance to users and then live up to them.
Example: if there are n users logged in while you are working, you will receive about 1/n of the CPU power.


Guaranteed scheduling -- interactive


To make good on this promise, the system:
keeps track of how much CPU each process has had since its creation
computes the amount of CPU each is entitled to (time since creation divided by n)
computes the ratio of actual CPU time consumed to CPU time entitled
runs the process with the lowest ratio until its ratio passes that of its closest competitor
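A minimal sketch of that bookkeeping in Python; the process names, field names, and time units are illustrative:

def pick_next(processes, now):
    # processes: {name: {"created": creation_time, "used": cpu_time_consumed}}
    n = len(processes)
    def ratio(p):
        entitled = (now - p["created"]) / n      # fair share since creation
        return p["used"] / entitled if entitled > 0 else 0.0
    # Run the process with the lowest consumed-to-entitled ratio.
    return min(processes, key=lambda name: ratio(processes[name]))

procs = {"editor": {"created": 0, "used": 10},
         "compiler": {"created": 0, "used": 40}}
print(pick_next(procs, now=100))                 # editor (has had less than its share)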

Lottery scheduling -- interactive


It is sometimes difficult to implement a way to live up to such promises.
Lottery scheduling gives similarly predictable results with a simpler implementation.
Give processes lottery tickets for system resources such as CPU time.


Lottery scheduling -- interactive


When a scheduling decision needs to be made, a lottery ticket is chosen at random and the process holding that ticket gets the resource. Applied to CPU scheduling, the system might hold a lottery 50 times a second, with each winner getting 20 ms of CPU time as its prize.
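A small Python sketch of one lottery draw; the ticket counts and process names are made up for illustration:

import random

def hold_lottery(tickets):                  # tickets: {process: ticket_count}
    total = sum(tickets.values())
    winner_ticket = random.randrange(total) # pick one ticket at random
    for process, count in tickets.items():
        if winner_ticket < count:
            return process                  # this process wins the next quantum
        winner_ticket -= count

tickets = {"video_player": 75, "indexer": 25}
wins = [hold_lottery(tickets) for _ in range(5000)]
print(wins.count("video_player") / len(wins))   # roughly 0.75 of the draws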

Interesting properties of Lottery Scheduling


Highly responsive: if a new process shows up and gets some tickets, it has a chance of winning right away.


Interesting properties of Lottery Scheduling


Cooperating processes may exchange tickets:
When a client process sends a message to a server process and then blocks, it can give all its tickets to the server.
When the server is done, it returns the tickets.
In the absence of clients, the server needs no tickets.

Solves some hard problems

Fair-share scheduling -- interactive


So far we have assumed each process is scheduled on its own, independent of who its owner is. Some systems schedule processes based on the CPU usage allocated to each user, independent of the number of processes that user has.
Example: two users are each promised 50% of the CPU; they each get 50% no matter how many processes they have in existence.
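A minimal sketch of picking the next user under fair-share in Python; the user names, shares, and usage figures are made up:

def pick_user(users):       # users: {user: {"share": promised_fraction, "used": cpu_seconds}}
    total_used = sum(u["used"] for u in users.values()) or 1
    def deficit(u):
        # How far the user's actual CPU fraction is above or below the promise.
        return users[u]["used"] / total_used - users[u]["share"]
    return min(users, key=deficit)          # the most under-served user runs next

users = {"alice": {"share": 0.5, "used": 70.0},
         "bob":   {"share": 0.5, "used": 30.0}}
print(pick_user(users))                     # bob, regardless of his process count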

Policy versus Mechanism


Separate what is allowed to be done from how it is done.
A process knows which of its child threads are important and need priority.
None of the algorithms discussed so far accept input from user processes about scheduling decisions.


Policy versus Mechanism


Separate the scheduling mechanism from the scheduling policy.
The scheduling algorithm is parameterized somehow:
the mechanism lives in the kernel (operating system)
the parameters are filled in by user processes
the policy is set by the user process
