
Chapter Two

Process and Process Management


2.1 Process
A process is basically a program in execution. The execution of a process must progress in a
sequential fashion. A process is defined as an entity which represents the basic unit of work to be
implemented in the system. To put it in simple terms, we write our computer programs in a text
file, and when we execute the program it becomes a process which performs all the tasks
mentioned in the program. When a program is loaded into memory and becomes a process,
it can be divided into four sections: stack, heap, text, and data.

 Stack: The process stack contains temporary data such as function parameters, return
addresses, and local variables.
 Heap: This is memory that is dynamically allocated to the process during its run time.
 Text: This section contains the compiled program code, read in when the program is
launched. The current activity is represented by the value of the program counter and
the contents of the processor's registers.
 Data: This section contains the global and static variables.
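The four sections are easy to see in a small C program. The following is only a sketch marking where each object typically lives; exact placement is compiler- and OS-dependent:

    #include <stdlib.h>

    int global_count = 0;       /* data section: global and static variables  */

    int square(int x)           /* machine code for this function sits in the */
    {                           /* text section                                */
        int result = x * x;     /* stack: local variable in square's frame     */
        return result;
    }

    int main(void)
    {
        int local = 5;                          /* stack: local variable       */
        int *buf = malloc(10 * sizeof *buf);    /* heap: allocated at run time */
        if (buf == NULL)
            return 1;
        buf[0] = square(local);                 /* use the heap storage        */
        global_count++;                         /* update the data section     */
        free(buf);                              /* heap memory is freed        */
        return global_count - 1;                /* returns 0                   */
    }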
Processes may be of two types:
I/O-bound processes: spend more time doing I/O than computation and have many short CPU
bursts. Word processors and text editors are good examples of such processes.
(Typical pattern: short CPU bursts separated by long I/O bursts.)
CPU-bound processes: spend more time doing computation and have a few very long CPU bursts.
(Typical pattern: long CPU bursts with infrequent I/O bursts.)
2.1.1 Components of a process
 Object program: the code to be executed.
 Data: the data used in executing the program.
 Resources: the resources the program requires while executing.
 Status: the status of the process's execution. A process can run to
completion only when all requested resources have been allocated to it.
Program
A program by itself is not a process. It is a static entity made up of program statements, while a
process is a dynamic entity. A program contains the instructions to be executed by a processor. A
program occupies a single place in main memory and stays there. A program
does not perform any action by itself.
2.1.2 Process Life Cycle
When a process executes, it passes through different states. These states may differ across
operating systems, and their names are not standardized. In general, a process
is in one of the following five states at a time.

Start: This is the initial state when a process is first started/created.


Ready: The process is waiting to be assigned to a processor. Ready processes are waiting for
the operating system to allocate the processor to them so that they can run. A process may
come into this state after the Start state, or while running, if it is interrupted by the scheduler to
assign the CPU to some other process.
Running: Once the process has been assigned to a processor by the OS scheduler, the process
state is set to running and the processor executes its instructions.
Waiting: Process moves into the waiting state if it needs to wait for a resource, such as waiting
for user input, or waiting for a file to become available.
Terminated or Exit: Once the process finishes its execution, or it is terminated by the operating
system, it is moved to the terminated state where it waits to be removed from main memory.
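These states can be represented directly in code. As an illustrative sketch (not any particular kernel's definition), an OS might keep a state tag like this for every process:

    /* Hypothetical process-state tag, one per process. */
    enum proc_state {
        PSTATE_START,      /* just created                        */
        PSTATE_READY,      /* waiting to be assigned a processor  */
        PSTATE_RUNNING,    /* instructions being executed         */
        PSTATE_WAITING,    /* blocked on I/O, input, or an event  */
        PSTATE_TERMINATED  /* finished; awaiting removal          */
    };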
2.1.3 Implementation of Processes
To implement the process model, the operating system maintains a table (an array of structures),
called the process table, with one entry per process. (Some authors call these entries process
control blocks.) This entry contains information about the process’ state, its program counter,
stack pointer, memory allocation, the status of its open files, its accounting and scheduling
information, and everything else about the process that must be saved when the process is
switched from running to ready or blocked state so that it can be restarted later as if it had never
been stopped.

2.1.4 Process Control Block
Each process is represented in the operating system by a process control block (PCB) also called a task
control block. It contains many pieces of information associated with a specific process, including these:
 Process state: The state may be new, ready, running, waiting, halted, and so on.
 Program counter: The counter indicates the address of the next instruction to
be executed for this process.
 CPU registers: The registers vary in number and type, depending on the
computer architecture. They include accumulators, index registers, stack
pointers, and general-purpose registers, plus any condition-code
information. Along with the program counter, this state information must be
saved when an interrupt occurs, to allow the process to be continued
correctly afterward.
 CPU scheduling information: This information includes a process priority,
pointers to scheduling queues, and any other scheduling parameters.
 Memory management information: This information may include the values
of the base and limit registers, the page tables, or the segment tables,
depending on the memory system used by the operating system.
 Accounting information: This information includes the amount of CPU and real time used, time
limits, account numbers, job or process numbers, and so on.
 I/O status information: The information includes the list of I/O devices allocated to this process, a
list of open files and so on.
The PCB serves as the repository for any information that can vary from process to process.
The loader/linker sets flags and registers when a process is created. If that process is suspended,
the contents of the registers are saved on a stack and the pointer to the particular stack frame is
stored in the PCB. By this technique, the hardware state can be restored so that the process can
be scheduled to run again.
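Concretely, a PCB can be pictured as a structure like the following sketch. The field names and sizes are illustrative only; a real kernel's PCB (for example, Linux's struct task_struct) carries far more fields:

    #include <stdint.h>

    #define MAX_OPEN_FILES 16

    /* Illustrative process control block. */
    struct pcb {
        int        pid;                 /* process number                    */
        int        state;               /* new, ready, running, waiting...   */
        uint64_t   program_counter;     /* address of the next instruction   */
        uint64_t   registers[16];       /* saved CPU registers               */
        int        priority;            /* CPU scheduling information        */
        struct pcb *next_ready;         /* link to the next PCB in the queue */
        uint64_t   base, limit;         /* memory-management information     */
        uint64_t   cpu_time_used;       /* accounting information            */
        int        open_files[MAX_OPEN_FILES]; /* I/O status information     */
    };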
2.1.5 Operations on Processes
The processes in the system can execute concurrently, and must be created and deleted dynamically. Thus,
the operating system must provide a mechanism for process creation and termination.
Process Creation:
 A process may create several new processes during the course of execution.
 The creating process is called the parent process, whereas the new processes are called the children of
that process. Each of these processes may in turn create other processes, forming a tree of processes.
 When a process creates a new process, two possibilities exist in terms of execution:
 The parent process continues to execute concurrently with its children.
 The parent waits until some or all of its children have terminated.
 There are also two possibilities in terms of the address space of the new process:
 The child process is a duplicate of the parent process.
 The child process has a program loaded into it.
Process Termination:

 A process terminates when it finishes executing its last statement and asks the operating system to
delete it by using the exit system call. At that point, the process may return data (output) to its parent process
(via the wait system call).
 All of the resources of the process, including physical and virtual memory, open files, and I/O buffers,
are deallocated by the operating system.
 A parent may terminate the execution of one of its children for a variety of reasons, such as:
 The child has exceeded its usage of some of the resources it has been allocated.
 The task assigned to the child is no longer required.
 The parent is exiting, and the operating system does not allow a child to continue if its parent
terminates.
In UNIX, a process may terminate by using the exit system call, and its parent process may wait for that
event by using the wait system call.
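These calls can be seen together in a minimal UNIX sketch: the parent forks a child, the child loads a new program with exec, and the parent waits for it to terminate (the program run here, ls, is just an example):

    #include <stdio.h>
    #include <stdlib.h>
    #include <sys/wait.h>
    #include <unistd.h>

    int main(void)
    {
        pid_t pid = fork();                /* create a child process     */
        if (pid < 0) {
            perror("fork");
            exit(EXIT_FAILURE);
        }
        if (pid == 0) {                    /* child: load a new program  */
            execlp("ls", "ls", "-l", (char *)NULL);
            perror("execlp");              /* reached only if exec fails */
            exit(EXIT_FAILURE);
        }
        int status;                        /* parent: wait for the child */
        waitpid(pid, &status, 0);
        printf("child %d exited with status %d\n",
               (int)pid, WEXITSTATUS(status));
        return 0;
    }

Here the child's address space first duplicates the parent's (fork) and then has a new program loaded into it (exec), which are exactly the two address-space possibilities listed above.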
Process Suspension
The main reasons for process suspension are:
 Swapping: the operating system needs to release sufficient main memory to bring in a process that is
ready to execute.
 Other OS reason: the operating system may suspend a background or utility process or a process
that is suspected of causing a problem.
 Interactive user request: a user may suspend execution of a program for purposes such as debugging.
 Timing: a process may be executed periodically.
 Parent process request: a parent process may wish to suspend execution of a descendent to examine
or modify the suspended process.

2.2 Thread
A thread is a flow of execution through the process code, with its own program counter that
keeps track of which instruction to execute next, system registers which hold its current
working variables, and a stack which contains the execution history. A thread shares with its
peer threads information such as the code segment, the data segment, and open files. When one thread
alters a code-segment memory item, all other threads see the change.
A thread is also called a lightweight process. Threads provide a way to improve application
performance through parallelism. Threads represent a software approach to improving
operating system performance by reducing the overhead of process switching; in this respect a
thread behaves much like a classical process, but at lower cost. Each thread belongs to exactly
one process, and no thread can exist outside a process. Each thread represents a separate flow of
control. Threads have been used successfully in implementing network servers and web servers.
They also provide a suitable foundation for parallel execution of applications on shared-memory
multiprocessors.
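As a small sketch of the multithreaded case using POSIX threads (compile with -pthread), two threads share the process's global data but each runs on its own stack:

    #include <pthread.h>
    #include <stdio.h>

    int shared = 0;                  /* data segment: visible to all threads */

    static void *worker(void *arg)
    {
        int id = *(int *)arg;        /* locals live on this thread's stack   */
        shared += id;                /* unsynchronized for brevity; real code
                                        would protect this with a mutex      */
        printf("thread %d done\n", id);
        return NULL;
    }

    int main(void)
    {
        pthread_t t1, t2;
        int id1 = 1, id2 = 2;
        pthread_create(&t1, NULL, worker, &id1);
        pthread_create(&t2, NULL, worker, &id2);
        pthread_join(t1, NULL);      /* wait for both peers to finish */
        pthread_join(t2, NULL);
        printf("shared = %d\n", shared);
        return 0;
    }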

2.2.1 Difference between Process and Thread

No | Process | Thread
1  | A process is heavyweight, or resource-intensive. | A thread is lightweight, taking fewer resources than a process.
2  | Process switching needs interaction with the operating system. | Thread switching does not need to interact with the operating system.
3  | In multiple processing environments, each process executes the same code but has its own memory and file resources. | All threads can share the same set of open files and child processes.
4  | If one process is blocked, then no other process can execute until the first process is unblocked. | While one thread is blocked and waiting, a second thread in the same task can run.
5  | Multiple processes without using threads use more resources. | Multiple threaded processes use fewer resources.
6  | In multiple processes, each process operates independently of the others. | One thread can read, write, or change another thread's data.

2.2.2 Advantages of Thread


 Threads minimize the context switching time.
 Use of threads provides concurrency within a process.
 Efficient communication.
 It is more economical to create and context switch threads.
 Threads allow utilization of multiprocessor architectures to a greater scale and efficiency.
2.2.3 Types of Thread
Threads are implemented in following two ways:
 User Level Threads − user-managed threads.
 Kernel Level Threads − operating-system-managed threads acting on the kernel, the
operating system core.

2.2.4 User Level Threads
In this case, the kernel is not aware of the existence of threads; thread management is done
entirely by a thread library in user space. The thread library contains code for creating and
destroying threads, for passing messages and data between threads, for scheduling thread
execution, and for saving and restoring thread contexts. The application starts with a single thread.

Advantages
 Thread switching does not require Kernel mode privileges.

 User level thread can run on any operating system.


 Scheduling can be application specific in the user level thread.
 User level threads are fast to create and manage.
Disadvantages
 In a typical operating system, most system calls are blocking, so when one user-level thread makes a blocking call, the entire process is blocked.

 Multithreaded application cannot take advantage of multiprocessing.


2.2.5 Kernel Level Threads
In this case, thread management is done by the Kernel. There is no thread management code in
the application area. Kernel threads are supported directly by the operating system. Any
application can be programmed to be multithreaded. All of the threads within an application are
supported within a single process.
The Kernel maintains context information for the process as a whole and for individual threads
within the process. Scheduling by the Kernel is done on a thread basis. The Kernel performs
thread creation, scheduling and management in Kernel space. Kernel threads are generally
slower to create and manage than the user threads.

Advantages
 The Kernel can simultaneously schedule multiple threads from the same process on multiple
processors.
 If one thread in a process is blocked, the Kernel can schedule another thread of the same
process.
 Kernel routines themselves can be multithreaded.
Disadvantages
 Kernel threads are generally slower to create and manage than the user threads.
 Transfer of control from one thread to another within the same process requires a mode
switch to the Kernel.
2.2.6 Difference between User-Level & Kernel-Level Thread

No | User-Level Threads | Kernel-Level Threads
1  | User-level threads are faster to create and manage. | Kernel-level threads are slower to create and manage.
2  | Implementation is by a thread library at the user level. | The operating system supports creation of kernel threads.
3  | A user-level thread is generic and can run on any operating system. | A kernel-level thread is specific to the operating system.
4  | Multithreaded applications cannot take advantage of multiprocessing. | Kernel routines themselves can be multithreaded.

2.3 Multithreading Models


Some operating systems provide a combined user level thread and Kernel level thread facility.
Solaris is a good example of this combined approach. In a combined system, multiple threads
within the same application can run in parallel on multiple processors and a blocking system call
need not block the entire process. There are three types of multithreading models, which are
listed below:
 Many to many relationships.

 Many to one relationship.


 One to one relationship.
2.3.1 Many to Many Model
The many-to-many model multiplexes any number of user threads onto an equal or smaller
number of kernel threads. In this model, developers can create as many user threads as
necessary, and the corresponding kernel threads can run in parallel on a multiprocessor machine.
This model provides the best level of concurrency: when a thread performs a blocking system
call, the kernel can schedule another thread for execution.

2.3.2 Many to One Model


The many-to-one model maps many user-level threads to one kernel-level thread. Thread
management is done in user space by the thread library. When a thread makes a blocking system
call, the entire process is blocked. Only one thread can access the kernel at a time, so
multiple threads are unable to run in parallel on multiprocessors. If user-level thread libraries
are implemented on an operating system whose kernel does not support threads, the system
uses the many-to-one model.

2.3.3 One to One Model


There is a one-to-one relationship between each user-level thread and a kernel-level thread. This
model provides more concurrency than the many-to-one model. It also allows another thread to
run when a thread makes a blocking system call, and it allows multiple threads to execute in
parallel on multiprocessors. The disadvantage of this model is that creating a user thread requires
creating the corresponding kernel thread. OS/2, Windows NT, and Windows 2000 use the
one-to-one model.

2.4 Inter Process Communication (IPC)
The concurrent processes executing in the operating system may be either independent or cooperating
processes. Hence, a process can be of two types:
 Independent process.
 Co-operating process.
Independent processes: A process is independent if it cannot affect or be affected by the
processes executing in the system. i.e., any process that does not share any data with any other
process is independent.
Cooperating processes: A process is cooperating if it can affect or be affected by the other
processes executing in the system. i.e., any process that shares data with other processes is a
cooperating process.
An independent process is not affected by the execution of other processes, while a cooperating
process can be affected by other executing processes. One might think that processes running
independently execute most efficiently, but in practice there are many situations where
cooperation can increase computational speed, convenience, and modularity.
There are several reasons for providing an environment that allows process cooperation:
 Information sharing: Since several users may be interested in the same piece of
information (for instance, a shared file), we must provide an environment to allow
concurrent access to these types of resources.
 Computation speedup: If we want a particular task to run faster, we must break it into
subtasks, each of which will be executing in parallel with the others. Such a speedup can
be achieved only if the computer has multiple processing elements (such as CPUs or I/O
channels).
 Modularity: We may want to construct the system in a modular fashion, dividing the
system functions into separate processes or threads.
 Convenience: Even an individual user may have many tasks on which to work at one
time. For instance, a user may be editing, printing, and compiling in parallel. Cooperating
processes require mechanisms that allow them to communicate with one another and to
synchronize their actions.
Inter process communication (IPC) is a mechanism which allows processes to communicate
with each other and synchronize their actions. The communication between these processes can
be seen as a method of cooperation between them. Processes can communicate with each other
in two basic ways:
 Shared Memory
 Message passing
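As a minimal message-passing sketch, a UNIX pipe carries a message from a parent to its child; a shared-memory version would instead map a common region (for example with shm_open and mmap) that both processes read and write:

    #include <stdio.h>
    #include <string.h>
    #include <sys/wait.h>
    #include <unistd.h>

    int main(void)
    {
        int fd[2];
        pipe(fd);                      /* fd[0] = read end, fd[1] = write end */

        if (fork() == 0) {             /* child: the receiver */
            char buf[64];
            close(fd[1]);              /* close the unused write end */
            ssize_t n = read(fd[0], buf, sizeof buf - 1);
            buf[n > 0 ? n : 0] = '\0';
            printf("child received: %s\n", buf);
            close(fd[0]);
            return 0;
        }
        close(fd[0]);                  /* parent: the sender */
        const char *msg = "hello from parent";
        write(fd[1], msg, strlen(msg));
        close(fd[1]);                  /* signals end-of-file to the reader */
        wait(NULL);                    /* reap the child */
        return 0;
    }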

2.5 Process Scheduling


Scheduling is a fundamental OS function. Scheduling is a method that is used to distribute
valuable computing resources, usually processor time, bandwidth and memory, to the various
processes, threads, data flows and applications that need them. Scheduling is done to balance the
load on the system and ensure equal distribution of resources and give some prioritization
according to set rules. This ensures that a computer system is able to serve all requests and
achieve a certain quality of service.
But why do we need to schedule computer resources, especially the CPU? The main reasons are
the following.
 In order to attain the maximum utilization of the CPU, we apply the concept of
multiprogramming. In multiprogramming many processes or programs are kept in
memory at one time. When one process waits, the CPU is assigned to another
process.
 To assign one of the processes to the CPU, we use scheduling techniques that may
not necessarily use a first-in-first-out strategy.
 The objective of multiprogramming is to have some process running at all times, to maximize CPU
utilization.
 The objective of time sharing is to switch the CPU among processes so frequently that users can
interact with each program while it is running.
 In both cases, we expect more than one process in the system; but on a uniprocessor system there
will never be more than one running process, and the rest will have to wait until the CPU is free and
can be rescheduled.
Scheduling Queues:
 As processes enter the system, they are put into a job queue. This queue consists of all processes in the
system.
 The processes that are residing in main memory and are ready and waiting to execute are kept on a list
called the ready queue. This queue is generally stored as a linked list. A ready-queue header contains
pointers to the first and last PCBs in the list, and each PCB has a pointer field that points to the next process in the
ready queue.
 When a process is allocated the CPU, it executes for a while and eventually quits, is interrupted, or
waits for the occurrence of a particular event, such as the completion of an I/O request; such a request may
be to a dedicated tape drive, or to a shared device, such as a disk. The list of processes waiting for a
particular I/O device is called a device queue. Each device has its own device queue.
 A common representation for a discussion of process scheduling is a queuing diagram.
 Each rectangular box represents a queue. Two types of queues are present: the ready queue and a set of
device queues.
 The circles represent the resources that serve the queues and the arrows indicate the flow of processes
in the system.
 A new process is initially put in the ready queue. It waits in the ready queue until it is selected for
execution and is given the CPU. Once the process is allocated the CPU and is executing, one of several
events could occur:
 The process could issue an I/O request, and then be placed in an I/O queue.
 The process could create a new sub process and wait for its termination.
 The process could be removed forcibly from the CPU, as a result of an interrupt and then put back in
the ready queue.
 In the first two cases, the process eventually switches from the waiting state to the ready state, and is
then put back in the ready queue.
Here is a brief description of some of these queues:
 Ready queue – set of all processes residing in main memory, ready and waiting to execute are kept in ready
queue.
 Job queue – When a process enters a system, it is put in a job queue.
 Device queues – There may be many processes in the system requesting I/O. Since only one I/O request
can be serviced at a time for a particular device, a process needing I/O may have to wait. The list of processes
waiting for an I/O device is kept in the device queue for that particular device.
[Figure: Queuing-diagram representation of process scheduling. The ready queue feeds the CPU. A running process may issue an I/O request (joining an I/O queue), have its time slice expire, fork a child and wait for the child to terminate, or wait for an interrupt; in each case it eventually returns to the ready queue.]

 A process continues this cycle until it terminates, at which time it is removed from all queues and has
its PCB and resources deallocated.
Schedulers: Scheduling in a system is done by the aptly named scheduler. A process migrates between the
various scheduling queues throughout its lifetime. The operating system must select processes from these queues
in some fashion, and the selection is carried out by the appropriate scheduler.
Scheduler is mainly concerned with three things:
 Throughput, or how fast it can finish a certain number of tasks from beginning to end per unit of
time
 Latency, which is the turnaround time or the time it takes to finish the task from the time of
request or submission until finish, which includes the waiting time before it could be served
 Response time, which is the time it takes for the process or request to begin being served; in short, the
initial waiting time
Scheduling is largely based on the factors mentioned above and varies depending on the system
and on the system's or user's preferences and objectives. In modern computers
such as PCs, with large amounts of processing power and other resources and with the ability to
multitask by running multiple threads or pipelines at once, scheduling is no longer a big issue;
most of the time processes and applications are given free rein with the extra resources, but the
scheduler is still hard at work managing requests.
There are three types of schedulers:
 Long-term scheduler
 Short-term scheduler
 Medium-term scheduler
Long-term scheduler: In a batch system, more processes are often submitted than can be executed
immediately. These processes are spooled to a mass-storage device (disk), where they are kept for later
execution. The long-term scheduler selects processes from this pool and loads them into memory for
execution.
Short-term scheduler: The short-term scheduler (or CPU scheduler) selects among the processes that are
ready to execute, and allocates the CPU to one of them.
 The primary distinction between these two schedulers is the frequency of their execution: the
short-term scheduler executes very frequently, while the long-term scheduler executes much less frequently.
 The long-term scheduler controls the degree of multiprogramming (number of processes in memory).
 The long-term scheduler makes a careful selection. In general, most processes can be described as
either I/O-bound or CPU-bound.
 An I/O-bound process is one that spends more of its time doing I/O than it spends doing computations.
 A CPU-bound process is one that generates I/O requests very infrequently, using more of its time
doing computation than an I/O-bound process uses.
 It is important that the long-term scheduler select a good process mix of I/O-bound and CPU-bound
processes.
 If all processes are I/O-bound processes, the ready queue will almost always be empty, and the short-
term scheduler will have little to do.
 If all processes are CPU-bound processes, the I/O waiting queues will almost always be empty, devices will
go unused, and again the system will be unbalanced.
 The short-term scheduler must select a new process for the CPU quite frequently. A process may
execute for only a few milliseconds before waiting for an I/O request. Often, the short-term scheduler
executes at least once every 100 milliseconds.
 Because of the short duration of time between executions, the short-term scheduler must be very fast.
Medium-term scheduler: The medium-term scheduler is diagrammed below. The key idea in this scheduling is that
it can sometimes be advantageous to remove processes from memory (and from active contention for the
CPU), and thus to reduce the degree of multiprogramming. At some later time, the process can be reintroduced
into memory and its execution continued where it left off. This scheme is called swapping: the
process is swapped out, and later swapped in, by the medium-term scheduler.
 Swapping may be necessary to improve the process mix, or because a change in memory requirements
has overcommitted available memory, requiring memory to be freed up.
[Figure: Medium-term scheduling. Partially executed processes are swapped out of the ready queue to disk and later swapped back in; processes otherwise cycle between the ready queue, the CPU, and the I/O waiting queues.]
Long-term scheduling: The decision to add to the pool of processes to be executed.
Medium-term scheduling: The decision to add to the number of processes that are partially or fully in main
memory
Short-term scheduling: The decision as to which available process will be executed by the processor.

[Figure: Schedulers and process states. The long-term scheduler (LTS) admits processes from the new state to the ready state; the short-term scheduler (STS) dispatches them from ready to running; the medium-term scheduler (MTS) handles processes in the waiting state.]

2.6 Basic Concepts


Context switch
Switching the CPU to another process requires saving the state of the old process and loading the saved state for the
new process. This task is known as a context switch.
 Context-switch time is pure overhead, because the system does no useful work while switching.
 Context-switch times are highly dependent on hardware support.
 On hardware that provides multiple register sets, a context switch simply requires changing the pointer to the
current register set. If there are more active processes than register sets, the system resorts to copying register
data to and from memory, as before.
Scheduling is a fundamental operating-system function. Almost all computer resources are scheduled before use. The
CPU is, of course, one of the primary computer resources. Thus its scheduling is central to operating system design.
CPU-I/O Burst Cycle:
The success of CPU scheduling depends on the following observed property of processes: Process execution consists
of a cycle of CPU execution and I/O wait. Processes alternate back and forth between these two states. Process
execution begins with a CPU burst. That is followed by an I/O burst, which is followed by another CPU burst, then
another I/O burst, and so on. Eventually, the last CPU burst will end with a system request to terminate execution,
rather than with another I/O burst.
CPU Scheduler
Whenever the CPU becomes idle, the operating system must select one of the processes in the ready queue to be
executed. The selection is carried out by the short-term scheduler (CPU scheduler).
CPU scheduling decisions may take place under the following four circumstances:
1. When a process switches from the running state to the waiting state (for example, an I/O request, or an
invocation of wait for the termination of one of the child processes).
2. When a process switches from the running state to the ready state (for example, when an interrupt occurs).
3. When a process switches from the waiting state to the ready state (for example, on completion of I/O).
4. When a process terminates.
In cases 1 and 4 there is no choice in terms of scheduling, but there is a choice for cases 2 and 3.
When scheduling takes place only under circumstances 1 and 4, we say the scheduling scheme is nonpreemptive;
otherwise, the scheduling scheme is preemptive.
Under nonpreemptive scheduling, once the CPU has been allocated to a process, the process keeps the CPU until it
releases it, either by terminating or by switching to the waiting state.
Dispatcher:
 Dispatcher is the module that gives control of the CPU to the process selected by the short-term scheduler.
 This function involves:
 Switching context
 Switching to user mode
 Jumping to the proper location in the user program to restart that program
 The dispatcher should be as fast as possible, given that it is invoked during every process switch.
 The time it takes for the dispatcher to stop one process and start another running is known as the dispatch latency.
2.7 Scheduling Criteria

Different CPU scheduling algorithms have different properties and may favor one class of processes over another. In
choosing which algorithm to use in a particular situation, we must consider the properties of the various algorithms.
Different criteria have been suggested, given below, for comparing CPU scheduling algorithms.

 CPU utilization. We want to keep the CPU as busy as possible.


 Throughput. Number of processes that are completed per time unit is called throughput. For long processes,
this rate may be one process per hour; for short transactions, throughput might be 10 processes per second.
 Turnaround time. The interval from the time of submission of a process to the time of completion is the
turnaround time. Turnaround time is the sum of the periods spent waiting to get into memory, waiting in the
ready queue, executing on the CPU and doing I/O.
 Waiting time. Waiting time is the sum of the periods spent waiting in the ready queue.
 Response time. Response time is the amount of time it takes to start responding, but not the time that it takes
to output that response.
It is desirable to maximize CPU utilization and throughput, and to minimize turnaround time, waiting time, and
response time. For a process that does no I/O, turnaround time = waiting time + CPU burst time; this is how the
tables in the examples below are computed.
2.8 Scheduling Algorithms
 First come, first served: The most straightforward approach and may be referred to as first in,
first out; it simply does what the name suggests.
 Round robin: Also known as time slicing, since each task is given a certain amount of time to use
resources. This is still on a first-come-first-served basis.
 Shortest remaining time first: The task which needs the least amount of time to finish is given
priority.
 Priority: Tasks are assigned priorities and are served depending on that priority. This can lead to
starvation of the least important tasks, as they are always preempted by more important ones.

1. First-Come, First-Served Scheduling (FCFS)

 This is the simplest scheduling algorithm. In this scheme, the process that requests the CPU first is allocated
the CPU first.

 The FCFS algorithm is implemented using a FIFO queue.
 When a process enters the ready queue, its PCB is linked onto the tail of the queue. When the CPU is free, it is
allocated to the process at the head of the queue.
 The code for the FCFS scheduling is simple to write and understand.
 The average waiting time under the FCFS policy is often quite long.
 FCFS is non-preemptive. Once the CPU has been allocated to a process, that process keeps the CPU until it
releases it, either by terminating or by requesting I/O.
 FCFS algorithm is particularly troublesome for time-sharing systems, where it is important that each user get a
share of the CPU at regular intervals.
Example 1: Consider the following set of processes that arrive at time 0, with the length of the CPU-burst time given in
milliseconds.
Process    : P1 P2 P3
Burst time : 24 3  3

If the processes arrive in the order P1, P2, P3 and are served in FCFS order, then the waiting, turnaround, and
response times are as given below.

Gantt chart: | P1 | P2 | P3 |
             0    24   27   30

Process | Arrival time | Burst time | Response time | Waiting time | Turnaround time
P1      | 0            | 24         | 0             | 0            | 24
P2      | 0            | 3          | 24            | 24           | 27
P3      | 0            | 3          | 27            | 27           | 30
Total waiting time = 51

The average waiting time of a process = 51/3 = 17 milliseconds.

If the processes arrive in the order P2, P3, P1 and are served in FCFS order, then the waiting, turnaround, and
response times are as given below.

Gantt chart: | P2 | P3 | P1 |
             0    3    6    30

Process | Arrival time | Burst time | Response time | Waiting time | Turnaround time
P2      | 0            | 3          | 0             | 0            | 3
P3      | 0            | 3          | 3             | 3            | 6
P1      | 0            | 24         | 6             | 6            | 30
Total waiting time = 9

The average waiting time of a process = 9/3 = 3 milliseconds.

 The average waiting time under the FCFS policy is in general not minimal, and it may vary substantially if the
processes' CPU-burst times vary greatly.

Example 2: For the following data, perform the same analysis as in the previous problem.

Process      : P1 P2 P3 P4 P5
Arrival time : 0  2  4  6  8
Burst time   : 3  6  4  5  2
Gantt chart: | P1 | P2 | P3 | P4 | P5 |
             0    3    9    13   18   20

Process | Arrival time | Burst time | Response time | Waiting time | Turnaround time
P1      | 0            | 3          | 0             | 0            | 3
P2      | 2            | 6          | 1             | 1            | 7
P3      | 4            | 4          | 5             | 5            | 9
P4      | 6            | 5          | 7             | 7            | 12
P5      | 8            | 2          | 10            | 10           | 12
Total waiting time = 23

The average waiting time of a process = 23/5 = 4.6 milliseconds.
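The arithmetic in these tables is mechanical, so it is easy to check with a short program. A sketch for nonpreemptive FCFS using the Example 2 data (processes assumed already sorted by arrival time):

    #include <stdio.h>

    int main(void)
    {
        /* Example 2 data: arrival and burst times in milliseconds. */
        int arrival[] = {0, 2, 4, 6, 8};
        int burst[]   = {3, 6, 4, 5, 2};
        int n = 5, clock = 0, total_wait = 0;

        for (int i = 0; i < n; i++) {
            if (clock < arrival[i])        /* CPU idle until the process arrives */
                clock = arrival[i];
            int wait = clock - arrival[i]; /* time spent in the ready queue */
            clock += burst[i];             /* run to completion */
            printf("P%d: wait=%d turnaround=%d\n",
                   i + 1, wait, clock - arrival[i]);
            total_wait += wait;
        }
        printf("average waiting time = %.1f ms\n", (double)total_wait / n);
        return 0;
    }

Running it reproduces the table above: waits of 0, 1, 5, 7, and 10 ms, for an average of 4.6 ms.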

2. Shortest-Job-First Scheduling (SJF)

 This algorithm associates with each process the length of its next CPU burst. When the CPU is
available, it is assigned to the process that has the smallest next CPU burst. If two processes have next CPU
bursts of the same length, FCFS scheduling is used to break the tie.
Example 1: Consider the following set of processes, with the length of the CPU burst time given in milliseconds.
Process : P1 P2 P3 P4
Arrival time : 0 0 0 0
Burst time : 6 8 7 3
Find the average waiting time, response time and show the schedule using Gantt chart.
Gantt chart: | P4 | P1 | P3 | P2 |
             0    3    9    16   24

Process | Arrival time | Burst time | Response time | Waiting time | Turnaround time
P1      | 0            | 6          | 3             | 3            | 9
P2      | 0            | 8          | 16            | 16           | 24
P3      | 0            | 7          | 9             | 9            | 16
P4      | 0            | 3          | 0             | 0            | 3
Total waiting time = 28
The average waiting time of a process = 28/4 = 7 milliseconds.
 If we were using the FCFS scheduling scheme, the average waiting time would be 10.25
milliseconds.
Example 2: Consider the following set of processes, with the length of the CPU burst time given in milliseconds.
Process      : P1 P2 P3 P4 P5
Arrival time : 0  2  4  6  8
Burst time   : 3  6  4  5  2
Find the average waiting time and response times using Gantt charts.
Gantt chart: | P1 | P2 | P5 | P3 | P4 |
             0    3    9    11   15   20

Process | Arrival time | Burst time | Response time | Waiting time | Turnaround time
P1      | 0            | 3          | 0             | 0            | 3
P2      | 2            | 6          | 1             | 1            | 7
P3      | 4            | 4          | 7             | 7            | 11
P4      | 6            | 5          | 9             | 9            | 14
P5      | 8            | 2          | 1             | 1            | 3
Total waiting time = 18

Average waiting time of a process = 18/5 = 3.6 milliseconds.

Example 3 (preemptive SJF): Consider the following set of processes, with the length of the CPU burst time given in milliseconds.
Process      : P1 P2 P3 P4
Arrival time : 0  1  2  3
Burst time   : 8  4  9  5
Find the average waiting time and response times using Gantt charts.
Gantt chart: | P1 | P2 | P4 | P1 | P3 |
             0    1    5    10   17   26
Process | Arrival time | Burst time | Response time | Waiting time | Turnaround time
P1      | 0            | 8          | 0             | 9            | 17
P2      | 1            | 4          | 0             | 0            | 4
P3      | 2            | 9          | 15            | 15           | 24
P4      | 3            | 5          | 2             | 2            | 7
Total waiting time = 26

Average waiting time of a process = 26/4 = 6.5 milliseconds.

 In this example, P1 is preempted because the next arrived process P2 has 4 milliseconds of
burst time, which is less than the remaining time for process P1.
 A non-preemptive SJF scheduling would result in an average waiting time of 7.75 milliseconds.

 The SJF scheduling algorithm is provably optimal, in that it gives the minimum average waiting time for a
given set of processes.
 Although the SJF algorithm is optimal, it cannot be implemented at the level of short-term CPU
scheduling: there is no way to know the length of the next CPU burst.
 One approach is to try to approximate SJF scheduling. We may not know the length of the next CPU burst, but we
may be able to predict its value, commonly as an exponential average of the measured lengths of previous bursts:
τ(n+1) = α·t(n) + (1 - α)·τ(n), where t(n) is the length of the nth burst, τ(n) is the past prediction, and 0 ≤ α ≤ 1.
By computing such an approximation of the length of the next CPU burst, we can pick the process with the
shortest predicted CPU burst.
 The SJF algorithm is either preemptive or nonpreemptive. Preemptive SJF scheduling is called shortest-
remaining-time-first scheduling.
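A nonpreemptive SJF pass can be sketched the same way as the FCFS check above: whenever the CPU is free, pick the shortest job that has already arrived. With the Example 2 data this reproduces the table above (average waiting time 3.6 ms):

    #include <stdio.h>

    int main(void)
    {
        /* SJF Example 2 data: arrival and burst times in milliseconds. */
        int arrival[] = {0, 2, 4, 6, 8};
        int burst[]   = {3, 6, 4, 5, 2};
        int done[5] = {0}, n = 5, clock = 0, total_wait = 0, finished = 0;

        while (finished < n) {
            int pick = -1;                 /* shortest arrived, unfinished job */
            for (int i = 0; i < n; i++)
                if (!done[i] && arrival[i] <= clock &&
                    (pick < 0 || burst[i] < burst[pick]))
                    pick = i;
            if (pick < 0) { clock++; continue; }   /* nothing has arrived: idle */
            total_wait += clock - arrival[pick];   /* waited since arrival */
            clock += burst[pick];                  /* run to completion */
            done[pick] = 1;
            finished++;
            printf("P%d finishes at %d\n", pick + 1, clock);
        }
        printf("average waiting time = %.1f ms\n", (double)total_wait / n);
        return 0;
    }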

3. Priority Scheduling
 In priority scheduling algorithm, a priority is associated with each process, and the CPU is allocated to the
process with the highest priority. Equal priority processes are scheduled in FCFS order.
 Priorities are generally some fixed range of numbers, such as 0 to 7, or 0 to 1023. However, there is no general
agreement on whether 0 is the highest or lowest priority: some systems use low numbers to represent low
priority; others use low numbers for high priority.
 In this algorithm we assume that low numbers represent high priority.

Example 1: Consider the following set of processes, assumed to have arrived at time 0, in the order P1, P2, P3, P4, P5,
with the length of the CPU-burst time given in milliseconds. Find the average waiting time and turnaround time using
Gantt charts.
Process : P1 P2 P3 P4 P5
Burst time : 10 1 2 1 5
Priority : 3 1 3 4 2
Solution:
Gantt chart: | P2 | P5 | P1 | P3 | P4 |
             0    1    6    16   18   19

Process | Arrival time | Burst time | Priority | Response time | Waiting time | Turnaround time
P1      | 0            | 10         | 3        | 6             | 6            | 16
P2      | 0            | 1          | 1        | 0             | 0            | 1
P3      | 0            | 2          | 3        | 16            | 16           | 18
P4      | 0            | 1          | 4        | 18            | 18           | 19
P5      | 0            | 5          | 2        | 1             | 1            | 6
Total waiting time = 41

Average waiting time = 41/5 = 8.2 milliseconds.

Example 2 (preemptive priority): Consider the following set of processes, in the order P1, P2, P3, P4, with the length of
the CPU-burst time given in milliseconds, together with their priorities. Find the average waiting time and turnaround
time using Gantt charts.
Process      : P1 P2 P3 P4
Arrival time : 0  2  4  6
Burst time   : 8  4  9  5
Priority     : 3  1  4  2
Gantt chart: | P1 | P2 | P4 | P1 | P3 |
             0    2    6    11   17   26

Process | Arrival time | Burst time | Priority | Response time | Waiting time | Turnaround time
P1      | 0            | 8          | 3        | 0             | 9            | 17
P2      | 2            | 4          | 1        | 0             | 0            | 4
P3      | 4            | 9          | 4        | 13            | 13           | 22
P4      | 6            | 5          | 2        | 0             | 0            | 5
Total waiting time = 22

Average waiting time of a process = 22/4 = 5.5 milliseconds.


 Priorities can be defined either internally or externally.
 Priorities scheduling can be either preemptive or nonpreemptive;
 A major problem with priority scheduling is indefinite blocking or starvation. A process that is ready to run
but lacking the CPU can be considered blocked, waiting for the CPU. In a heavily loaded computer system, a
steady stream of high-priority processes can prevent a low-priority process from ever getting CPU.
 A solution to the indefinite blockage of low-priority processes is aging.
 Aging is a technique of gradually increasing the priority of processes that wait in the system for a long time.
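Aging can be sketched as a periodic pass over the ready queue that boosts anything that has waited too long. The structure and threshold below are illustrative only (lower number = higher priority, as assumed above):

    #define AGING_THRESHOLD 100   /* ticks a process may wait before a boost */

    /* Hypothetical ready-queue entry. */
    struct job {
        int priority;      /* lower number = higher priority        */
        int waited_ticks;  /* time spent waiting in the ready queue */
    };

    /* Called periodically by the scheduler (illustrative only). */
    void age_ready_queue(struct job *q, int n)
    {
        for (int i = 0; i < n; i++) {
            q[i].waited_ticks++;
            if (q[i].waited_ticks >= AGING_THRESHOLD && q[i].priority > 0) {
                q[i].priority--;          /* raise the priority one step */
                q[i].waited_ticks = 0;    /* restart the aging clock     */
            }
        }
    }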
4. Round-Robin Scheduling
 The round-robin (RR) scheduling algorithm is designed especially for time-sharing systems.
 A small unit of time, called a time quantum, or time slice, is defined. The ready queue is treated as a circular
queue. The CPU scheduler goes around the ready queue, allocating the CPU to each process for a time interval
of up to 1-time quantum.
 To implement the RR scheduling, we keep the ready queue as FIFO queue of processes. New processes are
added to the tail of the ready queue. The CPU scheduler picks the first processes from the ready queue, sets a
timer to interrupt after 1-time quantum, and dispatches the process.
 The average waiting time under the RR policy, however, is often quite long.
Example 1: Consider the following set of processes, assumed to have arrived at time 0, in the order P1, P2, P3, P4,
P5, with the length of the CPU-burst time given in milliseconds. Find the average waiting time and turnaround time
using Gantt charts for RR scheduling (the time quantum is 3 milliseconds).
Process : P1 P2 P3 P4 P5
Burst time : 10 1 2 1 5
Gantt chart: | P1 | P2 | P3 | P4 | P5 | P1 | P5 | P1 | P1 |
             0    3    4    6    7    10   13   15   18   19

Process | Arrival time | Burst time | Response time | Waiting time | Turnaround time
P1      | 0            | 10         | 0             | 9            | 19
P2      | 0            | 1          | 3             | 3            | 4
P3      | 0            | 2          | 4             | 4            | 6
P4      | 0            | 1          | 6             | 6            | 7
P5      | 0            | 5          | 7             | 10           | 15
Total waiting time = 32

Average waiting time of a process = 32/5 = 6.4 milliseconds.

 RR scheduling is a preemptive scheduling.


 The performance of RR scheduling depends heavily on the time quantum.
 If the time quantum is very large, the RR policy is the same as FCFS.
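The Example 1 schedule can be reproduced with a small simulation that cycles over the processes, giving each at most one quantum per pass. (A strict implementation would requeue a preempted process at the tail of a real queue; with all arrivals at time 0, the simple pass order happens to coincide.)

    #include <stdio.h>

    #define QUANTUM 3

    int main(void)
    {
        /* RR Example 1 data: bursts in ms, all arriving at time 0. */
        int burst[]  = {10, 1, 2, 1, 5};
        int remain[] = {10, 1, 2, 1, 5};
        int finish[5], n = 5, clock = 0, left = n;

        while (left > 0) {
            for (int i = 0; i < n; i++) {      /* one pass of the circular queue */
                if (remain[i] == 0)
                    continue;
                int slice = remain[i] < QUANTUM ? remain[i] : QUANTUM;
                clock += slice;                /* run for one time slice */
                remain[i] -= slice;
                if (remain[i] == 0) {
                    finish[i] = clock;         /* record the completion time */
                    left--;
                }
            }
        }
        int total_wait = 0;
        for (int i = 0; i < n; i++) {
            int wait = finish[i] - burst[i];   /* arrival time is 0 for all */
            printf("P%d: turnaround=%d wait=%d\n", i + 1, finish[i], wait);
            total_wait += wait;
        }
        printf("average waiting time = %.1f ms\n", (double)total_wait / n);
        return 0;
    }

This prints the corrected figures above: total waiting time 32 ms, average 6.4 ms.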
2.9 Algorithm Evaluation
 How do we select a CPU scheduling algorithm for a particular system?
 To select an algorithm, we must first define the relative importance of different measures, as mentioned below.
 Maximize CPU utilization under the constraint that the maximum response time is 1 second.
 Maximize throughput such that turnaround time is linearly proportional to total execution time.
Once the selection criteria have been defined, we want to evaluate the various algorithms under consideration. Evaluation methods include:
 Deterministic modeling
 Queueing models.
 Simulations.
 Implementations.
Assignments:
1. Consider the following set of processes, with the length of the CPU-burst time given in milliseconds:
Process : P1 P2 P3 P4 P5
Arrival time : 0 1 2 3 4
Burst time : 10 1 2 1 5
Priority : 3 1 3 4 2
a. Draw four Gantt charts illustrating the execution of these processes using FCFS, SJF, nonpreemptive priority (a
small priority number implies a higher priority), and RR (quantum = 1) scheduling.
b. What is the turnaround time of each process for each of the scheduling algorithms in part a?
c. What is the waiting time of each process for each of the scheduling algorithms in part a?
d. Which of the schedules in part a results in the minimal average waiting time (over all processes)?
2. Suppose that the following processes arrive for execution at the times indicated. Each process will run for the listed
amount of time. In answering the questions, use nonpreemptive scheduling and base all decisions on the information
you have at the time the decision must be made.

Process : P1 P2 P3
Arrival time : 0.0 0.4 1.0
Burst time : 8 4 1
1. What is the average turnaround time for these processes with the FCFS scheduling algorithm?
2. What is the average turnaround time for these processes with the SJF scheduling algorithm?
3. The SJF algorithm is supposed to improve performance, but notice that we chose to run process P1 at time 0
because we did not know that two shorter processes would arrive soon. Compute what the average turnaround
time would be if the CPU were left idle for the first 1 time unit and SJF scheduling were then used. Remember that
processes P1 and P2 are waiting during this idle time, so their waiting times may increase. This algorithm could
be called future-knowledge scheduling.
