Chapter Two OS
Stack: The process stack contains temporary data such as method/function parameters, return addresses, and local variables.
Heap: Memory that is dynamically allocated to the process during its run time.
Text: This includes the compiled program code, together with the current activity represented by the value of the program counter and the contents of the processor's registers.
Data: This section contains the global and static variables.
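As a quick illustration (the variable names below are hypothetical, chosen for this sketch), the following C program marks which section each kind of variable occupies:

    #include <stdlib.h>

    int counter = 0;            /* data section: global variable */
    static int limit = 100;     /* data section: static variable */

    int add(int a, int b)       /* the compiled body of add() lives in the text section */
    {
        int sum = a + b;        /* stack: parameters a, b and local variable sum */
        return sum;
    }

    int main(void)
    {
        int *buf = malloc(64 * sizeof(int));   /* heap: allocated at run time */
        counter = add(limit, 1);
        free(buf);
        return 0;
    }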
Processes may be of two types:
I/O-bound processes: spend more time doing I/O than computations and have many short CPU bursts. Word processors and text editors are good examples of such processes.
CPU-bound processes: spend more time doing computations than I/O and have a few very long CPU bursts.
2.1.2 Process Control Block
Each process is represented in the operating system by a process control block (PCB) also called a task
control block. It contains many pieces of information associated with a specific process, including these:
Process state: The state may be new, ready, running, waiting, halted, and so on.
Program counter: The counter indicates the address of the next instruction to
be executed for this process.
CPU registers: The registers vary in number and type, depending on the
computer architecture. They include accumulators, index registers, stack
pointers, and general-purpose registers, plus any condition-code
information. Along with the program counter, this state information must be
saved when an interrupt occurs, to allow the process to be continued
correctly afterward.
CPU scheduling information: This information includes a process priority, pointers to scheduling queues, and any other scheduling parameters.
Memory management information: This may include the values of the base and limit registers and the page tables or segment tables, depending on the memory system used by the operating system.
Accounting information: This information includes the amount of CPU and real time used, time
limits, account numbers, job or process numbers, and so on.
I/O status information: The information includes the list of I/O devices allocated to this process, a
list of open files and so on.
The PCB serves as the repository for any information that may vary from process to process.
The loader/linker sets flags and registers when a process is created. If that process is suspended, the contents of the registers are saved on a stack and a pointer to the particular stack frame is stored in the PCB. With this technique, the hardware state can be restored so that the process can be scheduled to run again.
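As a minimal sketch of the idea (the field names and sizes below are illustrative, not taken from any real kernel), a PCB could be declared in C as follows:

    #include <stdint.h>

    enum proc_state { NEW, READY, RUNNING, WAITING, TERMINATED };

    struct pcb {
        int             pid;              /* process number                      */
        enum proc_state state;            /* new, ready, running, waiting, ...   */
        uint64_t        program_counter;  /* address of the next instruction     */
        uint64_t        registers[16];    /* saved CPU registers                 */
        int             priority;         /* CPU-scheduling information          */
        struct pcb     *next;             /* link into a scheduling queue        */
        uint64_t        base, limit;      /* memory-management registers         */
        uint64_t        cpu_time_used;    /* accounting information              */
        int             open_files[16];   /* I/O status: open file descriptors   */
    };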
2.1.3 Operations on Processes
The processes in the system can execute concurrently, and must be created and deleted dynamically. Thus,
the operating system must provide a mechanism for process creation and termination.
Process Creation:
A process may create several new processes during the course of execution.
The creating process is called the parent process, whereas the new processes are called the children of that process. Each of these processes may in turn create other processes, forming a tree of processes.
When a process creates a new process, two possibilities exist in terms of execution:
The parent process continues to execute concurrently with its children.
The parent waits until some or all of its children have terminated.
There are also two possibilities in terms of the address space of the new process:
The child process is a duplicate of the parent process.
The child process has a program loaded into it.
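In UNIX these two possibilities correspond to the fork and exec system calls: fork creates a child that is a duplicate of the parent, and exec loads a new program into it. A minimal sketch, assuming a POSIX system:

    #include <stdio.h>
    #include <stdlib.h>
    #include <unistd.h>
    #include <sys/wait.h>

    int main(void)
    {
        pid_t pid = fork();            /* child starts as a duplicate of the parent */

        if (pid < 0) {
            perror("fork");            /* creation failed */
            exit(1);
        } else if (pid == 0) {
            execlp("ls", "ls", "-l", (char *)NULL);  /* load a new program into the child */
            perror("execlp");          /* reached only if exec fails */
            exit(1);
        } else {
            wait(NULL);                /* parent waits until the child terminates */
            printf("child %d finished\n", (int)pid);
        }
        return 0;
    }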
Process Termination:
A process terminates when it finishes executing its last statement and asks the operating system to
delete it by using the exit system call. At that point, the process may return data (output) to its parent process
(via the wait system call).
All of the resources of the process, including physical and virtual memory, open files, and I/O buffers,
are deallocated by the operating system.
A parent may terminate the execution of one of its children for a variety of reasons, such as:
The child has exceeded its usage of some of the resources it has been allocated.
The task assigned to the child is no longer required.
The parent is exiting, and the operating system does not allow a child to continue if its parent
terminates.
In UNIX, a process may terminate by using the exit system call, and its parent process may wait for that
event by using the wait system call.
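A small sketch of that interaction: the child returns a value through exit, and the parent collects it with wait.

    #include <stdio.h>
    #include <stdlib.h>
    #include <unistd.h>
    #include <sys/wait.h>

    int main(void)
    {
        if (fork() == 0)
            exit(42);                   /* child: return data to the parent */

        int status;
        wait(&status);                  /* parent: block until the child exits */
        if (WIFEXITED(status))
            printf("child exit code: %d\n", WEXITSTATUS(status));
        return 0;
    }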
Process Suspension
The main reasons for process suspension are:
Swapping: the operating system needs to release sufficient main memory to bring in a process that is
ready to execute.
Other OS reason: the operating system may suspend a background or utility process or a process
that is suspected of causing a problem.
Interactive user request: a user may suspend execution of a program, for example for debugging purposes.
Timing: a process may be executed periodically.
Parent process request: a parent process may wish to suspend execution of a descendent to examine
or modify the suspended process.
2.2 Thread
A thread is a flow of execution through the process code, with its own program counter that
keeps track of which instruction to execute next, system registers which hold its current
working variables, and a stack which contains the execution history. A thread shares with its peer threads information such as the code segment, the data segment, and open files. When one thread alters a data segment memory item, all other threads see the change.
A thread is also called a lightweight process. Threads provide a way to improve application performance through parallelism: they are a software approach to improving operating-system performance by reducing the overhead of full processes. Each thread belongs to exactly one process, and no thread can exist outside a process.
Each thread represents a separate flow of control. Threads have been successfully used in implementing network servers and web servers. They also provide a suitable foundation for
parallel execution of applications on shared memory multiprocessors. The following figure
shows the working of a single-threaded and a multithreaded process.
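A minimal POSIX-threads sketch of a multithreaded process (compile with -pthread): both threads run in the same address space, so each one's updates to the shared counter are visible to the other, and a mutex keeps the shared data consistent.

    #include <stdio.h>
    #include <pthread.h>

    static int shared_counter = 0;                /* data segment: visible to both threads */
    static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;

    static void *worker(void *arg)
    {
        (void)arg;
        for (int i = 0; i < 100000; i++) {
            pthread_mutex_lock(&lock);            /* serialize updates to shared data */
            shared_counter++;
            pthread_mutex_unlock(&lock);
        }
        return NULL;
    }

    int main(void)
    {
        pthread_t t1, t2;
        pthread_create(&t1, NULL, worker, NULL);  /* two flows of control in one process */
        pthread_create(&t2, NULL, worker, NULL);
        pthread_join(t1, NULL);
        pthread_join(t2, NULL);
        printf("counter = %d\n", shared_counter); /* prints 200000 */
        return 0;
    }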
2.2.1 Difference between Process and Thread
1. Process: heavyweight and resource intensive. Thread: lightweight, taking fewer resources than a process.
2. Process: switching needs interaction with the operating system. Thread: switching does not need operating-system interaction.
3. Process: in multiple processing environments, each process executes the same code but has its own memory and file resources. Thread: all threads of a process can share the same set of open files and child processes.
4. Process: if a process is blocked, no work can be done in it until it is unblocked. Thread: while one thread is blocked and waiting, a second thread in the same task can run.
5. Process: multiple processes without threads use more resources. Thread: multithreaded processes use fewer resources.
6. Process: each process operates independently of the others. Thread: one thread can read, write, or change another thread's data.
2.2.4 User Level Threads
In this case, the thread management kernel is not aware of the existence of threads. The thread library contains code for creating and destroying threads, for passing messages and data between threads, for scheduling thread execution, and for saving and restoring thread contexts. The application starts with a single thread.
Advantages
Thread switching does not require Kernel mode privileges.
2.2.5 Kernel Level Threads
In this case, thread management is done by the kernel rather than by a user-level thread library.
Advantages
Kernel can simultaneously schedule multiple threads from the same process on multiple processors.
If one thread in a process is blocked, the Kernel can schedule another thread of the same
process.
Kernel routines themselves can be multithreaded.
Disadvantages
Kernel threads are generally slower to create and manage than user threads.
Transfer of control from one thread to another within the same process requires a mode
switch to the Kernel.
2.2.6 Difference between User-Level & Kernel-Level Thread
1. User-level threads are faster to create and manage. Kernel-level threads are slower to create and manage.
2. User-level threads are implemented by a thread library at the user level. Kernel-level threads are created and supported by the operating system itself.
3. User-level threads are generic and can run on any operating system. Kernel-level threads are specific to the operating system.
4. Multithreaded applications using user-level threads cannot take advantage of multiprocessing. With kernel-level threads, kernel routines themselves can be multithreaded.
In the many-to-many multithreading model, many user-level threads are multiplexed onto an equal or smaller number of kernel threads, which can run in parallel on a multiprocessor machine. This model provides the best accuracy on concurrency: when a thread performs a blocking system call, the kernel can schedule another thread for execution.
2.3.2 Inter Process Communication (IPC)
The concurrent processes executing in the operating system may be either independent or cooperating
processes. Hence, a process can be of two types:
Independent process.
Co-operating process.
Independent processes: A process is independent if it cannot affect or be affected by the other processes executing in the system. That is, any process that does not share data with any other process is independent.
Cooperating processes: A process is cooperating if it can affect or be affected by the other processes executing in the system. That is, any process that shares data with other processes is a cooperating process.
An independent process is not affected by the execution of other processes, while a cooperating process can be. Although one might expect independently running processes to execute most efficiently, in practice there are many situations in which cooperation increases computational speed, convenience, and modularity.
An environment that allows process cooperation is desirable for several reasons:
Information sharing: Since several users may be interested in the same piece of
information (for instance, a shared file), we must provide an environment to allow
concurrent access to these types of resources.
Computation speedup: If we want a particular task to run faster, we must break it into
subtasks, each of which will be executing in parallel with the others. Such a speedup can
be achieved only if the computer has multiple processing elements (such as CPUs or I/O channels).
Modularity: We may want to construct the system in a modular fashion, dividing the system functions into separate processes or threads.
Convenience: Even an individual user may have many tasks on which to work at one time. For instance, a user may be editing, printing, and compiling in parallel.
Cooperating processes require mechanisms that allow them to communicate with one another and to synchronize their actions.
Inter-process communication (IPC) is a mechanism that allows processes to communicate with each other and to synchronize their actions. The communication between these processes can be seen as a method of cooperation between them. Processes can communicate with each other in two basic ways:
Shared Memory
Message passing
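A minimal message-passing sketch, assuming a POSIX system: a pipe carries a message from the parent to its child (shared memory, for example via shm_open, would be the other route).

    #include <stdio.h>
    #include <string.h>
    #include <unistd.h>
    #include <sys/wait.h>

    int main(void)
    {
        int fd[2];
        if (pipe(fd) == -1)                     /* fd[0] is the read end, fd[1] the write end */
            return 1;

        if (fork() == 0) {                      /* child: the receiver */
            char buf[64];
            close(fd[1]);
            ssize_t n = read(fd[0], buf, sizeof(buf) - 1);
            buf[n > 0 ? n : 0] = '\0';
            printf("child received: %s\n", buf);
            _exit(0);
        }

        close(fd[0]);                           /* parent: the sender */
        const char *msg = "hello from parent";
        write(fd[1], msg, strlen(msg));
        close(fd[1]);                           /* signals EOF to the reader */
        wait(NULL);
        return 0;
    }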
[Figure: queueing diagram of process scheduling. A process waits in the ready queue until it is dispatched to the CPU; it leaves the CPU when its time slice expires (returning to the ready queue) or when it must wait for an interrupt, rejoining the ready queue once the interrupt occurs.]
A process continues this cycle until it terminates, at which time it is removed from all queues and has
its PCB and resources deallocated.
Schedulers: Scheduling in a system is done by the aptly named scheduler. A process migrates between the various scheduling queues throughout its lifetime, and the operating system must select processes from these queues in some fashion. The selection is carried out by the appropriate scheduler.
A scheduler is mainly concerned with three things:
Throughput: how many tasks it can finish from beginning to end per unit of time.
Latency (turnaround time): the time it takes to finish a task, from request or submission until completion, including any waiting time before it is served.
Response time: the time until the process or request first gets served; in short, the waiting time.
Scheduling is largely based on the factors mentioned above and varies with the system and with the preferences and objectives programmed into it. In modern computers such as PCs, with large amounts of processing power and other resources and the ability to multitask by running multiple threads or pipelines at once, scheduling is less of a bottleneck: most of the time processes and applications are given free rein with the extra resources, but the scheduler is still hard at work managing requests.
There are three types of schedulers:
Long-term scheduler
Short-term scheduler
Mid-term scheduler
Long-term scheduler: In a batch system, there are often more processes submitted than can be executed
immediately. These processes are spooled to a mass-storage device (disk), where they are kept for later
execution. The long-term scheduler selects processes from this pool and loads them into memory for execution.
Short-term scheduler: The short-term scheduler (or CPU scheduler) selects among the processes that are
ready to execute, and allocates the CPU to one of them.
The primary distinction between these two schedulers is the frequency of their execution: the short-term scheduler must run very often, while the long-term scheduler executes much less frequently.
The long-term scheduler controls the degree of multiprogramming (number of processes in memory).
The long-term scheduler makes a careful selection. In general most processes can be described as
either I/O bound or CPU bound.
An I/O-bound process is one that spends more of its time doing I/O than it spends doing computations.
A CPU-bound process is one that generates I/O requests very infrequently, using more of its time
doing computation than an I/O-bound process uses.
It is important that the long-term scheduler select a good process mix of I/O-bound and CPU-bound
processes.
If all processes are I/O-bound processes, the ready queue will almost always be empty, and the short-
term scheduler will have little to do.
If all processes are CPU-bound, the waiting queue will almost always be empty, devices will go unused, and again the system will be unbalanced.
The short-term scheduler must select a new process for the CPU quite frequently. A process may
execute for only a few milliseconds before waiting for an I/O request. Often, the short-term scheduler
executes at least once every 100 milliseconds.
Because of the short duration of time between executions, the short-term scheduler must be very fast.
Mid-term scheduler: The mid-term (medium-term) scheduler is diagrammed below. The key idea is that it can sometimes be advantageous to remove processes from memory (and from active contention for the CPU) and thus reduce the degree of multiprogramming. At some later time, the process can be reintroduced into memory and its execution continued where it left off. This scheme is called swapping: the process is swapped out, and later swapped in, by the medium-term scheduler.
Swapping may be necessary to improve the process mix, or because a change in memory requirements has overcommitted available memory, requiring memory to be freed up.
[Figure: medium-term scheduling. Partially executed, swapped-out processes are swapped in from disk to the ready queue and swapped out again as needed; processes move from the ready queue to the CPU and either end or join the I/O waiting queues to perform I/O.]
Long-term scheduling: The decision to add to the pool of processes to be executed.
Medium-term scheduling: The decision to add to the number of processes that are partially or fully in main memory.
Short-term scheduling: The decision as to which available process will be executed by the processor.
[Figure: process state diagram annotated with schedulers. The long-term scheduler (LTS) moves new processes into the ready state, the short-term scheduler (STS) dispatches ready processes to running, and the medium-term scheduler (MTS) moves processes between the ready/waiting states and secondary storage.]
CPU scheduling decisions may take place under the following four circumstances:
1. When a process switches from the running state to the waiting state (for example, as the result of an I/O request).
2. When a process switches from the running state to the ready state (for example, when an interrupt occurs).
3. When a process switches from the waiting state to the ready state (for example, at completion of I/O).
4. When a process terminates.
In case of 1 and 4 there is no choice in terms of scheduling, but there is a choice for 2 and 3.
When scheduling takes place only under circumstances 1 and 4, we say the scheduling scheme is nonpreemptive;
otherwise, the scheduling scheme is preemptive.
Under nonpreemptive scheduling, once the CPU has been allocated to a process, the process keeps the CPU until it releases it, either by terminating or by switching to the waiting state.
Dispatcher:
The dispatcher is the module that gives control of the CPU to the process selected by the short-term scheduler.
This function involves:
Switching context
Switching to user mode
Jumping to the proper location in the user program to restart that program
The dispatcher should be as fast as possible, given that it is invoked during every process switch. The time it takes for the dispatcher to stop one process and start another running is known as the dispatch latency.
2.3.5 Scheduling Criteria
Different CPU scheduling algorithms have different properties and may favor one class of processes over another. In
choosing which algorithm to use in a particular situation, we must consider the properties of the various algorithms.
Different criteria have been suggested for comparing CPU scheduling algorithms, among them CPU utilization, throughput, turnaround time, waiting time, and response time.
1. First-Come, First-Served (FCFS) Scheduling
This is the simplest scheduling algorithm. In this scheme, the process that requests the CPU first is allocated the CPU first.
The FCFS algorithm is implemented using a FIFO queue.
When a process enters the ready queue, its PCB is linked onto the tail of the queue. When the CPU is free, it is
allocated to the process at the head of the queue.
The code for the FCFS scheduling is simple to write and understand.
The average waiting time under the FCFS policy is often quite long.
FCFS is non-preemptive: once the CPU has been allocated to a process, that process keeps the CPU until it releases it, either by terminating or by requesting I/O.
The FCFS algorithm is particularly troublesome for time-sharing systems, where it is important that each user get a share of the CPU at regular intervals.
Example 1: Consider the following set of processes that arrive at time 0, with the length of the CPU-burst time given in milliseconds.
Process : P1 P2 P3
Burst time : 24 3 3
If the processes arrive in the order P1, P2, P3 and are served in FCFS order, the waiting and turnaround times are as follows.
Gantt chart is P1 P2 P3
0 24 27 30
Average waiting time = (0 + 24 + 27)/3 = 17 ms; average turnaround time = (24 + 27 + 30)/3 = 27 ms.
If the processes arrive in the order P2, P3, P1 and are served in FCFS order, the times change as follows.
Gantt chart is P2 P3 P1
0 3 6 30
Average waiting time = (6 + 0 + 3)/3 = 3 ms; average turnaround time = (30 + 3 + 6)/3 = 13 ms.
The average waiting time under the FCFS policy is in general not minimal, and it may vary substantially if the process CPU-burst times vary greatly.
Example 2: For the following data, perform the same analysis as in the previous problem.
Process : P1 P2 P3 P4 P5
Arrival time : 0 2 4 6 8
Burst time : 3 6 4 5 2
Gantt chart:
P1 P2 P3 P4 P5
0 3 9 13 18 20
Average waiting time = (0 + 1 + 5 + 7 + 10)/5 = 4.6 ms; average turnaround time = (3 + 7 + 9 + 12 + 12)/5 = 8.6 ms.
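A minimal sketch (written for these notes) that reproduces the FCFS numbers in Example 2: each process starts when its predecessor finishes, or at its own arrival time if the CPU has fallen idle.

    #include <stdio.h>

    int main(void)
    {
        int arrival[] = {0, 2, 4, 6, 8};            /* Example 2 data, in milliseconds */
        int burst[]   = {3, 6, 4, 5, 2};
        int n = 5, clock = 0;
        double total_wait = 0, total_tat = 0;

        for (int i = 0; i < n; i++) {               /* serve strictly in arrival order */
            if (clock < arrival[i])
                clock = arrival[i];                 /* CPU idles until the process arrives */
            int wait = clock - arrival[i];          /* time spent in the ready queue */
            clock += burst[i];                      /* non-preemptive: run to completion */
            int tat = clock - arrival[i];           /* turnaround = completion - arrival */
            total_wait += wait;
            total_tat  += tat;
            printf("P%d: waiting=%d turnaround=%d\n", i + 1, wait, tat);
        }
        printf("averages: waiting=%.1f turnaround=%.1f\n",
               total_wait / n, total_tat / n);      /* 4.6 and 8.6 for this data */
        return 0;
    }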
2. Shortest-Job-First (SJF) Scheduling
This algorithm associates with each process the length of that process's next CPU burst. When the CPU is available, it is assigned to the process that has the smallest next CPU burst. If two processes have next CPU bursts of the same length, FCFS scheduling is used to break the tie.
Example 1: Consider the following set of processes, with the length of the CPU burst time given in milliseconds.
Process : P1 P2 P3 P4
Arrival time : 0 0 0 0
Burst time : 6 8 7 3
Find the average waiting time and response time, and show the schedule using a Gantt chart.
Gantt chart:
P4 P1 P3 P2
0 3 9 16 24
Average waiting time = (3 + 16 + 9 + 0)/4 = 7 ms; average turnaround time = (9 + 24 + 16 + 3)/4 = 13 ms.
Example 3: Consider the following set of processes, with the length of the CPU burst time given in milliseconds.
Process : P1 P2 P3 P4
Arrival time : 0 1 2 3
Burst time : 8 4 9 5
Find the average waiting time and response times using Gantt charts.
Gantt chart:
P1 P2 P4 P1 P3
0 1 5 10 17 26
Process  Arrival time  Burst time  Response time  Waiting time  Turnaround time
P1       0             8           0              9             17
P2       1             4           0              0             4
P3       2             9           15             15            24
P4       3             5           2              2             7
All processes complete by time 26. Average waiting time = (9 + 0 + 15 + 2)/4 = 6.5 ms; average response time = (0 + 0 + 15 + 2)/4 = 4.25 ms.
In this example, P1 is preempted because the newly arrived process P2 has 4 milliseconds of burst time, which is less than the time remaining for P1. The preemptive schedule above gives an average waiting time of 6.5 milliseconds, whereas non-preemptive SJF scheduling would result in an average waiting time of 7.75 milliseconds.
The SJF scheduling algorithm is provably optimal, in that it gives the minimum average waiting time for a given set of processes.
Although the SJF algorithm is optimal, it cannot be implemented exactly at the level of short-term CPU scheduling: there is no way to know the length of the next CPU burst in advance.
One approach is to approximate SJF scheduling: we may not know the length of the next CPU burst, but we may be able to predict its value. By computing an approximation of the length of the next CPU burst, we can pick the process with the shortest predicted CPU burst.
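The usual prediction technique in the literature (exponential averaging, which these notes do not spell out) keeps a running estimate tau and blends it with each measured burst t: tau_next = alpha * t + (1 - alpha) * tau, where alpha (commonly 0.5) weights recent behaviour against history. A small sketch in C:

    /* Exponential-averaging prediction of the next CPU burst.
     * t     : length of the burst just measured
     * tau   : the previous prediction
     * alpha : weight given to the most recent burst (0 <= alpha <= 1)
     */
    double predict_next_burst(double t, double tau, double alpha)
    {
        return alpha * t + (1.0 - alpha) * tau;
    }

For instance, with alpha = 0.5 and a previous estimate of 10 ms, a measured burst of 6 ms gives 0.5 * 6 + 0.5 * 10 = 8 ms as the next prediction.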
The SJF algorithm may be either preemptive or nonpreemptive; preemptive SJF scheduling is called shortest-remaining-time-first scheduling.
3. Priority Scheduling
In priority scheduling algorithm, a priority is associated with each process, and the CPU is allocated to the
process with the highest priority. Equal priority processes are scheduled in FCFS order.
Priorities are generally drawn from some fixed range of numbers, such as 0 to 7 or 0 to 1023. However, there is no general agreement on whether 0 is the highest or the lowest priority: some systems use low numbers to represent low priority, others use low numbers for high priority.
In this algorithm we assume that low numbers represent high priority.
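A minimal non-preemptive sketch (written for these notes, using the data of Example 1 below): at each step the scheduler picks the ready process with the smallest priority number, and ties between equal priorities fall back to arrival order as the text prescribes.

    #include <stdio.h>

    int main(void)
    {
        int burst[]    = {10, 1, 2, 1, 5};          /* Example 1 data: all arrive at time 0 */
        int priority[] = { 3, 1, 3, 4, 2};          /* lower number = higher priority */
        int done[5] = {0}, n = 5, clock = 0;
        double total_wait = 0;

        for (int run = 0; run < n; run++) {
            int best = -1;
            for (int i = 0; i < n; i++)             /* pick the highest-priority ready process */
                if (!done[i] && (best < 0 || priority[i] < priority[best]))
                    best = i;                       /* strict < keeps FCFS order on ties */
            total_wait += clock;                    /* waiting time = start time (arrival 0) */
            printf("P%d runs from %d to %d\n", best + 1, clock, clock + burst[best]);
            clock += burst[best];
            done[best] = 1;
        }
        printf("average waiting time = %.1f ms\n", total_wait / n);  /* 8.2 for this data */
        return 0;
    }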
Example 1: Consider the following set of processes, assumed to have arrived at time 0 in the order P1, P2, P3, P4, P5, with the length of the CPU-burst time given in milliseconds. Find the average waiting time and turnaround time using Gantt charts.
Process : P1 P2 P3 P4 P5
Burst time : 10 1 2 1 5
Priority : 3 1 3 4 2
Solution:
Gantt chart: P2 P5 P1 P3 P4
0 1 6 16 18 19
Average waiting time = (6 + 0 + 16 + 18 + 1)/5 = 8.2 ms; average turnaround time = (16 + 1 + 18 + 19 + 6)/5 = 12 ms.
Example 2: Consider the following set of processes arriving in the order P1, P2, P3, P4, with the length of the CPU-burst time given in milliseconds and their priorities. Find the average waiting time and turnaround time using Gantt charts.
Process : P1 P2 P3 P4
Arrival time : 0 2 4 6
Burst time : 8 4 9 5
Priority : 3 1 4 2
Gantt chart (preemptive priority scheduling):
P1 P2 P4 P1 P3
0 2 6 11 17 26
Average waiting time = (9 + 0 + 13 + 0)/4 = 5.5 ms; average turnaround time = (17 + 4 + 22 + 5)/4 = 12 ms.
The following Gantt chart shows the same five processes from Example 1 above (burst times 10, 1, 2, 1, 5) scheduled with the Round Robin algorithm using a time quantum of 3 milliseconds:
P1 P2 P3 P4 P5 P1 P5 P1 P1
0 3 4 6 7 10 13 15 18 19
Exercise: Consider the following set of processes, with arrival times and CPU-burst lengths given in milliseconds.
Process : P1 P2 P3
Arrival time : 0.0 0.4 1.0
Burst time : 8 4 1
1. What is the average turnaround time for these processes with the FCFS scheduling algorithm?
2. What is the average turnaround time for these processes with the SJF scheduling algorithm?
3. The SJF algorithm is supposed to improve performance, but notice that we chose to run process P1 at time 0 because we did not know that two shorter processes would arrive soon. Compute the average turnaround time if the CPU is left idle for the first 1 unit of time and SJF scheduling is then used. Remember that processes P1 and P2 are waiting during this idle time, so their waiting times increase. This algorithm could be called future-knowledge scheduling.