OS - Unit 3
Uploaded by snehatumaskar

UNIT 3

PROCESS MANAGEMENT

Process management means planning and administering the activities – design, control, and improvement – necessary to achieve a high level of performance.

The OS must allocate resources to processes, enable them to share and exchange information, protect the resources of each process from the others, and enable synchronization among them.

For this, the OS must maintain a data structure for each process that describes the state and resource ownership of that process.

Process :
A process is a program in execution. A process is more than the program code, which is
sometimes known as the text section. It also includes the current activity, as represented by
the value of the program counter and the contents of the processor’s registers. A process
generally also includes the process stack, which contains temporary data (such as function
parameters, return addresses, and local variables), and a data section, which contains global
variables. A process may also include a heap, which is memory that is dynamically
allocated during process run time.

We emphasize that a program by itself is not a process; a program is a passive
entity, such as a file containing a list of instructions stored on disk (often called an
executable file), whereas a process is an active entity, with a program counter specifying
the next instruction to execute and a set of associated resources. A program becomes a
process when an executable file is loaded into memory. Two common techniques for
loading executable files are double-clicking an icon representing the executable file and
entering the name of the executable file on the command line (as in prog.exe or a.out).

Although two processes may be associated with the same program, they are nevertheless
considered two separate execution sequences. For instance, several users may be running
different copies of the mail program, or the same user may invoke many copies of the Web
browser program. Each of these is a separate process, and although the text sections are
equivalent, the data, heap, and stack sections vary.

Process State:
As a process executes, it changes state. The state of a process is defined in part by the
current activity of that process. Each process may be in one of the following states:

1. New: The process is being created.


2. Ready: The process is waiting to be assigned to a processor.
3. Running: Instructions are being executed.
4. Waiting: The process is waiting for some event to occur (such as an I/O completion
or reception of a signal).
5. Terminated: The process has finished execution.
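The five states and their legal transitions can be sketched as a small state machine. This is only an illustrative sketch of the textbook model, not how any real kernel stores states:

```python
from enum import Enum

class State(Enum):
    NEW = "new"
    READY = "ready"
    RUNNING = "running"
    WAITING = "waiting"
    TERMINATED = "terminated"

# Legal transitions in the five-state model described above.
TRANSITIONS = {
    State.NEW: {State.READY},                # admitted by the long-term scheduler
    State.READY: {State.RUNNING},            # dispatched by the CPU scheduler
    State.RUNNING: {State.READY,             # preempted (interrupt / time slice)
                    State.WAITING,           # blocked on I/O or an event
                    State.TERMINATED},       # exit
    State.WAITING: {State.READY},            # awaited I/O or event completed
    State.TERMINATED: set(),                 # no transitions out
}

def can_transition(src: State, dst: State) -> bool:
    """Return True if the five-state model allows src -> dst."""
    return dst in TRANSITIONS[src]
```

Note, for example, that a waiting process does not go straight back to running: its I/O completion moves it to ready, and only the scheduler dispatches it again.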

Process Control Block :


A Process Control Block (PCB) is a data structure that contains the information related to a process. The process control block is also known as a task control block, an entry in the process table, etc.

It is very important for process management, as the data structures for processes are maintained in terms of the PCB. Taken together, the PCBs also describe the current state of the operating system.

Structure of the Process Control Block


The process control block stores many data items that are needed for efficient process management.
Some of these data items are explained with the help of the given diagram −
The following are the data items −

Process State

This specifies the process's current state, i.e. new, ready, running, waiting, or terminated.

Process Number

This is the unique number (the process ID) by which the particular process is identified.

Program Counter

This contains the address of the next instruction that needs to be executed in the process.

Registers

This specifies the registers that are used by the process. They may include accumulators,
index registers, stack pointers, general purpose registers etc.
List of Open Files

These are the different files that are associated with the process.

CPU Scheduling Information

The process priority, pointers to scheduling queues, etc. make up the CPU scheduling information contained in the PCB. This may also include any other scheduling parameters.

Memory Management Information

The memory management information includes the page tables or the segment tables
depending on the memory system used. It also contains the value of the base registers, limit
registers etc.

I/O Status Information

This information includes the list of I/O devices used by the process, the list of files etc.

Accounting information

The time limits, account numbers, amount of CPU used, process numbers etc. are all a part of
the PCB accounting information.

Location of the Process Control Block

The process control block is kept in a memory area that is protected from the normal user
access. This is done because it contains important process information. Some of the operating
systems place the PCB at the beginning of the kernel stack for the process as it is a safe
location.
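The PCB fields listed above can be collected into one record. The following sketch uses a Python dataclass purely as an illustration; real PCBs are C structures inside the kernel, and the field names here are assumptions chosen to mirror the list above:

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class PCB:
    """Illustrative process control block holding the fields described above."""
    pid: int                                      # process number
    state: str = "new"                            # process state
    program_counter: int = 0                      # address of next instruction
    registers: dict = field(default_factory=dict) # saved register contents
    priority: int = 0                             # CPU scheduling information
    open_files: List[str] = field(default_factory=list)  # list of open files
    memory_limits: tuple = (0, 0)                 # base and limit register values
    cpu_time_used: float = 0.0                    # accounting information
```

Each PCB gets its own register dictionary and open-file list (via `default_factory`), just as each real process has its own saved context.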

Process Scheduling:
Process scheduling is the activity of the process manager that handles the removal of the
running process from the CPU and the selection of another process on the basis of a
particular strategy.
Process scheduling is an essential part of multiprogramming operating systems. Such
operating systems allow more than one process to be loaded into executable memory at a
time, and the loaded processes share the CPU using time multiplexing.

The OS maintains all PCBs in Process Scheduling Queues. The OS maintains a separate
queue for each of the process states and PCBs of all processes in the same execution state
are placed in the same queue. When the state of a process is changed, its PCB is unlinked
from its current queue and moved to its new state queue.
The Operating System maintains the following important process scheduling queues −
 Job queue − This queue keeps all the processes in the system.
 Ready queue − This queue keeps a set of all processes residing in main memory,
ready and waiting to execute. A new process is always put in this queue.
 Device queues − The processes which are blocked due to unavailability of an I/O
device constitute this queue.
The OS can use different policies to manage each queue (FIFO, round robin, priority, etc.).
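The queue movements described above, unlinking a PCB from one queue and relinking it to another as the process changes state, can be sketched with FIFO queues. This is a minimal illustration in which PCBs are represented by bare pids:

```python
from collections import deque

# One FIFO queue per state, as described above.
job_queue = deque()      # all processes in the system
ready_queue = deque()    # processes in memory, ready to execute
device_queue = deque()   # processes blocked on an I/O device

def admit(pid):
    """A new process enters the system and is placed in the ready queue."""
    job_queue.append(pid)
    ready_queue.append(pid)

def block_on_io(pid):
    """Unlink a process from the ready queue and link it to the device queue."""
    ready_queue.remove(pid)
    device_queue.append(pid)

def io_complete():
    """I/O finished: move the longest-waiting blocked process back to ready."""
    pid = device_queue.popleft()
    ready_queue.append(pid)
    return pid
```

The job queue keeps every process for its whole lifetime, while the ready and device queues only hold processes currently in those states.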

Schedulers :
A process migrates among the various scheduling queues throughout its lifetime. The
operating system must select, for scheduling purposes, processes from these queues in
some fashion. The selection process is carried out by the appropriate scheduler.

The scheduler's task is to select the jobs to be submitted into the system and to decide which
process to run.

Long Term Scheduler :


It is also called a job scheduler. A long-term scheduler determines which programs are
admitted to the system for processing. It selects processes from the job queue and loads them
into memory for execution, where they become available for CPU scheduling.
The primary objective of the job scheduler is to provide a balanced mix of jobs, such as
I/O bound and processor bound. It also controls the degree of multiprogramming. If the
degree of multiprogramming is stable, then the average rate of process creation must be
equal to the average departure rate of processes leaving the system.

Short Term Scheduler :


It is also called the CPU scheduler. Its main objective is to increase system performance in
accordance with the chosen set of criteria. It carries out the transition of a process from the
ready state to the running state: the CPU scheduler selects a process from among the
processes that are ready to execute and allocates the CPU to it.
Short-term schedulers, also known as dispatchers, make the decision of which process to
execute next. Short-term schedulers are faster than long-term schedulers.

Medium Term Scheduler :


Medium-term scheduling is a part of swapping. It removes processes from memory and so
reduces the degree of multiprogramming. The medium-term scheduler is in charge of
handling swapped-out processes.
A running process may become suspended if it makes an I/O request. A suspended
process cannot make any progress towards completion. In this condition, to remove the
process from memory and make space for other processes, the suspended process is moved
to secondary storage. This procedure is called swapping, and the process is said to be
swapped out or rolled out. Swapping may be necessary to improve the process mix.

Sr.No. | Long-Term Scheduler | Short-Term Scheduler | Medium-Term Scheduler
1 | It is a job scheduler. | It is a CPU scheduler. | It is a process swapping scheduler.
2 | Speed is lesser than the short-term scheduler. | Speed is fastest among the three. | Speed is in between the short-term and long-term schedulers.
3 | It controls the degree of multiprogramming. | It provides lesser control over the degree of multiprogramming. | It reduces the degree of multiprogramming.
4 | It is almost absent or minimal in time-sharing systems. | It is also minimal in time-sharing systems. | It is a part of time-sharing systems.
5 | It selects processes from the pool and loads them into memory for execution. | It selects those processes which are ready to execute. | It can re-introduce the process into memory, and execution can be continued.
Context Switch :
Switching the CPU to another process requires performing a state save of the current
process and a state restore of a different process. This task is known as a context switch.
When a context switch occurs, the kernel saves the context of the old process in its PCB and
loads the saved context of the new process scheduled to run.
Using this technique, a context switcher enables multiple processes to share a single CPU.
Context switching is an essential feature of a multitasking operating system.
When the scheduler switches the CPU from executing one process to executing another, the
state of the currently running process is stored into its process control block. After this, the
state of the process to run next is loaded from its own PCB and used to set the PC, registers,
etc. At that point, the second process can start executing.

Context switches are computationally intensive since register and memory state must be
saved and restored.

Context-switch time is pure overhead, because the system does no useful work while
switching. Context-switching speed varies from machine to machine, depending on the
memory speed, the number of registers that must be copied, and the existence of special
instructions (such as a single instruction to load or store all registers). Typical speeds are a
few milliseconds.
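The save/restore sequence above can be sketched as follows. Real context switches happen in kernel assembly; here a dictionary stands in for the CPU's register file, and the PCBs are plain dictionaries, purely to illustrate the idea:

```python
def context_switch(cpu, old_pcb, new_pcb):
    """Save the CPU context into old_pcb, then restore new_pcb's context.

    `cpu` is a dict standing in for the real register file and program
    counter; this is an illustration of the mechanism, not kernel code.
    """
    # State save: copy the running process's registers and PC into its PCB.
    old_pcb["registers"] = dict(cpu["registers"])
    old_pcb["program_counter"] = cpu["pc"]
    old_pcb["state"] = "ready"

    # State restore: load the scheduled process's saved context onto the CPU.
    cpu["registers"] = dict(new_pcb["registers"])
    cpu["pc"] = new_pcb["program_counter"]
    new_pcb["state"] = "running"
```

During the body of this function the CPU does no useful work for either process, which is why context-switch time is pure overhead.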
Interprocess Communication (IPC) :
Processes executing concurrently in the operating system might be either independent
processes or cooperating processes. A process is independent if it cannot affect or be affected
by the other processes executing in the system; otherwise, it is a cooperating process.

Cooperating processes require an interprocess communication (IPC) mechanism that allows
them to exchange data and information.

Basics of Interprocess Communication (IPC) :

There are numerous reasons for providing an environment or situation which allows process
co-operation:

 Information sharing: Since some users may be interested in the same piece of information
(for example, a shared file), you must provide a situation for allowing concurrent access to
that information.
 Computation speedup: If you want a particular task to run faster, you must break it into
sub-tasks, each of which will execute in parallel with the other tasks. Note that such a
speed-up can be attained only when the computer has multiple processing elements, such
as CPUs or I/O channels.
 Modularity: You may want to build the system in a modular way by dividing the system
functions into split processes or threads.
 Convenience: Even a single user may work on many tasks at a time. For example, a user
may be editing, formatting, printing, and compiling in parallel.

Shared memory :
In the shared-memory model, a region of memory that is shared by cooperating processes is
established. Processes can then exchange information by reading and writing data to the
shared region.

One process will create an area in RAM which other processes can access

Normally the OS prevents processes from accessing the memory of another process, but the
shared-memory features in the OS can allow data to be shared. Since both processes can access
the shared memory area like regular working memory, this is a very fast way of communicating.
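One concrete way to try this is Python's `multiprocessing.shared_memory` module (Python 3.8+). The sketch below creates a named region, writes into it, and attaches to it a second time by name, exactly the handshake a second process would perform; the size and payload are arbitrary choices for the example:

```python
from multiprocessing import shared_memory

# "Process A" creates a named shared region in RAM.
producer = shared_memory.SharedMemory(create=True, size=16)
producer.buf[:5] = b"hello"          # write directly into the shared buffer

# "Process B" would attach to the same region by its name; attaching from
# the same process works identically and keeps this sketch self-contained.
consumer = shared_memory.SharedMemory(name=producer.name)
message = bytes(consumer.buf[:5])    # read what the producer wrote

consumer.close()
producer.close()
producer.unlink()                    # free the region once no one needs it
```

Both handles see the same bytes with no copying through the kernel, which is why shared memory is the fastest IPC mechanism once set up.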
Message Passing :
In the message-passing form, communication takes place by way of messages exchanged
among the cooperating processes.

1. Establish a communication link.
2. Start exchanging messages using basic primitives.
We need at least two primitives:
– send(message, destination) or send(message)
– receive(message, host) or receive(message)

The message size can be fixed or variable. If it is fixed, the OS designer's job is easy but the
programmer's is complicated; if it is variable, the programmer's job is easy but the OS
designer's is complicated.

A standard message can have two parts: a header and a body.

The header is used for storing the message type, destination id, source id, message length,
and control information. The control information covers things like what to do if the receiver
runs out of buffer space, the sequence number, and the priority.
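The send/receive primitives and the header/body split can be sketched with a thread-safe queue acting as the communication link. The header fields below mirror the list above; using threads rather than real processes is a simplification for the example:

```python
import queue
import threading

mailbox = queue.Queue()   # the communication link between sender and receiver

def send(message, destination):
    """Illustrative send primitive: a header (type, destination, length) plus a body."""
    header = {"type": "data", "destination": destination, "length": len(message)}
    mailbox.put((header, message))

def receive():
    """Illustrative blocking receive primitive, returning (header, body)."""
    return mailbox.get()

# A sender thread and the main (receiver) thread exchange one message.
sender = threading.Thread(target=send, args=("ping", "proc-2"))
sender.start()
sender.join()
header, body = receive()
```

Because `Queue.get` blocks until a message arrives, the receiver naturally waits for the sender, the same rendezvous behaviour a blocking `receive` primitive provides.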
What is Thread?
A thread is a flow of execution through the process code, with its own program counter that
keeps track of which instruction to execute next, system registers which hold its current
working variables, and a stack which contains the execution history.

OR

Thread is an execution unit which consists of its own program counter, a stack, and a set of
registers.

A thread is the basic unit to which the operating system allocates processor time.

A thread shares with its peer threads information such as the code segment, data segment, and
open files. When one thread alters a code-segment memory item, all other threads see that change.

A thread is also called a lightweight process. Threads provide a way to improve application
performance through parallelism. Threads represent a software approach to improving
operating-system performance by reducing the overhead; a thread is roughly equivalent to a
classical process.

Each thread belongs to exactly one process, and no thread can exist outside a process. Each
thread represents a separate flow of control. Threads have been successfully used in
implementing network servers and web servers. They also provide a suitable foundation for
parallel execution of applications on shared-memory multiprocessors. The following figure
shows the working of a single-threaded and a multithreaded process.
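The shared data segment described above can be demonstrated directly with Python's `threading` module: every thread sees the same global variable, so concurrent updates must be protected by a lock. This is a minimal sketch, not a performance recipe:

```python
import threading

# All threads in a process share the data segment: `counter` is visible
# to every thread, so the read-modify-write must be protected by a lock.
counter = 0
lock = threading.Lock()

def work(iterations):
    global counter
    for _ in range(iterations):
        with lock:            # serialize access to the shared variable
            counter += 1

# Four peer threads of the same process update the one shared counter.
threads = [threading.Thread(target=work, args=(1000,)) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
```

After all four threads finish, `counter` is 4000: each thread observed and modified the same memory, which is exactly what distinguishes threads from separate processes.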
Difference between Process and Thread
S.N. | Process | Thread
1 | Process is heavyweight or resource intensive. | Thread is lightweight, taking fewer resources than a process.
2 | Process switching needs interaction with the operating system. | Thread switching does not need to interact with the operating system.
3 | In multiple processing environments, each process executes the same code but has its own memory and file resources. | All threads can share the same set of open files and child processes.
4 | If one process is blocked, then no other process can execute until the first process is unblocked. | While one thread is blocked and waiting, a second thread in the same task can run.
5 | Multiple processes without using threads use more resources. | Multiple threaded processes use fewer resources.
6 | In multiple processes, each process operates independently of the others. | One thread can read, write, or change another thread's data.

Advantages of Thread
 Responsiveness: Threads minimize the context-switching time.
 Resource sharing: Use of threads provides concurrency within a process.
 Efficient communication.
 It is more economical to create and context-switch threads.
 Threads allow utilization of multiprocessor architectures to a greater scale and
efficiency.

Types of Thread
Threads are implemented in following two ways −

 User Level Threads − User managed threads.


 Kernel Level Threads − Threads managed by the operating system, acting on the
kernel, the operating system core.
User Level Threads
In this case, the thread management kernel is not aware of the existence of threads. The
thread library contains code for creating and destroying threads, for passing message and data
between threads, for scheduling thread execution and for saving and restoring thread
contexts. The application starts with a single thread.

Advantages

 Thread switching does not require Kernel mode privileges.


 User level thread can run on any operating system.
 Scheduling can be application specific in the user level thread.
 User level threads are fast to create and manage.

Disadvantages

 In a typical operating system, most system calls are blocking, so when one user-level
thread makes a blocking call, the entire process is blocked.
 A multithreaded application cannot take advantage of multiprocessing.
Kernel Level Threads
 In this case, thread management is done by the Kernel. There is no thread
management code in the application area. Kernel threads are supported directly by the
operating system. Any application can be programmed to be multithreaded. All of the
threads within an application are supported within a single process.
 The Kernel maintains context information for the process as a whole and for
individual threads within the process. Scheduling by the Kernel is done on a thread
basis. The Kernel performs thread creation, scheduling, and management in Kernel
space. Kernel threads are generally slower to create and manage than user threads.

Advantages

 The Kernel can simultaneously schedule multiple threads from the same process on
multiple processors.
 If one thread in a process is blocked, the Kernel can schedule another thread of the
same process.
 Kernel routines themselves can be multithreaded.

Disadvantages

 Kernel threads are generally slower to create and manage than the user threads.
 Transfer of control from one thread to another within the same process requires a
mode switch to the Kernel.

Multithreading Models
Some operating systems provide a combined user-level thread and Kernel-level thread facility.
Solaris is a good example of this combined approach. In a combined system, multiple threads
within the same application can run in parallel on multiple processors, and a blocking system
call need not block the entire process. There are three multithreading models:

 Many to many relationship.


 Many to one relationship.
 One to one relationship.

Many to Many Model


The many-to-many model multiplexes any number of user threads onto an equal or smaller
number of kernel threads.

The following diagram shows the many-to-many threading model, where 6 user-level threads
are multiplexed onto 6 kernel-level threads. In this model, developers can create as many
user threads as necessary, and the corresponding Kernel threads can run in parallel on a
multiprocessor machine. This model provides the best level of concurrency, and when a
thread performs a blocking system call, the kernel can schedule another thread for execution.
Many to One Model
The many-to-one model maps many user-level threads to one Kernel-level thread. Thread
management is done in user space by the thread library. When a thread makes a blocking
system call, the entire process is blocked. Only one thread can access the Kernel at a
time, so multiple threads are unable to run in parallel on multiprocessors.

If user-level thread libraries are implemented on an operating system whose kernel does not
support threads, the system uses the many-to-one model.
One to One Model
There is a one-to-one relationship between user-level threads and kernel-level threads. This
model provides more concurrency than the many-to-one model. It also allows another thread
to run when a thread makes a blocking system call, and it allows multiple threads to execute
in parallel on multiprocessors.

The disadvantage of this model is that creating a user thread requires creating the
corresponding Kernel thread. OS/2, Windows NT, and Windows 2000 use the one-to-one model.
Difference between User-Level & Kernel-Level Thread
S.N. | User-Level Threads | Kernel-Level Threads
1 | User-level threads are faster to create and manage. | Kernel-level threads are slower to create and manage.
2 | Implementation is by a thread library at the user level. | The operating system supports creation of Kernel threads.
3 | A user-level thread is generic and can run on any operating system. | A kernel-level thread is specific to the operating system.
4 | Multi-threaded applications cannot take advantage of multiprocessing. | Kernel routines themselves can be multithreaded.
