OSY Chapter 03


CHAPTER NO 03: PROCESS MANAGEMENT

Process
 A running instance of a program is called a process.
 A process is defined as "an entity which represents the basic unit of work to
be implemented in the system" OR "a program under execution, which
competes for CPU time and other resources".

Process States:
 In a multiprogramming system, many processes are executed by the OS, but
at any instant of time only one process executes on the CPU; the other
processes wait for their turn.
 The current activity of a process is known as its state. As the process
executes, it changes state.
 The following figure shows the process state diagram. It represents the
different states in which a process can be at different times, along with the
transitions from one state to another that are possible in an OS.

 New state: A process that has just been created but has not yet been admitted
to the pool of executable processes by the OS. A newly created process is
also known as a new-born process.
 Ready state: When a process is ready to execute but is waiting for the
CPU, it is said to be in the ready state. The processes which are ready for
execution and reside in main memory are called ready-state processes.
 Running state: The process that is currently being executed is in the
running state. One of the processes from the ready state is chosen by
the OS depending upon the scheduling algorithm. Hence, if we have only one
CPU in our system, the number of running processes at any time will always
be one. If we have n processors in the system, then we can have n
processes running simultaneously.
 Waiting or Blocked: When a process waits for a certain resource to be
assigned or for input from the user, the OS moves the process to the
blocked or wait state and assigns the CPU to other processes.
 Terminated state: The OS moves a process from the running state to the
terminated state if the process finishes execution or if it aborts. Whenever
the execution of a process completes in the running state, it exits to the
terminated state, which marks the completion of the process.
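The transitions described above can be sketched as a small lookup table. A minimal, purely illustrative model (the state names follow the list above; the `move` helper is hypothetical, not any real OS API):

```python
# Toy model of the legal process-state transitions described above.
TRANSITIONS = {
    "new": {"ready"},                               # admitted by the OS
    "ready": {"running"},                           # dispatched by the scheduler
    "running": {"ready", "waiting", "terminated"},  # preempt / block / exit
    "waiting": {"ready"},                           # I/O or event completes
    "terminated": set(),                            # no transitions out
}

def move(state, new_state):
    """Return new_state if the transition is legal, else raise ValueError."""
    if new_state not in TRANSITIONS[state]:
        raise ValueError(f"illegal transition: {state} -> {new_state}")
    return new_state
```

For example, `move("new", "ready")` succeeds, while `move("waiting", "running")` raises, because a blocked process must first return to the ready queue.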

Process Control Block:


 All processes are represented in the operating system by a task control block
or a process control block.
 A PCB stores descriptive information related to a process, including its
state, program counter, memory-management information, allocated
resources, accounting information, etc., that is required to control and
manage a particular process.

1. Process state: The state may be new, ready, running, waiting, halted, and
so on.

2. Program counter: The counter indicates the address of the next instruction
to be executed for this process.

3. CPU registers: The registers vary in number and category, depending upon
the computer architecture. They include accumulators, stack pointers, index
registers and general-purpose registers, plus some condition-code
information.

4. CPU-scheduling information: This information contains the process priority,
pointers to scheduling queues and other scheduling parameters.

5. Memory-management information: This may include such information as the
values of the base and limit registers and the page tables, or else the
segment tables, depending upon the memory system used by the OS.

6. Accounting information: This includes the amount of CPU and real time
used, time limits, job or process numbers, account numbers and so on.

7. I/O status information: This includes the list of I/O devices allocated
to the process, a list of open files, etc.
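The fields listed above can be pictured as a single record per process. A hedged sketch (the field names are illustrative and do not match any real kernel's PCB layout):

```python
from dataclasses import dataclass, field

@dataclass
class PCB:
    """Illustrative process control block holding the fields listed above."""
    pid: int
    state: str = "new"            # 1. process state
    program_counter: int = 0      # 2. address of the next instruction
    registers: dict = field(default_factory=dict)   # 3. saved CPU registers
    priority: int = 0             # 4. CPU-scheduling information
    base: int = 0                 # 5. memory-management: base register
    limit: int = 0                # 5. memory-management: limit register
    cpu_time_used: float = 0.0    # 6. accounting information
    open_files: list = field(default_factory=list)  # 7. I/O status information

pcb = PCB(pid=7)
```

The OS keeps one such record per process and updates it on every state change and context switch.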
Operation on Processes:
Process creation:
Processes need to be created in the system for different operations. This can be
done by the following events:

 User request for process creation
 System initialization
 Execution of a process-creation system call by a running process
 Batch job initialization
A process may be created by another process using fork(). The creating process
is called the parent process and the created process is the child process. A child
process can have only one parent but a parent process may have many children.
Both the parent and child processes have the same memory image, open files,
and environment strings. However, they have distinct address spaces.
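The fork() call described above can be tried directly on a UNIX-like system. A minimal sketch using Python's os module (UNIX only):

```python
import os

pid = os.fork()          # duplicate the calling process
if pid == 0:
    # Child: fork() returned 0 here; the child has its own address space.
    child_msg = f"child pid={os.getpid()} parent={os.getppid()}"
    os._exit(0)          # child exits without running the parent's code below
else:
    # Parent: fork() returned the child's pid; wait for the child to finish.
    _, status = os.waitpid(pid, 0)
```

Immediately after fork(), parent and child have the same memory image, but a write by one is not seen by the other, illustrating the distinct address spaces mentioned above.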
 A process terminates when it finishes executing its final statement and
asks the operating system to delete it using the exit() system call.
 At this point the process may return output data (a status value) to its
parent process, which collects it via the wait() system call.
 All the resources of the process, including open files, physical and virtual
memory, and pending I/O operations, are deallocated and returned to the
operating system.
 Termination can also occur in additional circumstances: using a specific
system call, one process can cause the termination of another process.
 For this, the process invokes the appropriate system call: TerminateProcess()
in Windows and kill() in UNIX.
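The exit(), wait() and kill() calls above can be combined in one short sketch, in which a parent forcibly terminates a sleeping child (UNIX only):

```python
import os
import signal
import time

pid = os.fork()
if pid == 0:
    time.sleep(60)               # child: pretend to do long-running work
    os._exit(0)                  # normal termination via the exit system call
# Parent: terminate the child with a signal (the UNIX kill() mechanism).
os.kill(pid, signal.SIGTERM)
_, status = os.waitpid(pid, 0)   # wait(): collect the child's exit status
killed = os.WIFSIGNALED(status)  # True when the child died from a signal
```

Here wait() serves both purposes described above: it delivers the termination status to the parent and lets the OS reclaim the child's remaining resources.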

Process Scheduling Queues

The OS maintains all Process Control Blocks (PCBs) in process scheduling
queues. The OS maintains a separate queue for each of the process states, and
the PCBs of all processes in the same execution state are placed in the same
queue.
Job queue − This queue keeps all the processes in the system.
Ready queue − This queue keeps the set of all processes residing in main
memory, ready and waiting to execute. A new process is always put in this queue.
Device queue/I/O queue − The processes which are blocked due to the
unavailability of an I/O device constitute this queue.
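The ready queue above is essentially a FIFO of PCBs. A toy sketch (the dict-based PCB and pid values are purely illustrative):

```python
from collections import deque

# Toy PCBs placed on the ready queue in arrival order.
ready_queue = deque()
for p in (101, 102, 103):
    ready_queue.append({"pid": p, "state": "ready"})

# Dispatcher: take the process at the head of the queue and run it.
running = ready_queue.popleft()
running["state"] = "running"
```

When the running process blocks on I/O, its PCB would move to the device queue; when the I/O completes, it is appended back to the ready queue.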

Schedulers:

 Schedulers are special system software which handle process scheduling
in various ways. Their main task is to select the jobs to be submitted into
the system and to decide which process to run.
There are mainly three types of process schedulers:

1. Long Term Scheduler
2. Short Term Scheduler
3. Medium Term Scheduler
 Long Term Scheduler:
 The long-term scheduler determines which jobs are admitted to the system
for processing.
 The long-term scheduler selects jobs from the job pool and loads them into
memory for execution. It is also called the job scheduler or admission
scheduler.
 By deciding how many and which processes are admitted, the long-term
scheduler controls the degree of multiprogramming, based on the processes'
needs and priorities.
 It also manages the admission of processes that may take a long time to
complete, such as batch jobs or background tasks.
Medium Term Scheduler
 The medium-term scheduler, also referred to as a mid-term scheduler or
swapper, manages processes by temporarily removing (swapping out) partially
executed processes from main memory and later reintroducing (swapping in)
them to continue execution.
 Based on a set of predetermined criteria and priorities, it decides which
processes should be swapped out and which should be brought back in.
 Typically, processes that are blocked or waiting are managed by the
medium-term scheduler.
 It is in charge of controlling the system's overall resource utilization,
reducing the degree of multiprogramming when memory is scarce.
Short Term Scheduler
 The short-term scheduler selects a process from the ready queue in memory
and allocates the CPU to it. It is also called the CPU scheduler or process
scheduler.
 The short-term scheduler's major objective is to make sure that the CPU is
always utilized effectively and efficiently.
 The scheduler chooses a process from the ready queue when the CPU is free
and dispatches it. The process then continues to run until it either
completes its work or runs into an I/O activity that blocks it.
Context switching:

 Switching the CPU from one process to another requires saving the state of
the old process and loading the saved state of the new process. This task is
known as a context switch.
 Context switching enables all processes to share a single CPU to finish
their execution, storing the status of each task so it can be resumed later.
 One process does not directly switch to another within the system. Context
switching makes it easier for the operating system to use the CPU's
resources to carry out its tasks while storing the context of the process
being switched out.
 When a switch is performed, the system stores the old running process's
status (in the form of its registers) and assigns the CPU to the new process
to execute its tasks.
 While the new process is running, the previous process waits in the ready
queue. The execution of the old process later resumes at the point where it
was stopped.
Example of Context Switching

Suppose the OS maintains the PCBs of multiple processes. One process is in the
running state, executing its task on the CPU. While it runs, another process
arrives in the ready queue with a higher priority for completing its task on
the CPU. Context switching is then used to replace the current process with the
new process that requires the CPU. While switching, the context switch saves
the status of the old process, including its registers, in its PCB. When that
process is later reloaded onto the CPU, it resumes execution from the point at
which it was stopped. If we did not save the state of the process, we would
have to restart its execution from the beginning. In this way, context
switching helps the operating system switch between processes, storing and
reloading each process's state as it is required to execute its tasks.
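The save/restore step described above can be mimicked with plain dictionaries. This is a toy simulation of the idea, not real kernel code (the register names are made up):

```python
def context_switch(cpu, old_pcb, new_pcb):
    """Save the CPU state into old_pcb, then load new_pcb's saved state."""
    old_pcb["registers"] = dict(cpu)     # save the old process's context
    cpu.clear()
    cpu.update(new_pcb["registers"])     # restore the new process's context

cpu = {"pc": 40, "acc": 7}               # process A is currently on the CPU
pcb_a = {"pid": 1, "registers": {}}
pcb_b = {"pid": 2, "registers": {"pc": 300, "acc": 0}}

context_switch(cpu, pcb_a, pcb_b)        # A is switched out, B switched in
```

A later `context_switch(cpu, pcb_b, pcb_a)` would put A back on the CPU exactly where it left off, which is the whole point of saving the context.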

Inter-process Communication(IPC)
 Processes executing concurrently in the operating system may be either
independent processes or cooperating processes.
 Inter-process communication (IPC) is a set of programming interfaces that
allow a programmer to coordinate activities among different program
processes that can run concurrently in an operating system.
 Since even a single user request may result in multiple processes running in
the operating system on the user's behalf, the processes need to
communicate with each other. IPC makes this possible.
 IPC is a set of techniques for the exchange of data among multiple
processes.
 IPC enables one application to control another application, and allows
several applications to share the same data without interfering with one
another.
 Purpose of IPC: data transfer, sharing data, event notification, resource
sharing, synchronization and process control.

Two fundamental models allow inter-process communication:

1. Shared Memory Model: Two processes exchange information through a shared
region. They can read and write data from and to this region.
2. Message Passing Model: In the message passing model, the data or information
is exchanged in the form of messages.

Shared memory Model


 IPC using shared memory requires a region of memory shared among the
communicating processes. Processes can then exchange information by reading
and writing data in the shared memory.
 A shared-memory region resides in the address space of the process creating
the shared-memory segment. Other processes that wish to communicate using
this segment must attach it to their address space.
 There are two processes: Producer and Consumer. The Producer produces some
item and the Consumer consumes that item.
 The two processes share a common space or memory location known as a
buffer, where the item produced by the Producer is stored and from which
the Consumer consumes the item if needed.
 Two types of buffer can be used:
 1. Unbounded Buffer: the Producer can keep on producing items and there is
no limit on the size of the buffer.
 2. Bounded Buffer: the Producer can produce up to a certain number of items
before it starts waiting for the Consumer to consume them.
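The producer/consumer exchange above can be sketched with an anonymous shared mapping inherited across fork() (UNIX only). This is a simplified sketch, not a full bounded buffer: synchronization is reduced to the parent simply waiting for the child to finish:

```python
import mmap
import os

BUF_SIZE = 4
# Anonymous shared mapping: after fork(), writes by either process
# are visible to the other, unlike ordinary (copy-on-write) memory.
buf = mmap.mmap(-1, BUF_SIZE)

pid = os.fork()
if pid == 0:
    # Producer (child): write items into the shared buffer.
    for i in range(BUF_SIZE):
        buf[i] = i * 3
    os._exit(0)

# Consumer (parent): wait for the producer, then read the items.
os.waitpid(pid, 0)
items = [buf[i] for i in range(BUF_SIZE)]
```

A real bounded buffer would replace the waitpid() with semaphores so that producer and consumer can run concurrently without racing on the buffer.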

Message Passing Model

 Message passing provides a mechanism to allow processes to communicate
and to synchronize their actions without sharing the same address space.
 It is used in distributed environments where the communicating processes
are present on remote machines connected with the help of a network.
 An IPC facility provides two operations: send(message) and receive(message).
 If processes A and B want to communicate, they must send messages to and
receive messages from each other; a communication link must be established
between them.
 Direct Communication: Each process that wants to communicate must
explicitly name the recipient or sender of the communication.
Send(A, message): Send a message to process A
Receive(B, message): Receive a message from process B
 Indirect Communication is done via a shared mailbox (port), which
consists of a queue of messages. The sender keeps messages in the mailbox
and the receiver picks them up.
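The send/receive pair above maps naturally onto a UNIX pipe, which acts as a one-way communication link between parent and child. A minimal sketch (UNIX only):

```python
import os

r, w = os.pipe()             # establish the communication link
pid = os.fork()
if pid == 0:
    os.close(r)              # child only sends
    os.write(w, b"hello from child")   # send(message)
    os.close(w)
    os._exit(0)
os.close(w)                  # parent only receives
msg = os.read(r, 1024)       # receive(message)
os.close(r)
os.waitpid(pid, 0)
```

Note that no memory is shared here: the kernel copies the message from the sender's address space into the receiver's, which is the defining property of the message passing model.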

Critical Section Problem:


 The critical section refers to the segment of code where processes access
shared resources, such as common variables and files, and perform write
operations on them.
 The critical-section problem is to design a protocol followed by a group of
processes, so that when one process has entered its critical section, no
other process is allowed to execute in its critical section.
 Since processes execute concurrently, any process can be interrupted
mid-execution. In the case of shared resources, partial execution of
processes can lead to data inconsistencies.
 When two processes access and manipulate a shared resource concurrently,
and the resulting execution outcome depends on the order in which the
processes access the resource, this is called a race condition.
 Race conditions lead to inconsistent states of data. Therefore, we need a
synchronization protocol that allows processes to cooperate while
manipulating shared resources; designing such a protocol is essentially the
critical-section problem.

Solutions to the critical section problem

 Mutual exclusion: When one process is executing in its critical section, no
other process is allowed to execute in its critical section.
 Progress: When no process is executing in its critical section, and there
exists a process that wishes to enter its critical section, it should not
have to wait indefinitely to enter it.
 Bounded waiting: There must be a bound on the number of times a process is
allowed to enter its critical section after another process has requested
to enter its critical section and before that request is granted.
 Semaphores: More sophisticated methods that use the wait() and signal()
operations, which execute atomically on a semaphore.
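The requirements above can be demonstrated with a mutex lock: four threads increment a shared counter, and the lock plays the role of the wait()/signal() pair of a binary semaphore. A minimal sketch:

```python
import threading

counter = 0
lock = threading.Lock()
N = 100_000

def worker():
    global counter
    for _ in range(N):
        with lock:           # entry section: wait() / acquire
            counter += 1     # critical section: update the shared resource
        # exit section: signal() / release happens as the block ends

threads = [threading.Thread(target=worker) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
# counter is exactly 4 * N: mutual exclusion prevented any lost updates
```

Without the lock, two threads could read the same value of counter and both write back value+1, losing one increment, which is precisely the race condition described above.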

THREADS

 A thread, sometimes called a lightweight process (LWP), is a basic unit of
CPU utilization.
 A thread is a single sequential flow of execution of the tasks of a process,
so it is also known as a thread of execution or a thread of control.
 If a process has multiple threads, it can do more than one task at a time;
such a process is known as a multithreaded process.
 A process can be split into many threads. For example, in a browser, each
tab can be viewed as a thread. MS Word uses many threads: formatting text in
one thread, processing input in another thread, etc.
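The browser-tab analogy above can be tried directly: each "tab" below runs as a thread of the same process, so all threads share the results dictionary (the tab names and the 10 ms "page load" are made up for illustration):

```python
import threading
import time

results = {}

def load_tab(name):
    # Each "tab" is a separate thread within the same process, so every
    # thread sees and updates the same shared results dictionary.
    time.sleep(0.01)         # pretend to fetch and render the page
    results[name] = "loaded"

tabs = ["news", "mail", "video"]
threads = [threading.Thread(target=load_tab, args=(t,)) for t in tabs]
for t in threads:
    t.start()
for t in threads:
    t.join()
```

Because the sleeps overlap, the three "tabs" finish in roughly the time of one, rather than three sequential loads.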
Benefits of Thread:

 Responsiveness: Multithreading an interactive application allows a program
to continue running even if part of it is blocked or performing a lengthy
operation, thereby increasing responsiveness to the user.
 Resource sharing: By default, threads share the memory and the resources of
the process to which they belong. The benefit of this sharing is that it
allows an application to have several different threads of activity all
within the same address space.
 Faster execution: When tasks can be divided into independent parts that can
be executed concurrently, multithreading can lead to faster execution
times.
 Communication: Threads within the same process can easily communicate with
each other, making it easier to develop complex applications that require
coordination among multiple tasks.

User Threads:
 The threads implemented at the user level are known as user threads. In
user-level threading, thread management is done by the application; the
kernel is not aware of the existence of threads.
 User threads are supported above the kernel and are implemented by a thread
library at the user level.
 User-level threads are generally fast to create and manage as no
intervention of the kernel is involved.
 Advantages:
 1. User-level threads can run on any operating system.
 2. They do not require modifications to the OS.
 3. A user-level thread library is easy to port.
 4. User-level threads are fast to create and manage.

Disadvantages:
 Multithreaded applications using user-level threads cannot take advantage
of multiprocessing.
 The entire process is blocked if one user-level thread performs a blocking
operation.

Kernel Threads:
 The threads implemented at the kernel level are known as kernel threads.
 Kernel threads are generally slower to create and manage because thread
management is done by the OS.
Advantages:
 Multiple threads of the same process can be scheduled on different
processors in kernel-level threads.
 If a kernel-level thread is blocked, another thread of the same process can be
scheduled by the kernel.
Disadvantages:
 A mode switch to kernel mode is required to transfer control from one
thread to another in a process.
 Kernel-level threads are slower to create as well as manage as compared to
user-level threads.
Multithreading:
 A multithreaded program contains two or more parts that can run
concurrently.
 Each part of such a program is called a thread, and each thread defines a
separate path of execution.
 Multithreading is the process of executing multiple threads simultaneously.

Multithreading Models:
One-to-One Model:

 The one-to-one model maps each user thread to a kernel thread.
 This type of relationship facilitates the running of multiple threads in
parallel.
 The creation of every new user thread requires creating a corresponding
kernel thread, causing overhead which can hinder the performance of the
parent process.
Advantages:
 More concurrency due to parallel processing
 Less complication in the processing
Disadvantages:
 Every user thread requires the creation of a kernel thread
 Creating many kernel threads reduces the performance of the system

Many-to-One Model:

 In this model, multiple user threads are mapped to one single kernel thread.
 In this model, when a user thread makes a blocking system call, the entire
process blocks.
 Because only one thread can access the kernel at a time, multiple threads
are unable to run in parallel on multiprocessors.
Advantages:
 Totally portable
 One kernel thread controls multiple user threads
Disadvantages:
 Cannot perform parallel processing
 One blocking call blocks all user threads
Many-to-Many Model:

 In this type of model, there are several user-level threads and several
kernel-level threads.
 The number of kernel threads may be specific to either a particular
application or a particular machine.
Advantages:
 As many threads can be created as per the user's requirement
 The number of kernel threads can be less than or equal to the number of
user threads
Disadvantages:
 Overhead for the operating system
 Performance and manageability are reduced
