
Processes

Process Concept
 A process is a program in execution. A process is more than the
program code, which is sometimes known as the text section. It
also includes the current activity, as represented by the value of
the program counter and the contents of the processor's
registers.
 A process generally also includes the process stack, which
contains temporary data (such as function parameters, return
addresses, and local variables),
 and a data section, which contains global variables.
 A process may also include a heap, which is memory that is
dynamically allocated during process run time.
 A program is a passive entity, such as a file containing a list of
instructions stored on disk, whereas a process is an active
entity, with a program counter specifying the next instruction to
execute and a set of associated resources. A program becomes
a process when an executable file is loaded into memory.
Process in Memory
Process State
 As a process executes, it changes state. The state of a
process is defined in part by the current activity of that
process. Each process may be in one of the following states:
 new: The process is being created
 running: Instructions are being executed
 waiting: The process is waiting for some event to occur
 ready: The process is waiting to be assigned to a
processor
 terminated: The process has finished execution
Diagram of Process State
Process Control Block (PCB)
 Each process is represented in the operating system
by a process control block (PCB)—also called a
task control block. It contains many pieces of
information associated with a specific process:
Information associated with each process
 Process state
 Program counter
 CPU registers
 CPU scheduling information
 Memory-management information
 Accounting information
 I/O status information
Process Control Block (PCB)

 Process State - e.g. new, ready, running, etc.

 Program Counter - address of the next instruction to be
executed.

 CPU registers - The registers vary in number and type,
depending on the computer architecture. They include
accumulators, index registers, stack pointers, and
general-purpose registers, plus any condition-code
information. Along with the program counter, this state
information must be saved when an interrupt occurs, to
allow the process to be continued correctly afterward.

 CPU scheduling information - This information includes
a process priority, pointers to scheduling queues, and
any other scheduling parameters.
Process Control Block (PCB)

 Memory-management information - This information
may include the value of the base and limit registers,
the page tables, or the segment tables, depending on
the memory system used by the operating system.

 Accounting information - This information includes the
amount of CPU time used, time limits, account
numbers, job or process numbers, and so on.

 I/O status information - This information includes the list
of I/O devices allocated to the process, a list of open
files, and so on.
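The fields above can be sketched as a C structure. This is an illustrative sketch only, with invented names (`pcb`, `pcb_init`, field sizes), not the layout used by any real operating system:

```c
#include <assert.h>
#include <stdint.h>

/* Illustrative sketch only: a simplified PCB with one field per
 * category listed above. All names are invented; a real kernel
 * structure (e.g. Linux's task_struct) holds far more state. */
enum proc_state { NEW, READY, RUNNING, WAITING, TERMINATED };

struct pcb {
    enum proc_state state;        /* process state                 */
    uint64_t        pc;           /* saved program counter         */
    uint64_t        regs[16];     /* saved CPU registers           */
    int             priority;     /* CPU-scheduling information    */
    uint64_t        base, limit;  /* memory-management information */
    uint64_t        cpu_time_us;  /* accounting: CPU time used     */
    int             open_fds[8];  /* I/O status: open file handles */
};

/* Initialize a PCB for a newly created process. */
void pcb_init(struct pcb *p) {
    *p = (struct pcb){0};
    p->state = NEW;  /* a fresh process starts in the new state */
}
```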
CPU Switch From Process to
Process
Process Scheduling Queues

The objective of multiprogramming is to have some process
running at all times, to maximize CPU utilization. The objective
of time sharing is to switch the CPU among processes so
frequently that users can interact with each program while it
is running. To meet these objectives, the process scheduler
selects an available process (possibly from a set of several
available processes) for program execution on the CPU.

Job queue – As processes enter the system, they are put into
a job queue, which consists of all processes in the system.
Ready queue – set of all processes residing in main memory,
ready and waiting to execute. This queue is generally stored
as a linked list. A ready-queue header contains pointers to the
first and final PCBs in the list. Each PCB includes a pointer
field that points to the next PCB in the ready queue.
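A minimal sketch of such a ready queue, with the PCB reduced to a pid plus the next-pointer field described above (all names invented for illustration):

```c
#include <assert.h>
#include <stddef.h>

/* Sketch of the ready queue as a linked list of PCBs with head and
 * tail (first/final) pointers, as described in the text. */
struct pcb {
    int pid;
    struct pcb *next;  /* pointer to the next PCB in the queue */
};

struct ready_queue {
    struct pcb *head, *tail;
};

/* Append a PCB at the tail of the ready queue. */
void rq_enqueue(struct ready_queue *q, struct pcb *p) {
    p->next = NULL;
    if (q->tail) q->tail->next = p;
    else         q->head = p;
    q->tail = p;
}

/* Remove and return the PCB at the head (NULL if empty). */
struct pcb *rq_dequeue(struct ready_queue *q) {
    struct pcb *p = q->head;
    if (p) {
        q->head = p->next;
        if (!q->head) q->tail = NULL;
    }
    return p;
}

/* Demo: enqueue pids 1 and 2, dequeue twice, return the second pid. */
int rq_demo(void) {
    struct ready_queue q = { NULL, NULL };
    struct pcb a = { 1, NULL }, b = { 2, NULL };
    rq_enqueue(&q, &a);
    rq_enqueue(&q, &b);
    rq_dequeue(&q);                       /* removes pid 1 */
    struct pcb *second = rq_dequeue(&q);
    return second ? second->pid : -1;
}
```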
Process Scheduling Queues

 Device queues – set of processes waiting for an I/O
device. Suppose the process makes an I/O request to a
shared device, such as a disk. Since there are many
processes in the system, the disk may be busy with the
I/O request of some other process. The process
therefore may have to wait for the disk. The list of
processes waiting for a particular I/O device is called a
device queue. Each device has its own device queue.
Ready Queue And Various I/O Device
Queues
Process Scheduling

A new process is initially put in the ready queue. It waits
there until it is selected for execution, or is dispatched. Once
the process is allocated the CPU and is executing, one of
several events could occur:
The process could issue an I/O request and then be placed in
an I/O queue.
 The process could create a new subprocess and wait for the
subprocess's termination.
The process could be removed forcibly from the CPU, as a
result of an interrupt, and be put back in the ready queue.
In the first two cases, the process eventually switches from
the waiting state to the ready state and is then put back in
the ready queue. A process continues this cycle until it
terminates, at which time it is removed from all queues and
has its PCB and resources deallocated.

Operating System Concepts - 7th Edition, Feb 7, 2006, Silberschatz, Galvin and Gagne
Representation of Process
Scheduling
Schedulers

A process migrates among the various scheduling
queues throughout its lifetime. The operating system
must select, for scheduling purposes, processes
from these queues in some fashion. The selection
process is carried out by the appropriate scheduler.
 Long-term scheduler (or job scheduler) – in a batch
system, more processes are submitted than can be
executed immediately. These processes are spooled
to a mass-storage device (typically a disk), where
they are kept for later execution. The long-term
scheduler, or job scheduler, selects processes from
this pool and loads them into memory for execution.
 Short-term scheduler (or CPU scheduler) – The short-
term scheduler, or CPU scheduler, selects from
among the processes that are ready to execute and
allocates the CPU to one of them.
Schedulers

 The primary distinction between these two
schedulers lies in frequency of execution. The short-
term scheduler must select a new process for the
CPU frequently. A process may execute for only a few
milliseconds before waiting for an I/O request. Often,
the short-term scheduler executes at least once
every 100 milliseconds. Because of the short time
between executions, the short-term scheduler must
be fast
 The long-term scheduler executes much less
frequently; minutes may separate the creation of one
new process and the next. Thus, the long-term
scheduler may need to be invoked only when a
process leaves the system. Because of the longer
interval between executions, the long-term scheduler
can afford to take more time to decide which process
should be selected for execution.
Schedulers

 It is important that the long-term scheduler make a careful
selection. In general, most processes can be described as
either I/O bound or CPU bound.
 An I/O-bound process is one that spends more of its time doing
I/O than it spends doing computations.
 A CPU-bound process, in contrast, generates I/O requests
infrequently, using more of its time doing computations.
 It is important that the long-term scheduler select a good
process mix of I/O-bound and CPU-bound processes.
 If all processes are I/O bound, the ready queue will almost
always be empty, and the short-term scheduler will have little
to do.
 If all processes are CPU bound, the I/O waiting queue will
almost always be empty, devices will go unused, and again the
system will be unbalanced.
 The system with the best performance will thus have a
combination of CPU-bound and I/O-bound processes.
Schedulers

 On some systems, the long-term scheduler may be absent or
minimal.
 For example, time-sharing systems such as UNIX and
Microsoft Windows systems often have no long-term
scheduler but simply put every new process in memory for the
short-term scheduler.
 Medium-term scheduler - The key idea behind a
medium-term scheduler is that sometimes it can be
advantageous to remove processes from memory. Later, the
process can be reintroduced into memory, and its execution
can be continued where it left off.
 This scheme is called swapping.
 The process is swapped out, and is later swapped in, by the
medium-term scheduler. Swapping may be necessary to
improve the process mix or because a change in memory
requirements has overcommitted available memory, requiring
memory to be freed up.
Addition of Medium Term
Scheduling
Context Switch
 When an interrupt occurs, the system needs to save the
current context of the process currently running on the
CPU so that it can restore that context when its
processing is done, essentially suspending the process
and then resuming it.
 The context is represented in the PCB of the process; it
includes the value of the CPU registers, the process
state, and memory-management information.
 Generically, we perform a state save of the current state
of the CPU, be it in kernel or user mode, and then a state
restore to resume operations.
 Switching the CPU to another process requires performing
a state save of the current process and a state restore of
a different process. This task is known as a context
switch.
 When a context switch occurs, the kernel saves the
context of the old process in its PCB and loads the saved
context of the new process scheduled to run.
Operations on Processes

The processes in most systems can execute
concurrently, and they may be created and deleted
dynamically. Thus, these systems must provide a
mechanism for process creation and termination.

Process Creation
A process may create several new processes, via a
create-process system call, during the course of
execution. The creating process is called a parent
process, and the new processes are called the children
of that process.
Each of these new processes may in turn create other
processes, forming a tree of processes.
Process Creation
 Most operating systems (including UNIX and the Windows
family of operating systems) identify processes according
to a unique process identifier (or pid), which is typically
an integer number.
 When a process creates a subprocess, that subprocess
may be able to obtain its resources directly from the
operating system, or it may be constrained to a subset of
the resources of the parent process.
 The parent may have to partition its resources among its
children, or it may be able to share some resources (such
as memory or files) among several of its children.
 When a process creates a new process, two possibilities
exist in terms of execution:
1. The parent continues to execute concurrently with its
children.
2. The parent waits until some or all of its children have
terminated.
Process Creation (Cont.)
 UNIX examples
 fork() system call creates a new process

 Process creation in Windows - processes are created
in the Win32 API using the CreateProcess() function,
which is similar to fork() in that a parent creates a
new child process.
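A minimal UNIX sketch of the second execution option above (the parent waits until its child has terminated); the child's exit code of 42 is arbitrary, chosen only for illustration:

```c
#include <assert.h>
#include <sys/types.h>
#include <sys/wait.h>
#include <unistd.h>

/* Minimal sketch: the parent forks a child and waits for it.
 * Returns the child's exit code, or -1 on failure. */
int fork_and_wait(void) {
    pid_t pid = fork();
    if (pid < 0)
        return -1;                /* fork failed               */
    if (pid == 0)
        _exit(42);                /* child: terminate at once  */
    int status;
    waitpid(pid, &status, 0);     /* parent: wait for the child */
    return WIFEXITED(status) ? WEXITSTATUS(status) : -1;
}
```

In a realistic program the child would typically call one of the exec() family after fork() to replace its memory image with a new program.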
Process Termination
 A process terminates when it finishes executing its
final statement and asks the operating system to
delete it by using the exit() system call.
 At that point, the process may return a status value
(typically an integer) to its parent process (via the
wait() system call). All the resources of the process—
including physical and virtual memory, open files, and
I/O buffers—are deallocated by the operating system.
 Termination can occur in other circumstances as
well. A process can cause the termination of another
process via an appropriate system call (for example,
TerminateProcess() in Win32). Usually, such a system
call can be invoked only by the parent of the process
that is to be terminated. Otherwise, users could
arbitrarily kill each other's jobs.
 A parent may terminate the execution of one of its
children for a variety of reasons, such as these:
 The child has exceeded its usage of some of the
resources that it has been allocated. (To determine
whether this has occurred, the parent must have a
mechanism to inspect the state of its children.)
 The task assigned to the child is no longer required.
 The parent is exiting, and the operating system does
not allow a child to continue if its parent terminates.

 To illustrate process execution and termination, consider
that, in UNIX, we can terminate a process by using the
exit() system call; its parent process may wait for the
termination of a child process by using the wait() system
call.
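As a sketch of one process terminating another, a UNIX parent can forcibly kill its child with the kill() system call (roughly the analogue of Win32's TerminateProcess()) and then observe via wait() that the child died from a signal rather than by calling exit():

```c
#include <assert.h>
#include <signal.h>
#include <sys/types.h>
#include <sys/wait.h>
#include <unistd.h>

/* Sketch: the parent creates a child that blocks forever, then
 * forcibly terminates it. Returns the signal number that killed
 * the child, or -1 if it exited normally or fork failed. */
int kill_child(void) {
    pid_t pid = fork();
    if (pid < 0)
        return -1;
    if (pid == 0) {
        pause();                  /* child: wait for a signal forever */
        _exit(0);
    }
    kill(pid, SIGKILL);           /* parent: forcibly terminate child */
    int status;
    waitpid(pid, &status, 0);
    return WIFSIGNALED(status) ? WTERMSIG(status) : -1;
}
```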
Cooperating Processes

Processes executing concurrently in the operating system
may be either independent processes or cooperating
processes.
Independent process cannot affect or be affected by the
execution of another process
Cooperating process can affect or be affected by the
execution of another process. Any process that shares data
with other processes is a cooperating process.
Advantages of process cooperation
 Information sharing - Since several users may be
interested in the same piece of information (for
instance, a shared file), we must provide an environment
to allow concurrent access to such information.
 Computation speed-up- If we want a particular task to
run faster, we must break it into subtasks, each of
which will be executing in parallel with the others.
Notice that such a speedup can be achieved only if the
computer has multiple processing elements.
Cooperating Processes
 Modularity- We may want to construct the system
in a modular fashion, dividing the system functions
into separate processes or threads.
 Convenience-Even an individual user may work on
many tasks at the same time. For instance, a user
may be editing, printing, and compiling in parallel.
Interprocess Communication (IPC)

Cooperating processes require an interprocess
communication (IPC) mechanism that will allow them to
exchange data and information. There are two fundamental
models of interprocess communication:

 Shared memory
 Message passing

In the shared-memory model, a region of memory that is
shared by cooperating processes is established. Processes
can then exchange information by reading and writing data
to the shared region.
In the message passing model, communication takes place
by means of messages exchanged between the cooperating
processes.
Communications Models
Shared-Memory Systems
 Interprocess communication using shared memory
requires communicating processes to establish a region
of shared memory.
 A shared-memory region resides in the address space of
the process creating the shared-memory segment.
 Other processes that wish to communicate using this
shared-memory segment must attach it to their address
space.
 Normally, the operating system tries to prevent one
process from accessing another process's memory.
Shared memory requires that two or more processes
agree to remove this restriction. They can then
exchange information by reading and writing data in the
shared areas.
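A minimal sketch of the shared-memory model between related processes: the parent maps an anonymous shared region with mmap(), forks, and reads back a value the child wrote. (Unrelated processes would instead attach a named segment, e.g. via POSIX shm_open(); the anonymous mapping merely keeps this sketch self-contained, and the value 123 is arbitrary.)

```c
#define _DEFAULT_SOURCE           /* for MAP_ANONYMOUS */
#include <assert.h>
#include <sys/mman.h>
#include <sys/types.h>
#include <sys/wait.h>
#include <unistd.h>

/* Sketch: parent and child exchange one int through a shared mapping.
 * Returns the value the child wrote, or -1 on failure. */
int shared_memory_demo(void) {
    int *shared = mmap(NULL, sizeof *shared, PROT_READ | PROT_WRITE,
                       MAP_SHARED | MAP_ANONYMOUS, -1, 0);
    if (shared == MAP_FAILED)
        return -1;
    *shared = 0;
    pid_t pid = fork();
    if (pid == 0) {               /* child: write into the shared region */
        *shared = 123;
        _exit(0);
    }
    waitpid(pid, NULL, 0);        /* parent: wait, then read the write   */
    int value = *shared;
    munmap(shared, sizeof *shared);
    return value;
}
```

Note that with MAP_PRIVATE instead of MAP_SHARED the child's write would not be visible to the parent, which is exactly the restriction shared memory asks the operating system to lift.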
Message-Passing Systems
 Message passing provides a mechanism to allow
processes to communicate and to synchronize their
actions without sharing the same address space.
 It is particularly useful in a distributed environment,
where the communicating processes may reside on
different computers connected by a network.
 For example, a chat program used on the World Wide
Web could be designed so that chat participants
communicate with one another by exchanging
messages.
 A message-passing facility provides at least two
operations: send(message) and receive(message).
 If processes P and Q want to communicate, they must
send messages to and receive messages from each
other; a communication link must exist between them.
 Here are several methods for logically implementing a
link and the send()/receive() operations:
 Direct or indirect communication
 Synchronous or asynchronous communication
 Buffering
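As a concrete baseline for send()/receive(), a UNIX pipe provides exactly such a communication link: the child "sends" by writing into the pipe and the parent "receives" by reading, with the kernel copying the bytes between the two address spaces (the message text "hello" is arbitrary):

```c
#include <assert.h>
#include <string.h>
#include <sys/types.h>
#include <sys/wait.h>
#include <unistd.h>

/* Sketch of message passing over a pipe. Returns the number of
 * bytes received if the expected message arrived, else -1. */
int pipe_demo(void) {
    int fds[2];
    char buf[16] = {0};
    if (pipe(fds) < 0)
        return -1;
    pid_t pid = fork();
    if (pid == 0) {                        /* child: send(message)     */
        close(fds[0]);
        const char msg[] = "hello";        /* 6 bytes including NUL    */
        write(fds[1], msg, sizeof msg);
        _exit(0);
    }
    close(fds[1]);                         /* parent: receive(message) */
    ssize_t n = read(fds[0], buf, sizeof buf);
    close(fds[0]);
    waitpid(pid, NULL, 0);
    return strcmp(buf, "hello") == 0 ? (int)n : -1;
}
```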
Direct Communication
 Processes must name each other explicitly:
 send (P, message) – send a message to process P
 receive(Q, message) – receive a message from
process Q
 Properties of communication link
 Links are established automatically
 A link is associated with exactly one pair of
communicating processes
 Between each pair there exists exactly one link
 The link may be unidirectional, but is usually bi-
directional
Indirect Communication

 Messages are sent to and received from
mailboxes
 Each mailbox has a unique id
 Processes can communicate only if they share a
mailbox
 Properties of communication link
 Link established only if processes share a
common mailbox
 A link may be associated with many processes
 Each pair of processes may share several
communication links
 Link may be unidirectional or bi-directional
Indirect Communication

 Operations
 create a new mailbox
 send and receive messages through mailbox
 destroy a mailbox
 Primitives are defined as:
send(A, message) – send a message to mailbox A
receive(A, message) – receive a message from
mailbox A
Synchronization

Communication between processes takes place through
calls to send() and receive() primitives. There are different
design options for implementing each primitive. Message
passing may be either blocking or nonblocking— also
known as synchronous and asynchronous.

 Blocking send. The sending process is blocked until the
message is received by the receiving process or by the
mailbox.
 Nonblocking send. The sending process sends the
message and resumes operation.
 Blocking receive. The receiver blocks until a message is
available.
 Nonblocking receive. The receiver retrieves either a valid
message or a null.
Buffering

Whether communication is direct or indirect, messages
exchanged by communicating processes reside in a temporary
queue. Basically, such queues can be implemented in three
ways:
 Zero capacity - The queue has a maximum length of zero; thus,
the link cannot have any messages waiting in it. In this case,
the sender must block until the recipient receives the message.
 Bounded capacity - The queue has finite length n; thus, at most
n messages can reside in it. If the queue is not full when a new
message is sent, the message is placed in the queue and the
sender can continue execution without waiting. The link's
capacity is finite, however. If the link is full, the sender must
block until space is available in the queue.
 Unbounded capacity - The queue's length is potentially infinite;
thus, any number of messages can wait in it. The sender never
blocks.
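The bounded-capacity case can be sketched as a ring buffer: try_send() reports failure at exactly the point where a real blocking sender would have to wait, and try_recv() where a blocking receiver would. This is a single-threaded illustration with invented names, not a real IPC facility:

```c
#include <assert.h>

#define CAPACITY 2  /* the bound n; tiny so that "full" is easy to hit */

/* Bounded message queue as a ring buffer of ints. */
struct msg_queue {
    int buf[CAPACITY];
    int head;    /* index of the oldest message        */
    int count;   /* number of messages currently queued */
};

/* Returns 1 on success, 0 if the queue is full (sender must wait). */
int try_send(struct msg_queue *q, int msg) {
    if (q->count == CAPACITY)
        return 0;
    q->buf[(q->head + q->count) % CAPACITY] = msg;
    q->count++;
    return 1;
}

/* Returns 1 and stores the oldest message, 0 if the queue is empty. */
int try_recv(struct msg_queue *q, int *msg) {
    if (q->count == 0)
        return 0;
    *msg = q->buf[q->head];
    q->head = (q->head + 1) % CAPACITY;
    q->count--;
    return 1;
}
```

With CAPACITY set to 0 this degenerates to the zero-capacity case, where every send must rendezvous with a receive.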
