
PROCESS MANAGEMENT
 A process is a program in execution.

 A process will need certain resources — such as CPU time, memory, files, and I/O devices — to accomplish its task. These resources are allocated to the process either when it is created or while it is executing.

 A system therefore consists of a collection of processes: operating system processes executing system code and user processes executing user code. Potentially, all these processes can execute concurrently, with the CPU (or CPUs) multiplexed among them. By switching the CPU between processes, the operating system can make the computer more productive.
 The operating system is responsible for several important aspects of process and thread management: the
creation and deletion of both user and system processes; the scheduling of processes; and the provision of
mechanisms for synchronization, communication, and deadlock handling for processes.

 A program is a passive entity, such as a file containing a list of instructions stored on disk (often called an
executable file). In contrast, a process is an active entity, with a program counter specifying the next
instruction to execute and a set of associated resources. A program becomes a process when an executable
file is loaded into memory.
 A program is a piece of code, which may be a single line or millions of lines. A computer program is usually written by a computer programmer in a programming language. For example, here is a simple program written in the C programming language:
THE STRUCTURE OF A PROCESS IN MEMORY

 The process stack contains temporary data (such as function parameters, return addresses, and local variables).

 The data section contains global variables.

 A process may also include a heap, which is memory that is dynamically allocated during process
run time.

 The text section contains the executable code of the program.


PROCESS STATE

• The state of a process is defined in part by the current activity of that process.

• As a process executes, it changes state.

• A process may be in one of the following states:

 New: The process is being created.


 Running: Instructions are being executed.
 Waiting: The process is waiting for some event to occur (such as an I/O completion or reception of a signal).
 Ready: The process is waiting to be assigned to a processor.
 Terminated: The process has finished execution.
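As a rough sketch, the five states and the legal transitions of this model can be written down in C (the `valid_transition` helper is hypothetical, for illustration only):

```c
typedef enum { NEW, READY, RUNNING, WAITING, TERMINATED } proc_state;

/* illustrative helper: is the transition legal in the five-state model? */
int valid_transition(proc_state from, proc_state to) {
    switch (from) {
    case NEW:     return to == READY;                /* admitted to the ready queue */
    case READY:   return to == RUNNING;              /* dispatched by the scheduler */
    case RUNNING: return to == READY ||              /* interrupted/preempted        */
                         to == WAITING ||            /* waits for I/O or an event    */
                         to == TERMINATED;           /* finishes execution           */
    case WAITING: return to == READY;                /* the awaited event occurred   */
    default:      return 0;                          /* TERMINATED is final          */
    }
}
```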
PROCESS CONTROL BLOCK (PCB)

 The Process Control Block (PCB) is a data structure that is maintained by the operating system for each
active process in the system.

 Each process is represented in the operating system by a process control block (PCB) — also called a task
control block.

 The PCB is crucial for the operating system to manage and control the execution of processes efficiently.
During a context switch (when the operating system switches from executing one process to another), the
contents of the PCB are used to save the state of the currently running process and load the state of the next
process to be executed. This allows the operating system to seamlessly switch between processes while
maintaining the illusion of concurrent execution.

 The PCB contains essential information about the process, such as its current state, program
counter, CPU registers, memory management information, and other relevant details. As the
process runs, the operating system updates the PCB to reflect any changes in the process's state or
execution context.

A PCB is shown in Figure 3.3. It contains many pieces of information associated with a specific process,
including these:

 Process state: The state may be new, ready, running, waiting, halted, and so on.

 Program counter: The counter indicates the address of the next instruction to be executed for this process.

 CPU registers: The contents of various CPU registers, including general-purpose registers, status registers, and other special-purpose registers. This information is crucial for saving and restoring the process's state during context switches. Along with the program counter, this state information must be saved when an interrupt occurs, to allow the process to be continued correctly afterward.

 CPU-scheduling information: This information includes a process priority, pointers to scheduling queues,
and any other scheduling parameters.

 Memory-management information: This information may include such items as the value of the base and limit registers and the page tables, or the segment tables, depending on the memory system used by the operating system.

 Accounting information: This information includes the amount of CPU and real time used, time
limits, account numbers, job or process numbers, and so on.

 I/O status information: This information includes the list of I/O devices allocated to the process, a list of open files, and so on.
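A PCB holding the fields listed above might be sketched in C as follows (field names and sizes are illustrative, not taken from any real kernel):

```c
#include <stdint.h>

#define MAX_OPEN_FILES 16

/* an illustrative PCB; real kernels use far richer structures */
struct pcb {
    int       pid;                          /* process identifier */
    int       state;                        /* new, ready, running, waiting, ... */
    uintptr_t program_counter;              /* address of the next instruction */
    uintptr_t registers[16];                /* saved general-purpose registers */
    int       priority;                     /* CPU-scheduling information */
    uintptr_t base, limit;                  /* memory-management: base/limit registers */
    long      cpu_time_used;                /* accounting information */
    int       open_files[MAX_OPEN_FILES];   /* I/O status information */
};
```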
CONTEXT SWITCH

 A context switch is the process by which the operating system saves the state of a currently running process
and restores the state of another process to allow for multitasking or time-sharing.

1. Current Process Execution:

 The CPU is executing instructions for a particular process; its execution context includes the program counter, CPU registers, and other relevant information.

2. Interrupt or Preemption:
 An event occurs that triggers a need for a context switch. This could be an interrupt (e.g., a timer interrupt
indicating that the current process's time slice is up) or a higher-priority process becoming ready to run.

3. Saving the Current Process State:


 The operating system saves the state of the currently executing process, including the contents of registers,
program counter, and other relevant information. This is typically stored in the Process Control Block (PCB)
associated with the current process.

4. Selecting a New Process:


 The operating system selects a new process to run. This could involve choosing the next process in a ready
queue based on a scheduling algorithm.

5. Restoring the New Process State:


 The operating system restores the saved state of the newly selected process. This includes updating the
program counter and loading the contents of registers with the saved values from the PCB.

6. Resuming Execution:
 The CPU resumes execution of instructions for the newly selected process from the point where it was
interrupted.

 This process repeats as the operating system manages the execution of multiple processes on the CPU.

The diagram illustrates the flow of control and data during a context switch, emphasizing the efficient transition
from one process to another.
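The save/restore core of steps 3 and 5 can be sketched in C as a plain copy between the CPU's register state and each process's PCB (a simulation for illustration, not real kernel code; the structure and field names are assumptions):

```c
/* minimal stand-ins for the CPU state and a per-process PCB */
struct context { unsigned long pc; unsigned long regs[8]; };
struct task    { int pid; struct context saved; };

void context_switch(struct task *current, struct task *next,
                    struct context *cpu) {
    current->saved = *cpu;   /* step 3: save the running process's state in its PCB */
    *cpu = next->saved;      /* step 5: restore the selected process's state */
}
```

Steps 2, 4, and 6 (the triggering interrupt, the scheduling decision, and resuming execution) happen around this copy; the copy itself is why frequent context switches carry a measurable cost.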
THREADS

 A thread refers to a single sequential flow of activities being executed in a process.

 It is also known as the thread of execution or the thread of control.

 Threads are also called lightweight processes as they possess some of the properties of processes.
Each thread belongs to exactly one process. In an operating system that supports multithreading,
the process can consist of many threads.

 Threads can run in parallel, improving application performance. Each such thread has its own CPU state and stack, but they share the address space of the process and the environment.

 A single thread of control allows the process to perform only one task at a time. For example, in a word processor, the user cannot simultaneously type characters and run the spell checker within the same process.

 If a process has multiple threads of control, it can perform more than one task at a time. Figure 4.1 illustrates
the difference between a traditional single-threaded process and a multithreaded process.

 The idea is to achieve parallelism by dividing a process into multiple threads. For example, in a
browser, multiple tabs can be different threads. MS Word uses multiple threads: one thread to
format the text, another thread to process inputs, etc.

 Multithreading is a technique used in operating systems to improve the performance and responsiveness of computer systems. Multithreading allows multiple threads (i.e., lightweight processes) to share the same resources of a single process, such as the CPU, memory, and I/O devices.

 The primary difference is that threads within the same process run in a shared memory space,
while processes run in separate memory spaces. Threads are not independent of one another like
processes are, and as a result, threads share with other threads their code section, data section, and
OS resources (like open files and signals). But, like a process, a thread has its own program
counter (PC), register set, and stack space.
PROCESS SCHEDULING

 The objective of multiprogramming is to have some process running at all times, to maximize
CPU utilization. The objective of time sharing is to switch the CPU among processes so frequently
that users can interact with each program while it is running.

 To meet these objectives, the process scheduler selects an available process (possibly from a set
of several available processes) for program execution on the CPU.
SCHEDULING QUEUES

 As processes enter the system, they are put into a job queue, which consists of all processes in the system.

 The processes that are residing in main memory and are ready and waiting to execute are kept on a list called
the ready queue.

 When a process is allocated the CPU, it executes for a while and eventually quits, is interrupted, or waits for
the occurrence of a particular event, such as the completion of an I/O request. Suppose the process makes an
I/O request to a shared device, such as a disk. Since there are many processes in the system, the disk may be
busy with the I/O request of some other process. The process therefore may have to wait for the disk. The list
of processes waiting for a particular I/O device is called a device queue. Each device has its own device
queue.

 A common representation of process scheduling is a queueing diagram, such as that in Figure 3.6. Two types
of queues are present: the ready queue and a set of device queues. The circles represent the resources that
serve the queues, and the arrows indicate the flow of processes in the system.

 A new process is initially put in the ready queue. It waits there until it is selected for execution, or dispatched.
Once the process is allocated the CPU and is executing, one of several events could occur:
• The process could issue an I/O request and then be placed in an I/O queue.
• The process could create a new child process and wait for the child’s termination.
• The process could be removed forcibly from the CPU, as a result of an interrupt, and be put back in the ready
queue.

 In the first two cases, the process eventually switches from the waiting state to the ready state and is then put
back in the ready queue. A process continues this cycle until it terminates, at which time it is removed from
all queues and has its PCB and resources deallocated.
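At its simplest, the ready queue described above is a FIFO of process identifiers; a fixed-capacity C sketch (the `rq_*` helpers are hypothetical, for illustration):

```c
#define QCAP 8

struct ready_queue { int pids[QCAP]; int head, tail, count; };

void rq_push(struct ready_queue *q, int pid) {   /* a process becomes ready */
    q->pids[q->tail] = pid;
    q->tail = (q->tail + 1) % QCAP;              /* circular buffer wrap-around */
    q->count++;
}

int rq_pop(struct ready_queue *q) {              /* the dispatcher selects the next process */
    int pid = q->pids[q->head];
    q->head = (q->head + 1) % QCAP;
    q->count--;
    return pid;
}
```

A plain FIFO corresponds to first-come, first-served dispatching; priority-based schedulers replace this with ordered or multi-level structures.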
SCHEDULERS

 A process migrates among the various scheduling queues throughout its lifetime. The operating system must
select, for scheduling purposes, processes from these queues in some fashion. The selection process is carried
out by the appropriate scheduler.

 Schedulers in an operating system are responsible for determining the order in which processes or threads are
executed on the CPU.

 The primary goal of schedulers is to manage the allocation of the CPU's processing time among competing
processes or threads to optimize system performance, ensure fairness, and provide responsive user
experiences.

There are different levels of schedulers in an operating system, including:

 The long-term scheduler, or job scheduler, selects processes from the pool of processes residing in the job
pool (secondary storage) and loads them into memory for execution. This scheduler determines which
processes are admitted to the ready queue and brought into the main memory for execution.

 The short-term scheduler, or CPU scheduler, selects from among the processes that are ready to execute and
allocates the CPU to one of them. The short-term scheduler selects which process from the ready queue will
execute next on the CPU.

 The medium-term scheduler: the key idea behind medium-term scheduling is that sometimes it can be advantageous to remove a process from memory (and from active contention for the CPU) and thus reduce the degree of multiprogramming. Later, the process can be reintroduced into memory, and its execution can be continued where it left off. This scheme is called swapping.

 Swapping may be necessary to improve the process mix or because a change in memory requirements has
overcommitted available memory, requiring memory to be freed up.

 It is important that the long-term scheduler make a careful selection. In general, most processes
can be described as either I/O bound or CPU bound.

 An I/O-bound process is one that spends more of its time doing I/O than it spends doing
computations.

 A CPU-bound process, in contrast, generates I/O requests infrequently, using more of its time
doing computations.

 It is important that the long-term scheduler select a good process mix of I/O-bound and CPU-
bound processes.

 If all processes are I/O bound, the ready queue will almost always be empty, and the short-term
scheduler will have little to do. If all processes are CPU bound, the I/O waiting queue will almost
always be empty, devices will go unused, and again the system will be unbalanced.

 The system with the best performance will thus have a combination of CPU-bound and I/O-bound
processes.

 Schedulers operate in coordination to manage the execution of processes in a way that maximizes
overall system efficiency. The goal is to minimize wait times, enhance system throughput, and
provide a fair allocation of resources to different processes

 The division into three levels of scheduling allows for a more organized and efficient approach to
managing processes, from deciding which processes to admit into the system (Long-Term
Scheduler) to handling the movement of processes in and out of memory (Medium-Term
Scheduler) and making quick decisions about CPU execution (Short-Term Scheduler). The
coordination between these levels helps in achieving the goals of effective multitasking,
responsiveness, and resource optimization.
OPERATIONS ON PROCESSES

 The processes in most systems can execute concurrently, and they may be created and deleted dynamically.
Thus, these systems must provide a mechanism for process creation and termination.

PROCESS CREATION
During the course of execution, a process may create several new processes. The creating process is called a
parent process, and the new processes are called the children of that process. Each of these new processes may
in turn create other processes, forming a tree of processes.

 Most operating systems (including UNIX, Linux, and Windows) identify processes according to a unique
process identifier (or pid), which is typically an integer number. The pid provides a unique value for each
process in the system, and it can be used as an index to access various attributes of a process within the
kernel.

 In general, when a process creates a child process, that child process will need certain resources
(CPU time, memory, files, I/O devices) to accomplish its task.

 A child process may be able to obtain its resources directly from the operating system, or it may be
constrained to a subset of the resources of the parent process.

 The parent may have to partition its resources among its children, or it may be able to share some
resources (such as memory or files) among several of its children. Restricting a child process to a
subset of the parent’s resources prevents any process from overloading the system by creating too
many child processes.

When a process creates a new process, two possibilities for execution exist:
• The parent continues to execute concurrently with its children.
• The parent waits until some or all of its children have terminated.

PROCESS TERMINATION
 A process terminates when it finishes executing its final statement and asks the operating system to delete it
by using the exit() system call.

 At that point, the process may return a status value (typically an integer) to its parent process (via the wait() system call). All the resources of the process — including physical and virtual memory, open files, and I/O buffers — are deallocated by the operating system.

 Termination can occur in other circumstances as well. A process can cause the termination of another process
via an appropriate system call (for example, TerminateProcess() in Windows). Usually, such a system call can
be invoked only by the parent of the process that is to be terminated. Otherwise, users could arbitrarily kill
each other’s jobs.
 Note that a parent needs to know the identities of its children if it is to terminate them. Thus, when one
process creates a new process, the identity of the newly created process is passed to the parent.

A parent may terminate the execution of one of its children for a variety of reasons, such as these:
• The child has exceeded its usage of some of the resources that it has been allocated. (To determine whether
this has occurred, the parent must have a mechanism to inspect the state of its children.)
• The task assigned to the child is no longer required.
• The parent is exiting, and the operating system does not allow a child to continue if its parent terminates.
INTERPROCESS COMMUNICATION

 Processes executing concurrently in the operating system may be either independent processes or
cooperating processes.

 A process is independent if it cannot affect or be affected by the other processes executing in the
system. Any process that does not share data with any other process is independent.

 A process is cooperating if it can affect or be affected by the other processes executing in the
system. Clearly, any process that shares data with other processes is a cooperating process.

There are several reasons for providing an environment that allows process cooperation:

• Information sharing: Since several users may be interested in the same piece of information (for instance, a
shared file), we must provide an environment to allow concurrent access to such information.

• Computation speedup: If we want a particular task to run faster, we must break it into subtasks, each of
which will be executing in parallel with the others. Notice that such a speedup can be achieved only if the
computer has multiple processing cores.

• Modularity: We may want to construct the system in a modular fashion, dividing the system functions into
separate processes or threads.

• Convenience: Even an individual user may work on many tasks at the same time. For instance, a user may be
editing, listening to music, and compiling in parallel.

Cooperating processes require an interprocess communication (IPC) mechanism that will allow them to
exchange data and information. There are two fundamental models of interprocess communication: shared
memory and message passing.

 In the shared-memory model, a region of memory that is shared by cooperating processes is established. Processes can then exchange information by reading and writing data to the shared region.

 In the message-passing model, communication takes place by means of messages exchanged between the cooperating processes. Messages can be sent and received through various mechanisms, including direct function calls, sockets, or message queues. Message passing provides a more loosely coupled form of communication compared to shared memory, as processes don't directly access each other's memory.
