Operating Systems Processes Lec3
E-OGODO (MRS)
What is a Process?
Process Components
Process States
Process Control Block (PCB)
Process Scheduling Algorithms
An operating system process is an instance of a
computer program that is being executed by the
operating system (OS).
It is the basic unit of execution in a computer system
managed by the operating system. Each process
represents a single task or program running on a
computer.
A process is basically a program in execution. The
execution of a process must progress in a sequential
fashion.
For example, when we write a program in C or C++
and compile it, the compiler creates binary code. The
original code and binary code are both programs.
When we actually run the binary code, it becomes a
process.
A single program can create many processes
when run multiple times; for example, when we
open a .exe or binary file multiple times, multiple
instances begin (multiple processes are created).
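On POSIX systems this can be seen directly. The sketch below (assuming a Unix-like OS; the function name is illustrative) uses fork() to turn one running program into two processes, each with its own PID:

```c
#include <stdio.h>
#include <sys/types.h>
#include <sys/wait.h>
#include <unistd.h>

/* One program, two processes: after fork() both the parent and the
 * child run this same binary, each under its own PID. POSIX-only sketch. */
pid_t spawn_instance(void) {
    pid_t pid = fork();              /* create a second process */
    if (pid == 0) {                  /* child branch */
        printf("child  PID=%d\n", getpid());
        _exit(0);                    /* child terminates here */
    }
    if (pid > 0)
        waitpid(pid, NULL, 0);       /* parent waits for the child */
    return pid;                      /* child's PID, or -1 on failure */
}
```

Both processes continue from the same point in the same binary; only the return value of fork() tells them apart.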
To put it in simple terms, we write our computer
programs in a text file and when we execute this
program, it becomes a process which performs
all the tasks mentioned in the program.
When a program is loaded into memory and becomes a process, its address space can be divided into four sections: stack, heap, text, and data.
Stack: The process Stack
contains the temporary data
such as method/function
parameters, return address
and local variables.
Heap: This is dynamically
allocated memory to a
process during its run time.
Text: This section contains
the compiled program code,
i.e., the executable
instructions loaded from the
program file. The Program
Counter points into this
section at the next
instruction to be executed.
Data: This section contains
the global and static
variables.
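A small C sketch of where each kind of variable lives, using the section names from the list above (variable and function names are illustrative):

```c
#include <stdlib.h>

int global_counter = 0;             /* data section: global variable */
static int hits;                    /* data section: static variable */

int add(int a, int b) {             /* machine code lives in the text section */
    int sum = a + b;                /* stack: parameters and local variables  */
    return sum;
}

int *make_buffer(size_t n) {
    return malloc(n * sizeof(int)); /* heap: allocated dynamically at run time */
}
```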
When a process executes, it passes through
different states. These stages may differ in
different operating systems, and the names
of these states are also not standardized.
In general, a process can have one of the
following five states at a time. They include:
Start/New
Ready
Running
Waiting/Blocked
Terminated or Exited
New/Start: This is the initial state when a process is being created. The
operating system is preparing the process to be ready for execution,
allocating necessary resources like memory, and setting up initial
parameters.
Ready: In this state, the process is ready to execute and waiting for the
CPU to be assigned for execution. It has all the necessary resources but
is waiting in the queue for its turn to be executed.
Running: The process is currently being executed on the CPU. Only one
process can be in the running state on a single processor system at any
given time. On a multi-core/multi-processor system, multiple processes
can be in the running state concurrently.
Blocked (or Waiting): A process transitions to this state when it's waiting
for a particular event or resource, such as user input, I/O operation
completion, or an external signal. While in this state, the process is not
using the CPU and remains in a blocked state until the event it's waiting
for occurs.
Terminated (or Exit): When a process finishes its execution or is
explicitly terminated by the operating system or by itself, it enters this
state. Resources allocated to the process are released, and it is removed
from the system's process table.
Additionally, some operating systems might have
variations or additional states based on their design
or specific functionalities.
For example, some systems have a "Suspended"
state in which a process is temporarily inactive but
not fully terminated, allowing it to be resumed later
without losing its current state.
The transitions between these states are managed by
the operating system's scheduler and various system
calls, ensuring efficient utilization of system
resources and proper execution of processes.
Processes can move between these states based on
events like interrupts, I/O completion, resource
availability, or process scheduling decisions made by
the OS.
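The five states and a few of the legal transitions between them can be sketched as a C enum (the state and function names are illustrative, not taken from any real kernel):

```c
/* The five lifecycle states above, as a C enum. */
typedef enum { NEW, READY, RUNNING, WAITING, TERMINATED } proc_state;

/* One transition per event; each returns the state unchanged for
 * transitions the state model does not allow. */
proc_state on_dispatch(proc_state s)  { return s == READY   ? RUNNING    : s; }
proc_state on_io_wait(proc_state s)   { return s == RUNNING ? WAITING    : s; }
proc_state on_io_done(proc_state s)   { return s == WAITING ? READY      : s; }
proc_state on_exit_call(proc_state s) { return s == RUNNING ? TERMINATED : s; }
```

Note that a waiting process cannot be dispatched directly: I/O completion first moves it back to the ready queue.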
A Process Control Block is a data structure
maintained by the Operating System for every
process.
The PCB is identified by an integer process ID
(PID). A PCB keeps all the information needed to
keep track of a process.
A Process Control Block (PCB) is a data structure
used by operating systems to store and manage
information about a process in a system.
Each process in an operating system has an
associated PCB, which contains various pieces of
information needed by the operating system to
manage the process effectively. PCBs are created
and maintained by the operating system kernel.
Process State: The current state of the process, e.g.,
new, ready, running, waiting, or terminated.
Process privileges: This is required to allow/disallow
access to system resources.
Process ID: A unique identifier for each process in
the operating system.
Pointer: A pointer to the parent process.
Program Counter: Program Counter is a pointer to the
address of the next instruction to be executed for
this process.
CPU registers: The contents of the CPU registers,
which must be saved when the process leaves the
running state and restored when it is dispatched again.
CPU Scheduling Information: Process priority and
other scheduling information which is required to
schedule the process.
Memory management information: This includes
page-table or segment-table information and
memory limits, depending on the memory system
used by the operating system.
Accounting information: This includes the
amount of CPU time used by the process, time
limits, process numbers, etc.
I/O status information: This includes a list of I/O
devices allocated to the process.
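The fields above can be collected into an illustrative C struct. This is only a sketch: real kernels (for example, Linux's task_struct) keep many more fields, and the exact layout is entirely OS-specific.

```c
#include <stdint.h>

enum pstate { P_NEW, P_READY, P_RUNNING, P_WAITING, P_TERMINATED };

/* Hypothetical PCB layout mirroring the list of fields above. */
struct pcb {
    int           pid;             /* unique process ID                   */
    int           parent_pid;      /* pointer/ID of the parent process    */
    enum pstate   state;           /* current process state               */
    uintptr_t     program_counter; /* address of the next instruction     */
    uintptr_t     registers[16];   /* saved CPU registers (count varies)  */
    int           priority;        /* CPU-scheduling information          */
    uintptr_t     page_table_base; /* memory-management information       */
    unsigned long cpu_time_used;   /* accounting information              */
    int           open_io_devices; /* I/O status information (simplified) */
};
```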
The architecture of a
PCB is completely
dependent on
Operating System and
may contain different
information in different
operating systems.
The PCB is maintained
for a process
throughout its lifetime,
and is deleted once the
process terminates.
Process management refers to the activities involved
in managing the execution of multiple processes in
an operating system. It includes creating, scheduling,
and terminating processes, as well as allocating
system resources such as CPU time, memory, and I/O
devices.
The operating system manages processes by carrying
out any of the following activities.
Scheduling processes and threads on the CPUs.
Creating and deleting both user and system
processes.
Suspending and resuming processes.
Providing mechanisms for process synchronization.
Providing mechanisms for process communication.
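Two of these activities, creating and terminating processes, are visible from user space on POSIX systems. This sketch (function name illustrative) creates a child with fork(), lets the child terminate itself, and blocks the parent in waitpid() until the OS reports the child's exit status:

```c
#include <stdlib.h>
#include <sys/types.h>
#include <sys/wait.h>
#include <unistd.h>

/* POSIX sketch: fork() creates a process, exit() terminates it,
 * and waitpid() lets the parent collect its exit status. */
int run_child_and_collect(void) {
    pid_t pid = fork();
    if (pid == 0)
        exit(42);                  /* child terminates itself           */
    int status = 0;
    waitpid(pid, &status, 0);      /* parent blocks until child exits   */
    return WIFEXITED(status) ? WEXITSTATUS(status) : -1;
}
```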
Process scheduling algorithms are used by operating systems
to determine the order in which processes are executed on
the CPU. These algorithms aim to optimize CPU utilization,
increase system throughput, minimize waiting times, and
ensure fairness among processes.
Each scheduling algorithm has its trade-offs and is suitable
for different scenarios or system requirements. Operating
systems may use a combination of these algorithms, or
variations on them, to achieve better performance. Some
common algorithms include:
First-Come, First-Served (FCFS)
Shortest Job Next (SJN) or Shortest Job First (SJF) Scheduling
Priority Scheduling
Round Robin (RR) Scheduling
Multilevel Queue Scheduling
Burst Time: This is the total time taken by the process for its execution
on the CPU.
Arrival Time: This is the time when a process enters into the ready state
and is ready for its execution.
Response time: This is the time a process spends in the ready
state before it gets the CPU for the first time.
Waiting time: This is the total time spent by the process in the ready
state waiting for CPU.
Turnaround time: This is the total amount of time from when the process
first enters the ready state to its completion.
Turnaround time = Burst time + Waiting time
or
Turnaround time = Exit time - Arrival time
Throughput: This is a way to measure the efficiency of a CPU. It can be
defined as the number of processes completed by the CPU in a given
amount of time. For example, suppose process P1 takes 3 seconds to
execute, P2 takes 5 seconds, and P3 takes 10 seconds. The three
processes complete in 3 + 5 + 10 = 18 seconds, so the throughput is
3/18 ≈ 0.17 processes per second, i.e., one process completed every
6 seconds on average.
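The timing formulas above translate directly into code. A minimal C sketch, assuming all times are in the same unit (function names are illustrative):

```c
/* Turnaround time = Exit time - Arrival time */
int turnaround_time(int exit_time, int arrival_time) {
    return exit_time - arrival_time;
}

/* Turnaround time = Burst time + Waiting time, rearranged. */
int waiting_time(int turnaround, int burst) {
    return turnaround - burst;
}
```

For example, a process arriving at t = 2 and exiting at t = 10 with a burst of 5 has a turnaround time of 8 and a waiting time of 3.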
Processes are executed in the order they arrive in
the ready queue. It's a non-preemptive algorithm
where the CPU is allocated to the first process in
the queue until it completes its execution.
Simple to implement, but a long process at the head of
the queue forces every shorter process behind it to
wait, the "convoy effect", which can lead to poor
average waiting times.
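A minimal C sketch of FCFS waiting times, under the simplifying assumption that all processes arrive at time 0:

```c
/* Non-preemptive FCFS: processes run in arrival order, so each
 * process waits for the sum of all bursts queued before it.
 * Assumes all processes arrive at time 0. */
void fcfs_waiting_times(const int burst[], int wait[], int n) {
    wait[0] = 0;
    for (int i = 1; i < n; i++)
        wait[i] = wait[i - 1] + burst[i - 1];
}
```

With bursts {24, 3, 3}, the two short jobs wait 24 and 27 units behind the long one, illustrating the convoy effect; the average wait is (0 + 24 + 27) / 3 = 17.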
Selects the process with the shortest execution
time next. This can minimize average waiting time
since shorter processes are executed first.
Requires knowledge of the execution time of each
process, which may not be known in advance in
practical scenarios.
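Under the simplifying assumptions that all jobs arrive at time 0 and burst times are known in advance, non-preemptive SJF is just FCFS after sorting by burst time, as this C sketch shows:

```c
#include <stdlib.h>

static int cmp_int(const void *a, const void *b) {
    return *(const int *)a - *(const int *)b;
}

/* Non-preemptive SJF, all jobs arriving at time 0: sort bursts
 * ascending (shortest job first), then accumulate waits as in FCFS. */
void sjf_waiting_times(int burst[], int wait[], int n) {
    qsort(burst, n, sizeof(int), cmp_int);
    wait[0] = 0;
    for (int i = 1; i < n; i++)
        wait[i] = wait[i - 1] + burst[i - 1];
}
```

The same bursts {24, 3, 3} now yield waits of 0, 3, and 6, an average of 3 instead of FCFS's 17.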
Assigns priorities to processes and selects
the highest priority process for execution.
Can be either preemptive (a newly arrived
process with a higher priority than the
running process preempts it) or non-
preemptive (the running process keeps the
CPU until it completes or blocks, even if a
higher-priority process arrives in the
meantime).
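The non-preemptive variant can be sketched as repeatedly picking the highest-priority ready process. Here a smaller number means a higher priority, a common but not universal convention:

```c
/* Non-preemptive priority scheduling sketch: scan the ready set and
 * pick the unfinished process with the smallest priority number. */
int pick_next(const int priority[], const int done[], int n) {
    int best = -1;
    for (int i = 0; i < n; i++)
        if (!done[i] && (best == -1 || priority[i] < priority[best]))
            best = i;
    return best;                   /* -1 if no process is ready */
}
```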
Allocates a fixed time slice (quantum) to each
process in a cyclic manner. If a process
doesn't finish within its time slice, it's placed
back in the ready queue, and the CPU is
assigned to the next process in line.
Provides fair allocation of CPU time but may
lead to higher context switch overhead with
smaller time slices and longer waiting times
with larger time slices.
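A C sketch of round robin for processes that all arrive at time 0, recording the virtual time at which each process finishes (the array bound and names are illustrative):

```c
/* Round robin with a fixed quantum: repeatedly give each unfinished
 * process up to `quantum` units until all bursts are exhausted.
 * Assumes all processes arrive at time 0 and n <= 64. */
void rr_completion_times(const int burst[], int finish[], int n, int quantum) {
    int remaining[64];
    int t = 0, left = n;
    for (int i = 0; i < n; i++) remaining[i] = burst[i];
    while (left > 0) {
        for (int i = 0; i < n; i++) {
            if (remaining[i] <= 0) continue;
            int slice = remaining[i] < quantum ? remaining[i] : quantum;
            t += slice;                       /* process i runs its slice */
            remaining[i] -= slice;
            if (remaining[i] == 0) {          /* process i is done */
                finish[i] = t;
                left--;
            }
        }
    }
}
```

With bursts {5, 3} and a quantum of 2, the CPU alternates 2-unit slices: P2 finishes at time 7 and P1 at time 8.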
Divides the ready queue into multiple queues
based on process characteristics (e.g.,
priority, process type). Each queue might
have its own scheduling algorithm.
Processes are dispatched based on their
queue and priority level.
A process is basically a program in execution.
The execution of a process must progress in a
sequential fashion.
In general, a process can have one of the
following five states at a time.
A Process Control Block is a data structure
maintained by the Operating System for every
process.
Each scheduling algorithm has its trade-offs and
is suitable for different scenarios or system
requirements. Operating systems may use a
combination of these algorithms or variations to
achieve better overall system performance.