Os Unit 2
02/08/2022
Roadmap (Process Management)
• Scheduling:
preemptive and non-preemptive scheduling; scheduling policies
• Concurrency:
mutual exclusion; deadlock detection and prevention; solution strategies;
models and mechanisms (semaphores, monitors, condition variables, rendezvous);
producer-consumer problems; synchronization; multiprocessor issues
Process Concept
• An operating system executes a variety of programs:
— Batch systems – jobs
— Time-shared systems – user programs or tasks
• The terms job and process are used almost interchangeably.
• Process – a program in execution; process
execution must progress in sequential fashion.
• A process includes:
— program counter
— stack
— data section
What is a “process” ?
• A program in execution
• An instance of a program running on a
computer
• The entity that can be assigned to and
executed on a processor
• A unit of activity characterized by the execution of a sequence of instructions, a current state, and an associated set of system resources
Process State
• As a process executes, it changes state
—new: The process is being created.
—running: Instructions are being executed.
—waiting: The process is waiting for some
event to occur.
—ready: The process is waiting to be assigned to a processor.
—terminated: The process has finished
execution.
Diagram of Process State
Two-State Process Model
• Process may be in one of two states
— Running
— Not-running
Queuing Diagram
• Shared data
#define BUFFER_SIZE 10
typedef struct {
    ...
} item;
item buffer[BUFFER_SIZE];
int in = 0;
int out = 0;
• Solution is correct, but can only use
BUFFER_SIZE-1 elements
Bounded-Buffer – Producer Process
item nextProduced;
while (1) {
    /* produce an item in nextProduced */
    while (((in + 1) % BUFFER_SIZE) == out)
        ; /* do nothing: buffer full */
    buffer[in] = nextProduced;
    in = (in + 1) % BUFFER_SIZE;
}
Bounded-Buffer – Consumer Process
item nextConsumed;
while (1) {
    while (in == out)
        ; /* do nothing: buffer empty */
    nextConsumed = buffer[out];
    out = (out + 1) % BUFFER_SIZE;
    /* consume the item in nextConsumed */
}
Interprocess Communication (IPC)
• Mechanism for processes to communicate and to synchronize
their actions.
• Message system – processes communicate with each other
without resorting to shared variables.
• IPC facility provides two operations:
—send(message) – message size fixed or variable
—receive(message)
• If P and Q wish to communicate, they need to:
—establish a communication link between them
—exchange messages via send/receive
• Implementation of communication link
—physical (e.g., shared memory, hardware bus)
—logical (e.g., logical properties)
Implementation Questions
• How are links established?
• Can a link be associated with more than
two processes?
• How many links can there be between
every pair of communicating processes?
• What is the capacity of a link?
• Is the size of a message that the link can
accommodate fixed or variable?
• Is a link unidirectional or bi-directional?
Addressing
• Operations
—create a new mailbox
—send and receive messages through
mailbox
—destroy a mailbox
• Primitives are defined as:
—send(A, message) – send a message to mailbox A
—receive(A, message) – receive a message from mailbox A
Indirect Communication
• Mailbox sharing
—P1, P2, and P3 share mailbox A.
—P1 sends; P2 and P3 receive.
—Who gets the message?
• Solutions
—Allow a link to be associated with at most two
processes.
—Allow only one process at a time to execute a
receive operation.
—Allow the system to select the receiver arbitrarily (either P2 or P3). The sender is notified who the receiver was.
Indirect Process Communication
General Message Format
Synchronization
• Message passing may be either blocking
or non-blocking.
• Blocking is considered synchronous
• Non-blocking is considered
asynchronous
• send and receive primitives may be
either blocking or non-blocking.
Synchronization (contd.)
• Blocking send: the sending process is blocked until the message is received by the receiving process or by the mailbox.
• Non-blocking send: the sending process sends the message and resumes operation.
• Blocking receive: the receiver blocks until a message is available.
• Non-blocking receive: the receiver retrieves either a valid message or null.
Buffering
• Queue of messages attached to the link;
implemented in one of three ways.
1. Zero capacity – 0 messages
Sender must wait for receiver (rendezvous).
2. Bounded capacity – finite length of n
messages
Sender must wait if link full.
3. Unbounded capacity – infinite length
Sender never waits.
Client-Server Communication
• Sockets
• Remote Procedure Calls
• Remote Method Invocation (Java)
Sockets
• A socket is defined as an endpoint for
communication.
• Concatenation of IP address and port
• The socket 161.25.19.8:1625 refers to
port 1625 on host 161.25.19.8
• Communication takes place between a pair of sockets.
Socket Communication
Processes and Threads
• The unit of dispatching is referred to as a
thread or lightweight process
• The unit of resource ownership is referred
to as a process or task
Multithreading
• The ability of an OS to support multiple, concurrent paths of execution within a single process.
Single-Thread Approaches
• MS-DOS supports a single user process and a single thread.
• Some UNIX variants support multiple user processes but only one thread per process.
Multithreading
• The Java run-time environment is a single process with multiple threads.
• Multiple processes and threads are found in Windows, Solaris, and many modern versions of UNIX.
One or More Threads in Process
• Resource Sharing
• Economy
• Utilization of MP Architectures
User-Level Threads
• Thread management is done by a user-level threads library.
• The library supports thread creation, destruction, scheduling, and management with no support from the kernel.
• User-level threads are fast to create and manage.
• Examples
- POSIX Pthreads
- Mach C-threads
- Solaris threads
User-Level Threads
• All thread management is done by the application.
• The kernel is not aware of the existence of threads.
Kernel-Level Threads (OS)
• Supported by the kernel.
• Slower to create and manage than user-level threads.
• Examples
- Windows 95/98/NT/2000
- Solaris
- Tru64 UNIX
- BeOS
- Linux
Kernel-Level Threads
• Example is Solaris
Multithreading Models
• Many-to-One
• One-to-One
• Many-to-Many
Many-to-One
• Many user-level threads mapped to single
kernel thread.
(Figure: multithreaded server – a client with N threads; thread 2 makes requests to the server, thread 1 generates results; the server handles receipt and input-output queuing.)
Threading Issues
• Semantics of fork() and exec() system
calls.
• Thread cancellation.
• Signal handling
• Thread pools
• Thread specific data
Pthreads
• a POSIX standard (IEEE 1003.1c) API for
thread creation and synchronization.
• The API specifies the behavior of the thread library; the implementation is up to the developers of the library.
• Common in UNIX operating systems.
Solaris 2 Threads
Solaris Process
Windows 2000 Threads
• Implements the one-to-one mapping.
• Each thread contains
- a thread id
- register set
- separate user and kernel stacks
- private data storage area
Linux Threads
• Linux refers to them as tasks rather than
threads.
• Thread creation is done through the clone() system call.
• clone() allows a child task to share the address space of the parent task (process).
Java Threads
• Java threads may be created by:
— Extending the Thread class
— Implementing the Runnable interface
FCFS Scheduling
Example:
Process   Burst Time
P1        24
P2        3
P3        3
• Suppose that the processes arrive in the order: P1, P2, P3.
The Gantt chart for the schedule is:
| P1 | P2 | P3 |
0    24   27   30
• Waiting time for P1 = 0; P2 = 24; P3 = 27
• Average waiting time: (0 + 24 + 27)/3 = 17
FCFS Scheduling (Cont.)
Suppose that the processes arrive in the order: P2, P3, P1.
• The Gantt chart for the schedule is:
| P2 | P3 | P1 |
0    3    6    30
• Waiting time for P1 = 6; P2 = 0; P3 = 3
• Average waiting time: (6 + 0 + 3)/3 = 3
• Much better than the previous case.
• Convoy effect: short processes stuck behind a long process.
EXAMPLE - 1
Arrival Time   Process   Burst Time
0              P1        5
0              P2        10
0              P3        8
0              P4        3
Gantt Chart:
| P1 | P2 | P3 | P4 |
0    5    15   23   26
Waiting Time (start − arrival):
P1 => 0 – 0 = 0
P2 => 5 – 0 = 5
P3 => 15 – 0 = 15
P4 => 23 – 0 = 23
Average waiting time = (0 + 5 + 15 + 23)/4 = 10.75
Completion Time: P1 = 5, P2 = 15, P3 = 23, P4 = 26
Short jobs have to wait a long time when the CPU is allocated to long jobs first.
EXAMPLE - 2
Arrival Time   Process   Burst Time
0              P1        3
2              P2        6
4              P3        4
6              P4        5
8              P5        2
Gantt Chart:
| P1 | P2 | P3 | P4 | P5 |
0    3    9    13   18   20
Shortest-Job-First (SJF) Scheduling
• Associate with each process the length of its next CPU burst. Use these lengths to schedule the process with the shortest time.
• Two schemes:
—nonpreemptive – once the CPU is given to the process, it cannot be preempted until it completes its CPU burst.
—preemptive – if a new process arrives with a CPU burst length less than the remaining time of the currently executing process, preempt. This scheme is known as Shortest-Remaining-Time-First (SRTF).
• SJF is optimal – gives the minimum average waiting time for a given set of processes.
EXAMPLE - 1
Arrival Time   Process   Burst Time
0              P1        5
0              P2        10
0              P3        8
0              P4        3   (shortest CPU burst)
Gantt Chart:
| P4 | P1 | P3 | P2 |
0    3    8    16   26
Waiting Time (start − arrival):
P4 => 0 – 0 = 0
P1 => 3 – 0 = 3
P3 => 8 – 0 = 8
P2 => 16 – 0 = 16
Average waiting time = (0 + 3 + 8 + 16)/4 = 6.75
Completion Time: P4 = 3, P1 = 8, P3 = 16, P2 = 26
Pros: minimum average waiting time for a given set of processes.
Cons: long jobs can starve; the length of the next CPU burst must be estimated.
Non-preemptive SJF with staggered arrivals (P1 burst 7 at t=0, P2 burst 4 at t=2, P3 burst 1 at t=4, P4 burst 4 at t=5):
| P1 | P3 | P2 | P4 |
0    7    8    12   16
P1's waiting time = 0
P2's waiting time = 6
P3's waiting time = 3
P4's waiting time = 7
Shortest Remaining Time First [SRTF]
The scheduler compares the remaining time of the executing process with that of each newly arrived process, and always selects the process with the shortest remaining time.
EXAMPLE 1: (preemptive)
Process   Arrival Time   Burst Time
P1        0              8
P2        1              4
P3        2              9
P4        3              5
Example 1 (contd.)
Gantt Chart:
| P1 | P2 | P4 | P1 | P3 |
0    1    5    10   17   26
EXAMPLE 2:
Arrival Time   Process   Burst Time
0              P1        3
2              P2        6
4              P3        4
6              P4        5
8              P5        2
Gantt Chart:
| P1 | P2 | P3 | P5 | P2 | P4 |
0    3    4    8    10   15   20
Determining Length of Next CPU Burst
• Can only estimate the length – done using the lengths of previous CPU bursts, via exponential averaging:
— t_n = actual length of the nth CPU burst
— τ_{n+1} = predicted value for the next CPU burst
— α, 0 ≤ α ≤ 1
— Define: τ_{n+1} = α·t_n + (1 − α)·τ_n
Prediction of the Length of the Next CPU Burst
Examples of Exponential Averaging
• α = 0
—τ_{n+1} = τ_n
—Recent history does not count.
• α = 1
—τ_{n+1} = t_n
—Only the actual last CPU burst counts.
• If we expand the formula, we get:
τ_{n+1} = α·t_n + (1 − α)·α·t_{n−1} + … + (1 − α)^j·α·t_{n−j} + … + (1 − α)^{n+1}·τ_0
• Since both α and (1 − α) are less than or equal to 1, each successive term has less weight than its predecessor.
Priority Scheduling
• A priority number (integer) is associated with each process
• The CPU is allocated to the process with the highest priority (smallest integer = highest priority).
—Preemptive
—Nonpreemptive
• SJF is priority scheduling where the priority is the predicted next CPU burst time.
• Problem: starvation – low-priority processes may never execute.
• Solution: aging – as time progresses, increase the priority of the process.
Priority Scheduling
The scheduler always picks the highest-priority process from the ready queue for execution.
Priority scheduling can be either preemptive or non-preemptive.
Priority   Process   Burst Time
3          P1        5
2          P2        10
4          P3        8
1          P4        3
Gantt Chart (1 = highest priority):
| P4 | P2 | P1 | P3 |
0    3    13   18   26
Round Robin (RR)
• Each process gets a small unit of CPU time (time quantum),
usually 10-100 milliseconds. After this time has elapsed, the
process is preempted and added to the end of the ready queue.
• If there are n processes in the ready queue and the time
quantum is q, then each process gets 1/n of the CPU time in
chunks of at most q time units at once. No process waits more
than (n-1)q time units.
• Performance
—q large ⇒ behaves like FIFO
—q small ⇒ q must still be large with respect to the context-switch time, otherwise overhead is too high.
EXAMPLE 1
Process   Burst Time
P1        24
P2        3
P3        3
Time Quantum = 4 ms
Gantt Chart:
| P1 | P2 | P3 | P1 | P1 | P1 | P1 | P1 |
0    4    7    10   14   18   22   26   30
Example 2:
Process   Arrival Time   Service Time
1         0              3
2         2              6
3         4              4
4         6              5
5         8              2
TAT = CT − AT;  WT = TAT − BT
(CT = completion time, AT = arrival time, BT = burst/service time)
Example 3 of RR with Time Quantum = 20
P1 P2 P3 P4 P1 P3 P4 P1 P3 P3
• Shared data
#define BUFFER_SIZE 10
typedef struct {
...
} item;
item buffer[BUFFER_SIZE];
int in = 0;
int out = 0;
int counter = 0;
Bounded-Buffer
• Producer process
item nextProduced;
while (1) {
while (counter == BUFFER_SIZE)
; /* do nothing */
buffer[in] = nextProduced;
in = (in + 1) % BUFFER_SIZE;
counter++;
}
Bounded-Buffer
• Consumer process
item nextConsumed;
while (1) {
while (counter == 0)
; /* do nothing */
nextConsumed = buffer[out];
out = (out + 1) % BUFFER_SIZE;
counter--;
}
Bounded Buffer
• The statements
counter++;
counter--;
must be performed atomically. An atomic operation completes in its entirety without interruption; otherwise interleaved execution of the producer and consumer can leave counter with an incorrect value.
Algorithm 2
• Shared variables
—boolean flag[2]; initially flag[0] = flag[1] = false.
—flag[i] = true ⇒ Pi ready to enter its critical section
• Process Pi (where j = 1 − i is the other process)
do {
    flag[i] = true;
    while (flag[j])
        ;
        critical section
    flag[i] = false;
        remainder section
} while (1);
• Satisfies mutual exclusion, but not the progress requirement.
Algorithm 3 (Peterson’s)
Applications:
Bakeries, ice-cream stores, deli counters, motor registries, etc.
Bakery Algorithm
• Notation: < is lexicographical order on (ticket #, process id #)
— (a,b) < (c,d) if a < c, or if a = c and b < d
— max(a0, …, an−1) is a number k such that k ≥ ai for i = 0, …, n − 1
• Shared data
boolean choosing[n];
int number[n];
Data structures are initialized to false and 0 respectively
Bakery Algorithm
do {
    choosing[i] = true;
    number[i] = max(number[0], number[1], …, number[n − 1]) + 1;
    choosing[i] = false;
    for (j = 0; j < n; j++) {
        while (choosing[j])
            ;
        while ((number[j] != 0) &&
               ((number[j], j) < (number[i], i)))
            ;
    }
        critical section
    number[i] = 0;
        remainder section
} while (1);
Synchronization Hardware
• Test and modify the content of a word atomically.
boolean TestAndSet(boolean &target) {
boolean rv = target;
target = true;
return rv;
}
Mutual Exclusion with Test-and-Set
• Shared data:
boolean lock = false;
• Process Pi
do {
    while (TestAndSet(lock))
        ;
        critical section
    lock = false;
        remainder section
} while (1);
Synchronization Hardware
• Atomically swap two variables:
void Swap(boolean &a, boolean &b) {
    boolean temp = a;
    a = b;
    b = temp;
}
• Process Pi
do {
    key = true;
    while (key == true)
        Swap(lock, key);
        critical section
    lock = false;
        remainder section
} while (1);
Hardware Mutual Exclusion: Advantages
Semaphore
• Semaphore:
— An integer value used for signalling among processes.
• Only three operations may be performed on a semaphore, all of which are atomic:
— initialize
— decrement (semWait)
— increment (semSignal)
Critical Section of n Processes
• Shared data:
semaphore mutex;   // initially mutex = 1
• Process Pi:
do {
    wait(mutex);     // P(S): while (S <= 0) do no-op; S--;
        critical section
    signal(mutex);   // V(S): S++;
        remainder section
} while (1);
• Semaphore with a waiting queue (S.value is the count, S.L the list of blocked processes):
wait(S):
    S.value--;
    if (S.value < 0) {
        add this process to S.L;
        block();
    }
signal(S):
    S.value++;
    if (S.value <= 0) {
        remove a process P from S.L;
        wakeup(P);
    }
Semaphore as a General Synchronization Tool
• Data structures:
binary-semaphore S1, S2;
int C;
• Initialization:
S1 = 1
S2 = 0
C = initial value of semaphore S
Implementing S
• wait operation
wait(S1);
C--;
if (C < 0) {
signal(S1);
wait(S2);
}
signal(S1);
• signal operation
wait(S1);
C ++;
if (C <= 0)
signal(S2);
else
signal(S1);
Classical Problems of Synchronization
• Bounded-Buffer Problem
• Readers-Writers Problem
• Dining-Philosophers Problem
Bounded-Buffer Problem
• Shared data
semaphore full, empty, mutex;
Initially: full = 0, empty = n, mutex = 1
• Producer process
do {
…
produce an item in nextp
…
wait(empty);
wait(mutex);
…
add nextp to buffer
…
signal(mutex);
signal(full);
} while (1);
Bounded-Buffer Problem Consumer Process
do {
wait(full)
wait(mutex);
…
remove an item from buffer to nextc
…
signal(mutex);
signal(empty);
…
consume the item in nextc
…
} while (1);
Readers-Writers Problem
• Shared data
semaphore mutex, wrt;
int readcount;
Initially: mutex = 1, wrt = 1, readcount = 0
• Writer process:
wait(wrt);
…
writing is performed
…
signal(wrt);
• Reader process:
wait(mutex);
readcount++;
if (readcount == 1)
wait(wrt);
signal(mutex);
…
reading is performed
…
wait(mutex);
readcount--;
if (readcount == 0)
signal(wrt);
signal(mutex);
Dining-Philosophers Problem
• Shared data
semaphore chopstick[5];
Initially all values are 1
Dining Philosopher’s Problem (Dijkstra ’71)
Dining Philosophers…
• Philosophers eat/think
• Eating needs 2 forks
• Pick one fork at a time
• How to prevent deadlock
Dining-Philosophers Problem
• Philosopher i:
do {
    wait(chopstick[i]);
    wait(chopstick[(i+1) % 5]);
    …
    eat
    …
    signal(chopstick[i]);
    signal(chopstick[(i+1) % 5]);
    …
    think
    …
} while (1);
Bounded Buffer
struct buffer {
    int pool[n];
    int count, in, out;
};
Bounded Buffer Producer Process
• Producer process inserts nextp into the shared
buffer
monitor monitor-name
{
shared variable declarations
procedure body P1 (…) {
...
}
procedure body P2 (…) {
...
}
procedure body Pn (…) {
...
}
{
initialization code
}
}
Monitors
• To allow a process to wait within the monitor, a condition
variable must be declared, as
condition x, y;
• Condition variable can only be used with the operations wait and
signal.
— The operation
x.wait();
means that the process invoking this operation is suspended until
another process invokes
x.signal();
— The x.signal operation resumes exactly one suspended process. If
no process is suspended, then the signal operation has no effect.
Schematic View of a Monitor
Monitor With Condition Variables
Dining Philosophers Example
monitor dp
{
enum {thinking, hungry, eating} state[5];
condition self[5];
void pickup(int i) // following slides
void putdown(int i) // following slides
void test(int i) // following slides
void init() {
for (int i = 0; i < 5; i++)
state[i] = thinking;
}
}
Dining Philosophers
void pickup(int i) {
    state[i] = hungry;
    test(i);
    if (state[i] != eating)
        self[i].wait();
}
void putdown(int i) {
state[i] = thinking;
// test left and right neighbors
test((i+4) % 5);
test((i+1) % 5);
}
Dining Philosophers
void test(int i) {
    if ((state[(i + 4) % 5] != eating) &&
        (state[i] == hungry) &&
        (state[(i + 1) % 5] != eating)) {
        state[i] = eating;
        self[i].signal();
    }
}
Monitor Implementation Using Semaphores
• Variables
semaphore mutex;   // (initially = 1)
semaphore next;    // (initially = 0)
int next-count = 0;
• For each condition variable x:
semaphore x-sem;   // (initially = 0)
int x-count = 0;
• The operation x.wait can be implemented as:
x-count++;
if (next-count > 0)
    signal(next);
else
    signal(mutex);
wait(x-sem);
x-count--;
Monitor Implementation
• The operation x.signal can be implemented as:
if (x-count > 0) {
next-count++;
signal(x-sem);
wait(next);
next-count--;
}
Monitor Implementation
• Conditional-wait construct: x.wait(c);
— c – integer expression evaluated when the wait
operation is executed.
— value of c (a priority number) stored with the
name of the process that is suspended.
— when x.signal is executed, process with smallest
associated priority number is resumed next.
• Check two conditions to establish
correctness of system:
— User processes must always make their calls on
the monitor in a correct sequence.
— Must ensure that an uncooperative process does not
ignore the mutual-exclusion gateway provided by the
monitor, and try to access the shared resource directly,
without using the access protocols.
Deadlocks
• System Model
• Deadlock Characterization
• Methods for Handling Deadlocks
• Deadlock Prevention
• Deadlock Avoidance
• Deadlock Detection
• Recovery from Deadlock
• Combined Approach to Deadlock Handling
The Deadlock Problem
• A set of blocked processes each holding a resource and
waiting to acquire a resource held by another process in the
set.
• Example
—System has 2 tape drives.
—P1 and P2 each hold one tape drive and each needs another
one.
• Example
—semaphores A and B, initialized to 1
P0:  wait(A);  wait(B);
P1:  wait(B);  wait(A);
Bridge Crossing Example
(Figure: traffic deadlock – four cars each occupy one quadrant of an intersection and need another: one needs C and B, one needs B and C, one needs A and B, one needs D and A. In the actual deadlock each car halts until the quadrant it needs is free, so none can proceed.)
System Model
• Resource types R1, R2, . . ., Rm
CPU cycles, memory space, I/O devices
• Each resource type Ri has Wi instances.
• Each process utilizes a resource as
follows:
—request
—use
—release
Resource Categories
• Reusable resources – used by one process at a time and not depleted by that use. Such as:
—Processors, I/O channels, main and secondary memory, devices, and data structures such as files, databases, and semaphores
• Deadlock occurs if each process holds one resource and requests the other
Example of Reuse Deadlock
P1: … Request 80 Kbytes; … Request 60 Kbytes;
P2: … Request 70 Kbytes; … Request 80 Kbytes;
Deadlock occurs if both processes progress to their second request with only 200 Kbytes of memory available.
• Process Pi – a node in the graph
• Resource type Rj – a node in the graph
• Pi → Rj : Pi requests an instance of Rj
• Rj → Pi : Pi is holding an instance of Rj
Resource Allocation Graphs
(Figure: safe-state example – matrices of existing resources and the amount available after allocation.)
• Can any of the 4 processes run to completion with the resources available?
• P2 can complete; after P2 finishes, P1 completes, then P3 completes – a safe sequence exists.
• This time, suppose that P1 makes a request for one additional unit each of R1 and R3. Is this state safe?
Deadlock Detection
• Detection algorithm
• Recovery scheme
Single Instance of Each Resource Type
• Maintain a wait-for graph
— Nodes are processes.
— Pi → Pj if Pi is waiting for Pj.