UNIT-4 Distributed Scheduling
The user feels that all processes run simultaneously; in reality, the operating system allocates the processor to one process at a time.
Distributed Scheduling
Location policy
Threshold method
The policy selects a random node and checks whether that node is able
to receive the process, then transfers the process. If the node
rejects it, another node is selected at random. This continues until
the probe limit is reached.
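A minimal Python sketch of this probing loop, where PROBE_LIMIT and the can_accept call are assumptions standing in for the system's configuration and messaging:

import random

PROBE_LIMIT = 5   # assumed probe limit; the real value is system dependent

def threshold_location_policy(nodes, can_accept):
    # Probe randomly chosen nodes until one accepts or the limit is
    # reached. can_accept(node) is an assumed call asking a node whether
    # its load is still below its threshold.
    candidates = list(nodes)
    for _ in range(min(PROBE_LIMIT, len(candidates))):
        node = random.choice(candidates)
        candidates.remove(node)
        if can_accept(node):
            return node        # transfer the process to this node
    return None                # probe limit reached; execute locally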
Shortest method
Distinct nodes are chosen at random and each is polled to determine
its load. The process is transferred to the node with the minimum
load value, unless that node's workload prohibits it from accepting
the process.
A simple improvement is to stop probing as soon as a node with zero
load is encountered.
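A minimal Python sketch, with get_load standing in for the polling message and the threshold parameter for the workload check (both assumptions):

import random

def shortest_location_policy(nodes, num_probes, get_load, threshold):
    # Poll num_probes distinct random nodes and pick the least loaded
    # one, unless even that node's load prohibits accepting the process.
    best, best_load = None, float("inf")
    for node in random.sample(list(nodes), min(num_probes, len(nodes))):
        load = get_load(node)
        if load == 0:          # the simple improvement: idle node found,
            return node        # stop probing immediately
        if load < best_load:
            best, best_load = node, load
    return best if best_load < threshold else None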
Location policy
Bidding method
Nodes contain managers (which send processes) and contractors (which
receive processes).
Managers broadcast a request for bids; contractors respond with bids
(prices based on the capacity of the contractor node) and the manager
selects the best offer.
The winning contractor is notified and asked whether it accepts the
process for execution or not.
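A manager-side sketch in Python, with get_bid and still_accepts as assumed stand-ins for the request-for-bid and confirmation messages:

def bidding_location_policy(contractors, get_bid, still_accepts):
    # Collect bids (prices based on contractor capacity), pick the best
    # offer, and ask the winner to confirm before transferring.
    bids = sorted((get_bid(c), i, c) for i, c in enumerate(contractors))
    for price, _, contractor in bids:
        if still_accepts(contractor):   # winner may still refuse
            return contractor
    return None                         # no contractor accepted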
Location policy
Pairing
Contrary to the former methods, the pairing policy reduces the
variance of load only between pairs of nodes.
Each node asks some randomly chosen node to form a pair with it.
If it receives a rejection, it randomly selects another node and tries
to pair again.
Two nodes that differ greatly in load are temporarily paired with
each other and migration starts.
The pair is broken as soon as the migration is over.
A node only tries to find a partner if it has at least two processes.
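A minimal Python sketch of the pairing attempt, with num_processes and request_pair assumed for illustration:

import random

def try_to_pair(node, others, num_processes, request_pair):
    # A node only looks for a partner if it has at least two processes.
    # request_pair(a, b) is an assumed call that the other node may reject.
    if num_processes(node) < 2:
        return None
    candidates = list(others)
    random.shuffle(candidates)
    for other in candidates:
        if request_pair(node, other):
            return (node, other)   # pair formed; migration runs, and the
                                   # pair is broken when migration is over
    return None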
Load-sharing approach
Drawbacks of Load-balancing approach
Attempting to equalize the workload on all the nodes is not an
appropriate objective, since gathering the exact state information it
requires generates a large overhead.
Exact load balancing is not achievable anyway: the number of
processes in a node fluctuates constantly, so temporary imbalance
among the nodes exists at every moment.
Basic ideas for Load-sharing approach
It is necessary and sufficient to prevent nodes from being idle
while some other nodes have more than two processes.
Load sharing is much simpler than load balancing, since it only
attempts to ensure that no node is idle while a heavily loaded node
exists.
The priority assignment policy and the migration limiting policy are
the same as those for the load-balancing algorithms.
Location policy
Location policy decides whether the sender node or the receiver node
of the process takes the initiative to search for a suitable node in
the system. This policy can be one of the following:
Sender-initiated location policy
Sender node decides where to send the process
Heavily loaded nodes search for lightly loaded nodes
Receiver-initiated location policy
Receiver node decides from where to get the process
Lightly loaded nodes search for heavily loaded nodes
Sender-initiated location policy
When a node becomes overloaded, it either broadcasts a query or
randomly probes the other nodes one by one to find a node that is
able to receive remote processes.
When broadcasting, a suitable node is known as soon as a reply
arrives.
Classification/Types of Load Distribution Algorithms
Receiver initiated
Load sharing process initiated by a lightly loaded node
Transfer Policy: Threshold based.
Selection Policy: Can be anything
Location Policy: Receiver selects up to N nodes and polls them,
transferring a task from the first sender found. If none are found,
it waits for a predetermined time, checks its load and tries again.
Information Policy: demand driven
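To make the polling loop concrete, a minimal Python sketch, with the node-to-node calls (is_sender, transfer_from, still_underloaded) assumed for illustration:

import random
import time

def receiver_initiated_poll(nodes, num_polls, is_sender, transfer_from,
                            still_underloaded, retry_delay=1.0):
    # Poll up to num_polls randomly chosen nodes and take a task from
    # the first sender found; otherwise wait, re-check own load and retry.
    while True:
        for node in random.sample(list(nodes), min(num_polls, len(nodes))):
            if is_sender(node):
                return transfer_from(node)
        time.sleep(retry_delay)        # predetermined waiting time
        if not still_underloaded():    # load rose in the meantime: stop
            return None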
Symmetric Algorithms
Simple idea: combine the previous two. One works well at high loads,
the other at low loads.
Above Average Algorithm: Keep load within a range
Transfer Policy: maintain two thresholds equidistant from the
average; nodes with load above the upper threshold are senders,
nodes with load below the lower threshold are receivers.
Location Policy: Sender initiated:
The sender broadcasts a too-high message and sets a too-high
alarm.
A receiver getting the too-high message replies with an accept,
cancels its too-low alarm, starts an awaiting-task alarm,
and increments its load value.
A sender that gets an accept message transfers a task as
appropriate. If it gets a too-low message instead, it responds with
a too-high message to the node that sent it.
If no accept has been received within the timeout, the sender
broadcasts a change-average message.
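As a small illustration of the transfer policy above, a Python sketch that classifies a node against two thresholds equidistant from the average (AVERAGE_LOAD and DELTA are assumed, illustrative values):

AVERAGE_LOAD = 10   # assumed current estimate of the system-wide average
DELTA = 2           # half-width of the acceptable range around the average

def classify(load):
    # Transfer policy: two thresholds equidistant from the average.
    if load > AVERAGE_LOAD + DELTA:
        return "sender"      # broadcasts a too-high message
    if load < AVERAGE_LOAD - DELTA:
        return "receiver"    # answers too-high messages with accept
    return "ok"              # within range: neither sends nor receives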
Adaptive Algorithms
Stable Symmetric Algorithm.
Uses information gathered during polling to change behaviour.
Starts by assuming that every node is a receiver.
Transfer Policy: range based, with upper and lower thresholds.
Location Policy: the sender-initiated component polls the node at
the head of the receiver list; depending on the answer, either a
task is transferred or the node is moved to the OK or sender list.
The same bookkeeping happens at the receiving end. The
receiver-initiated component polls the senders list in order, then
the OK list and the receivers list in reverse order. Nodes are
moved in and out of the lists at both sender and receiver.
Selection Policy: any. Information Policy: demand driven.
At high loads, the receiver lists become empty, preventing future
polling and deactivating the sender component. At low loads,
receiver-initiated polling is deactivated, but not before the
receiver lists have been updated.
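A minimal Python sketch of the list bookkeeping described above, assuming nodes are identified by opaque ids and that poll answers arrive as one of "sender", "ok" or "receiver" (all names are illustrative):

class StableSymmetricNode:
    # Every other node starts on the receivers list; poll answers move
    # nodes between the senders / OK / receivers lists.

    def __init__(self, peers):
        self.receivers = list(peers)   # initially everyone is a receiver
        self.ok = []
        self.senders = []

    def record_state(self, node, state):
        # Remove the node from whichever list it is on, then refile it
        # under the state it reported.
        for lst in (self.receivers, self.ok, self.senders):
            if node in lst:
                lst.remove(node)
        {"receiver": self.receivers,
         "ok": self.ok,
         "sender": self.senders}[state].append(node)

    def sender_poll_target(self):
        # Sender-initiated component polls the head of the receiver list.
        return self.receivers[0] if self.receivers else None

    def receiver_poll_order(self):
        # Receiver-initiated component: senders list in order, then the
        # OK list and the receivers list in reverse order.
        return self.senders + self.ok[::-1] + self.receivers[::-1]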
Task Migration
Task migration refers to the transfer of a task that has already
begun execution to a new location, where its execution continues. To
migrate a partially executed task, the task's state must be made
available at the new location. The steps involved in task migration
are:
Task transfer: the transfer of the task's state to the new machine.
Unfreeze: the task is installed at the new machine and is put in the
ready queue, so that it can continue executing.
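As a rough sketch of the sequence, assuming hypothetical freeze, transfer_state and unfreeze helpers standing in for the kernel mechanisms (a freeze step before the transfer is implied by the unfreeze step above):

def migrate_task(task, source, destination,
                 freeze, transfer_state, unfreeze):
    # All three callables are assumptions for illustration only.
    freeze(task, source)                        # suspend, snapshot state
    state = transfer_state(task, source, destination)  # task transfer
    unfreeze(task, state, destination)          # install, put on ready queue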
Traffic Shaping
Another method of congestion control is to shape the
traffic before it enters the network.
Traffic shaping controls the rate at which packets are sent
(not just how many). Used in ATM and Integrated Services
networks.
At connection set-up time, the sender and carrier negotiate
a traffic pattern (shape).
Two traffic shaping algorithms are:
Leaky Bucket
Token Bucket
[Figure: (a) A leaky bucket with water. (b) A leaky bucket with packets.]
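A minimal Python sketch of the leaky bucket idea, assuming discrete time ticks and illustrative capacity and rate parameters: packets queue in a finite bucket and leave at a constant rate regardless of how bursty the arrivals are.

from collections import deque

class LeakyBucket:
    def __init__(self, capacity, rate_per_tick):
        self.queue = deque()
        self.capacity = capacity       # bucket size (packets)
        self.rate = rate_per_tick      # packets released each tick

    def arrive(self, packet):
        if len(self.queue) < self.capacity:
            self.queue.append(packet)  # room in the bucket
            return True
        return False                   # bucket full: packet is discarded

    def tick(self):
        # Output at a constant rate, independent of bursty arrivals.
        out = []
        for _ in range(min(self.rate, len(self.queue))):
            out.append(self.queue.popleft())
        return out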
[Figure: The token bucket algorithm. (a) Before. (b) After.]
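A matching Python sketch of the token bucket, again with illustrative parameters: tokens accumulate at a fixed rate up to the bucket size, and sending a packet consumes a token, so bounded bursts are permitted.

class TokenBucket:
    def __init__(self, bucket_size, tokens_per_tick):
        self.tokens = bucket_size      # start with a full bucket ("before")
        self.size = bucket_size
        self.rate = tokens_per_tick

    def tick(self):
        # Tokens accumulate at a fixed rate, capped at the bucket size.
        self.tokens = min(self.size, self.tokens + self.rate)

    def try_send(self, packets):
        # Send as many queued packets as there are tokens ("after").
        sent = min(packets, self.tokens)
        self.tokens -= sent
        return sent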
DEADLOCK
A process requests resources; if the resources are not available at that
time, the process enters a wait state. It may happen that waiting
processes will never again change state, because the resources they
have requested are held by other waiting processes. This situation is
called deadlock.
DEADLOCK CHARACTERIZATION
A deadlock situation can arise if the following four conditions
hold simultaneously in a system:
1. Mutual exclusion: At least one resource must be held in a nonsharable mode; that is, only one process at a time can use the resource.
If another process requests that resource, the requesting process must
be delayed until the resource has been released.
2. Hold and wait: There must exist a process that is holding at least
one resource and is waiting to acquire additional resources that are
currently being held by other processes.
3. No preemption: Resources cannot be preempted; that is, a
resource can be released only voluntarily by the process holding it,
after that process has completed its task.
4. Circular wait: There must exist a set {P0, P1, ..., Pn} of waiting
processes such that P0 is waiting for a resource that is held by P1,
P1 is waiting for a resource that is held by P2, ..., Pn-1 is waiting
for a resource that is held by Pn, and Pn is waiting for a resource
that is held by P0.
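The circular-wait condition can be checked mechanically as a cycle search in a wait-for graph. A minimal Python sketch, assuming the graph is given as a dict mapping each process to the processes it waits on (the process names are illustrative):

def has_circular_wait(waits_for):
    # Depth-first search; a back edge to a node still on the current
    # path (GREY) means a cycle, i.e. a circular wait, exists.
    WHITE, GREY, BLACK = 0, 1, 2
    color = {p: WHITE for p in waits_for}

    def visit(p):
        color[p] = GREY
        for q in waits_for.get(p, ()):
            if color.get(q, WHITE) == GREY:      # back edge: cycle found
                return True
            if color.get(q, WHITE) == WHITE and visit(q):
                return True
        color[p] = BLACK
        return False

    return any(color[p] == WHITE and visit(p) for p in waits_for)

# Example: P0 -> P1 -> P2 -> P0 is a circular wait.
print(has_circular_wait({"P0": ["P1"], "P1": ["P2"], "P2": ["P0"]}))  # True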