Distributed Process Management: Operating Systems: Internals and Design Principles, 6/E
Roadmap
Process Migration
Distributed Global States
Distributed Mutual Exclusion
Distributed Deadlock
Process Migration
Transfer of a sufficient amount of the state of a process from one computer to another, so that the process can execute on the target machine
Motivation
Process migration is desirable in distributed computing for several reasons, including:
Load sharing
Communications performance
Availability
Utilizing special capabilities
Load Sharing
Move processes from heavily loaded to lightly loaded systems
Significant improvements are possible. Care must be taken that the communications overhead does not exceed the performance gained.
Communications performance
Processes that interact intensively can be moved to the same node to reduce communications cost. It may be better to move the process to the data than vice versa
Especially when the data is larger than the size of the process
Migration issues
For process migration to work, several issues must be addressed:
Who initiates the migration?
What is involved in a migration?
What portion of the process is migrated?
What happens to outstanding messages and signals?
Process initiates
A process may initiate its own migration if the goal is to access a particular resource; such a process must be aware of the distributed system
The process image, the process control block, and any links must be moved
What is migrated?
Moving the process control block is simple. Several strategies exist for moving the address space and data, including:
Eager (all)
Precopy
Eager (dirty)
Copy-on-reference
Flushing
Eager (all)
Transfer entire address space
No trace of the process is left behind on the source. If the address space is large and the process does not need most of it, this approach may be unnecessarily expensive (transfers can take minutes)
Precopy
Process continues to execute on the source node while the address space is copied
Pages modified on the source during the precopy operation have to be copied a second time. Reduces the time that a process is frozen and cannot execute during migration
Eager (dirty)
Transfer only those pages of the address space that are in main memory and have been modified
Any additional blocks of the virtual address space are transferred on demand
Copy-on-reference
Variation of eager (dirty): pages are brought over only when they are referenced
Has the lowest initial cost of the migration strategies
Flushing
Pages are cleared from main memory by flushing dirty pages to disk. Pages are accessed as needed from disk
Relieves the source of holding any pages of the migrated process in main memory
Choosing a Strategy
If the process will not use much of its address space while on the target machine, it is better to use
Copy-on-reference
Flushing
Otherwise use
Eager (all)
Precopy
Eager (dirty)
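The contrast between the eager and on-demand strategies can be illustrated with a small sketch. The following Python fragment is illustrative only (the dictionary-of-pages model and the names used are assumptions, not from the slides): eager (all) ships every page up front, while copy-on-reference fetches a page from the source only the first time it is touched.

source_pages = {n: f"page-{n} contents" for n in range(8)}   # source address space

def migrate_eager_all(pages):
    # Eager (all): ship every page up front; no residual dependency on the source.
    return dict(pages)

class CopyOnReference:
    # Copy-on-reference: ship nothing up front; fetch a page from the
    # source the first time the migrated process touches it.
    def __init__(self, fetch):
        self.fetch = fetch        # callback standing in for a network request
        self.resident = {}        # pages already present on the target

    def read(self, n):
        if n not in self.resident:            # "page fault" on the target
            self.resident[n] = self.fetch(n)  # demand-fetch from the source
        return self.resident[n]

target = CopyOnReference(fetch=lambda n: source_pages[n])
target.read(3)                                # only page 3 crosses the "network"
print(len(migrate_eager_all(source_pages)), len(target.resident))  # 8 1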
Decision to Migrate
Decision to migrate may be made by a single entity
The OS may decide, based on a load-monitoring module; the process itself may decide, based on its resource needs
Migration by Negotiation
Charlotte is an example system: migration policy is the responsibility of a Starter utility
The Starter utility is also responsible for long-term scheduling and memory allocation
Eviction
The destination system may refuse to accept the migration of a process to itself. If a workstation is idle, a process may have been migrated to it
Once the workstation is active, it may be necessary to evict the migrated processes to provide adequate response time
Nonpreemptive process transfers involve processes that have not yet begun execution
So there is no execution state to transfer; this approach is useful in load balancing.
Roadmap
Process Migration
Distributed Global States
Distributed Mutual Exclusion
Distributed Deadlock
Example
A bank account is distributed over two branches. The total amount in the account is the sum of the balances at the two branches. At 3 PM the account balance is to be determined, and messages are sent to request the information.
Example 1
Example 2
If, at the time of balance determination, a transferred amount is in transit from branch A to branch B, the result is a false reading
Example 3
All messages in transit must be examined at the time of observation; the correct total consists of the balances at both branches plus the amount in the message in transit
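A tiny worked version of the example (the dollar amounts are illustrative, not from the slides): $100 is in transit from branch A to branch B when the 3 PM observation is made. Summing only the recorded branch balances gives a false reading; adding the amount carried by the in-transit message restores the true total.

balance_A = 500          # branch A, after sending the transfer
balance_B = 300          # branch B, before receiving it
in_transit = [100]       # messages on the channel from A to B

false_total = balance_A + balance_B                    # 800: false reading
true_total = balance_A + balance_B + sum(in_transit)   # 900: correct total
print(false_total, true_total)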
Some Terms
Channel
Exists between two processes if they exchange messages
State
Sequence of messages that have been sent and received along channels incident with the process
Some Terms
Snapshot
Records the state of a process
Global state
The combined state of all processes
Distributed Snapshot
A collection of snapshots, one for each process
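As a sketch of how a distributed snapshot can capture both process state and channel state, the following fragment follows a marker-based approach in the style of the Chandy-Lamport algorithm; the class and method names are assumptions for illustration, and the single-process setting stands in for real message passing. A process records its own state when it first sees a marker, and then records as channel state every message that arrives on an incoming channel before that channel's own marker.

class SnapshotProcess:
    def __init__(self, name, state):
        self.name = name
        self.state = state
        self.recorded_state = None
        self.channel_state = {}      # incoming channel -> messages "in transit"
        self.open_channels = set()   # channels still being recorded

    def receive_marker(self, channel, all_incoming):
        if self.recorded_state is None:
            self.recorded_state = self.state          # snapshot of local state
            self.open_channels = set(all_incoming)    # start recording other channels
            self.channel_state = {c: [] for c in all_incoming}
        self.open_channels.discard(channel)           # this channel is now closed

    def receive_message(self, channel, msg):
        self.state += msg                              # normal processing
        if self.recorded_state is not None and channel in self.open_channels:
            self.channel_state[channel].append(msg)    # message was in transit at snapshot time

# Branch B records its state, then a $100 transfer arrives before A's marker:
b = SnapshotProcess("B", state=300)
b.receive_marker(channel="ctrl", all_incoming=["ctrl", "from_A"])
b.receive_message("from_A", 100)
print(b.recorded_state, b.channel_state["from_A"])     # 300 [100]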
Roadmap
Process Migration Distributed Global States Distributed Mutual Exclusion Distributed Deadlock
Critical Section
Non-sharable resources are critical resources
The code that accesses such a resource is a critical section of the program
Mutual Exclusion
Distributed Algorithm
1. All nodes have an equal amount of information, on average
2. Each node has only a partial picture of the total system and must make decisions based on this information
3. All nodes bear equal responsibility for the final decision
Distributed Algorithm
4. All nodes expend equal effort, on average, in effecting a final decision
5. Failure of a node, in general, does not result in a total system collapse
6. There exists no system-wide common clock with which to regulate the timing of events
Timestamping
Each system on the network maintains a counter which functions as a clock. Each site has a numerical identifier. When a message is received, the receiving system sets its clock to one more than the maximum of its current value and the incoming timestamp
Receiving System
The receiving system j increments its clock to one more than the maximum of its current value and the incoming timestamp:
Cj ← 1 + max[Cj, Ti]
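A minimal sketch of this timestamping rule (class and method names are illustrative, not from the slides): each site keeps a counter C and a numerical identifier; the counter is incremented on every send, and on receipt it is set to one more than the maximum of its current value and the incoming timestamp Ti.

class LamportClock:
    def __init__(self, site_id):
        self.site_id = site_id
        self.counter = 0

    def send(self):
        # Timestamp an outgoing message with (counter, site_id);
        # the site identifier breaks ties between equal counters.
        self.counter += 1
        return (self.counter, self.site_id)

    def receive(self, incoming_timestamp):
        # Cj <- 1 + max[Cj, Ti]: the receive event is ordered after
        # everything the sender had seen when it sent the message.
        self.counter = 1 + max(self.counter, incoming_timestamp)
        return self.counter

# Site 2 receives a message stamped 5 while its own clock reads 3:
clock = LamportClock(site_id=2)
clock.counter = 3
print(clock.receive(5))   # 6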
Distributed Queue
An early approach to distributed mutual exclusion uses a distributed queue. It relies on several assumptions (the entry test is sketched after the list):
1. Each node (1..N) has one process that manages mutual exclusion
2. Messages arrive in the same order as they are sent (this assumption is relaxed in later versions)
3. Every message is successfully received
4. The network is fully connected
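A minimal sketch of the entry test used in this algorithm (helper names are illustrative, not from the slides): each node keeps the most recent message received from every other node, and may enter its critical section only when its own Request message is ordered, by (timestamp, node id), before every one of those messages.

from dataclasses import dataclass

@dataclass
class Message:
    kind: str        # "request", "reply", or "release"
    timestamp: int   # clock value when the message was sent
    sender: int      # numerical site identifier

def can_enter_critical_section(my_id, q):
    # q[j] holds the most recent message received from node j;
    # q[my_id] holds this node's own outstanding Request.
    my_req = q[my_id]
    if my_req.kind != "request":
        return False
    for node, msg in q.items():
        if node == my_id:
            continue
        # (timestamp, sender) pairs give a total ordering of events.
        if (msg.timestamp, msg.sender) <= (my_req.timestamp, my_req.sender):
            return False
    return True

# Node 1 requested at time 4; node 2's latest message is a reply sent at
# time 6, so node 1's request is the oldest and it may enter.
q = {1: Message("request", 4, 1), 2: Message("reply", 6, 2)}
print(can_enter_critical_section(1, q))  # True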
Token-Passing Approach
Pass a token among the participating processes. The token is an entity that at any time is held by one process
The process holding the token may enter its critical section without asking permission. When a process leaves its critical section, it passes the token to another process
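A deliberately simplified sketch of the idea, assuming the token simply circulates around a logical ring of processes; the token-based algorithm discussed in the full chapter also passes a request queue along with the token, which is omitted here. Names and the ring policy are illustrative only.

NUM_PROCESSES = 4
wants_cs = {0: False, 1: True, 2: False, 3: True}   # who currently wants the critical section

def run_ring(rounds=1):
    holder = 0   # the single process that holds the token at any time
    for _ in range(rounds * NUM_PROCESSES):
        if wants_cs[holder]:
            # Holding the token is the permission to enter the critical section.
            print(f"process {holder} enters its critical section")
            wants_cs[holder] = False
        # On leaving the critical section (or not needing it), the holder
        # passes the token to the next process on the ring.
        holder = (holder + 1) % NUM_PROCESSES

run_ring()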
Roadmap
Process Migration
Distributed Global States
Distributed Mutual Exclusion
Distributed Deadlock
Deadlock
The permanent blocking of a set of processes that either compete for system resources or communicate with one another.
The definition is as valid for distributed systems as for a single system. The problem is more complex because no node has knowledge of the state of the overall system
Types of Deadlock
Those related to resource allocation
Multiple processes attempt to access the same resource
Those related to communication of messages
Each process in a set is waiting for a message from another process in the set, and no process ever sends a message.
In a distributed system there is no control process with an up-to-date knowledge of the global state of the system.
Phantom Deadlock
Because state information takes time to propagate between sites, messages may arrive out of order and a detection algorithm may report a deadlock that does not actually exist (a phantom deadlock)
Deadlock Prevention
The circular-wait condition can be prevented by defining a linear ordering of resource types. The hold-and-wait condition can be prevented by requiring that a process request all of its required resources at one time, blocking the process until all requests can be granted simultaneously
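The circular-wait rule can be sketched in a few lines: if every process acquires resources in ascending order of a fixed global rank, no cycle of waits can form. The resource names, ranks, and helper functions below are illustrative assumptions, using local locks to stand in for distributed resources.

import threading

# Hypothetical resources with a fixed, globally agreed ordering (rank).
locks = {"disk": (1, threading.Lock()),
         "printer": (2, threading.Lock()),
         "tape": (3, threading.Lock())}

def acquire_in_order(names):
    # Acquire the named locks in ascending rank, regardless of the order
    # in which the caller listed them, so circular wait cannot arise.
    ordered = sorted(names, key=lambda n: locks[n][0])
    for name in ordered:
        locks[name][1].acquire()
    return ordered

def release(ordered):
    for name in reversed(ordered):
        locks[name][1].release()

held = acquire_in_order(["printer", "disk"])   # always takes "disk" first
# ... use the resources ...
release(held)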
Deadlock Avoidance
Distributed deadlock avoidance is impractical
Every node must keep track of the global state of the system
The process of checking for a safe global state must be mutually exclusive
Checking for safe states involves considerable processing overhead
Deadlock Detection
Processes are allowed to obtain free resources as they wish, and the existence of a deadlock is determined after the fact
Each site knows only about its own resources, but a deadlock may involve distributed resources.
Deadlock Detection
Centralized control
one site is responsible for deadlock detection
Hierarchical control
sites organized in a tree structure
Distributed control
all processes cooperate in the deadlock detection function
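As an illustration of why no single site can detect a distributed deadlock from its local information alone, the following sketch assumes a centralized-control variant (process and site names are illustrative): each site reports its local wait-for edges, the coordinator forms their union, and a cycle in the combined wait-for graph indicates deadlock. Neither local graph has a cycle, but the combined graph does.

def has_cycle(edges):
    # Detect a cycle in a directed wait-for graph via depth-first search.
    visiting, done = set(), set()
    def dfs(p):
        if p in visiting:
            return True          # back edge: p transitively waits on itself
        if p in done:
            return False
        visiting.add(p)
        for q in edges.get(p, []):
            if dfs(q):
                return True
        visiting.remove(p)
        done.add(p)
        return False
    return any(dfs(p) for p in list(edges))

# Each site sees only part of the picture.
site_A = {"P1": ["P2"]}                       # P1 waits for a resource held by P2
site_B = {"P2": ["P1"]}                       # P2 waits for a resource held by P1
combined = {"P1": ["P2"], "P2": ["P1"]}       # union reported to the coordinator
print(has_cycle(site_A), has_cycle(site_B), has_cycle(combined))  # False False True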
Communication Deadlock
For each node, the queue to the adjacent node in one direction is full of packets destined for the node beyond; since no node can accept another packet, none can be forwarded and the nodes are deadlocked