Distributed Systems: Principles and Paradigms
Chapter 10: Distributed File Systems

01 Introduction
02 Communication
03 Processes
04 Naming
05 Synchronization
06 Consistency and Replication
07 Fault Tolerance
08 Security
09 Distributed Object-Based Systems
10 Distributed File Systems
11 Distributed Document-Based Systems
12 Distributed Coordination-Based Systems
Distributed File Systems
Sun NFS
Coda
10.1 NFS

Sun NFS
Sun Network File System: now at version 3; version 4 is coming up.
Basic model: remote file service, i.e., try to make a file system transparently available to remote clients. NFS follows the remote access model (a) instead of the upload/download model (b).
NFS Architecture
NFS is implemented using the Virtual File System (VFS) abstraction, which is now used in many different operating systems.
Essence: VFS provides a standard file system interface and hides the difference between accessing a local or a remote file system.
Question: Is NFS actually a file system?
NFS File Operations
Question: Anything unusual between v3 and v4?
Communication in NFS
Essence: All communication is based on the (best-effort) Open Network Computing RPC (ONC RPC).
Version 4 also supports compound procedures:
(a) Normal RPC
(b) Compound RPC: the first failure breaks execution of the rest of the RPC
Question: What's the use of compound RPCs?
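The semantics of a compound procedure can be made concrete with a small sketch (the status strings and the handler shapes are illustrative, not the actual ONC RPC wire protocol): the server runs the batched operations in order and aborts at the first failure, returning only the results produced so far.

```python
# Illustrative sketch of NFSv4 compound-procedure semantics: execute the
# batched operations in order, stop at the first failure, and return the
# results of the operations completed so far.
def run_compound(ops):
    """ops: list of zero-argument callables, each returning (status, result)."""
    results = []
    for op in ops:
        status, result = op()
        results.append((status, result))
        if status != "NFS_OK":     # first failure aborts the rest of the compound
            break
    return results

# Example: LOOKUP succeeds, OPEN fails, so READ is never attempted.
trace = run_compound([
    lambda: ("NFS_OK", "filehandle"),
    lambda: ("NFS4ERR_ACCESS", None),
    lambda: ("NFS_OK", "data"),
])
```

This is what makes compound RPCs useful: several logically dependent operations travel in one round trip, yet a failure early in the batch cannot corrupt later steps.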
Naming in NFS (1/2)
Essence: NFS provides support for mounting remote file systems (and even single directories) into a client's local name space.
Watch it: Different clients may have different local name spaces. This may make file sharing extremely difficult (why?).
Question: What are the solutions to this problem?
Naming in NFS (2/2)
Note: A server cannot export an imported directory; the client must mount the server-imported directory itself.
Automounting in NFS
Problem: To share files, we partly standardize local name spaces and mount shared directories. Mounting very large directories (e.g., all subdirectories in home/users) takes a lot of time (why?).
Solution: Mount on demand: automounting.
Question: What's the main drawback of having the automounter in the loop?
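The mount-on-demand idea can be sketched as follows (the class and the mount callback are hypothetical; a real automounter intercepts path lookups in the kernel's name space): a directory is mounted only the first time it is actually accessed.

```python
# Hypothetical sketch of automounting: defer the expensive mount until a
# directory is actually accessed, and mount it at most once.
class Automounter:
    def __init__(self, mount_fn):
        self.mount_fn = mount_fn      # callback performing the actual NFS mount
        self.mounted = set()          # directories already mounted

    def access(self, path):
        if path not in self.mounted:  # mount on demand, first access only
            self.mount_fn(path)
            self.mounted.add(path)
        return f"contents of {path}"

mounts = []
am = Automounter(mounts.append)
am.access("/home/users/alice")        # first access triggers the mount
am.access("/home/users/alice")        # already mounted: no new mount
```

The drawback the question hints at is visible in the structure: every access passes through the automounter, which sits in the critical path of normal file operations.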
File Sharing Semantics (1/2)
Problem: When dealing with distributed file systems, we need to take into account the ordering of concurrent read/write operations and the expected semantics (i.e., consistency).
File Sharing Semantics (2/2)
UNIX semantics:
- A read operation returns the effect of the last write operation.
- Can only be implemented for remote access models in which there is only a single copy of the file.
Transaction semantics:
- The file system supports transactions on a single file.
- The issue is how to allow concurrent access to a physically distributed file.
Session semantics:
- The effects of read and write operations are seen only by the client that has opened (a local copy of) the file.
- Changes become visible to others only when the file is closed (when several clients close concurrently, only one may actually win).
File Locking in NFS
Observation: It could have been simple, but it isn't. NFSv4 supports an explicit (stateful) locking protocol, but also an implicit share-reservation approach.
Question: What's the use of these share reservations?
Caching & Replication
Essence: Clients are on their own.
Open delegation: The server explicitly permits a client machine to handle local operations from other clients on that machine. Good for performance, but it does require that the server can take over when necessary.
Question: Would this scheme fit into v3?
Question: What kind of file access model are we dealing with?
Fault Tolerance
Important: Until v4, fault tolerance was easy due to the stateless servers. Now, problems come from the use of an unreliable RPC mechanism, but also from stateful servers that have delegated matters to clients.
RPC: Cannot detect duplicates. Solution: use a duplicate-request cache.
Locking/open delegation: Essentially, a recovered server offers clients a grace period to reclaim locks. When the period is over, the server resumes its normal local lock-manager function.
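A duplicate-request cache can be sketched in a few lines (the XID key and the handler are illustrative; a real cache also bounds its size and ages entries out): the server remembers recent request identifiers together with their replies, so a retransmitted request is answered from the cache instead of being executed twice.

```python
# Sketch of a duplicate-request cache: remember (xid -> reply) so that a
# retransmission replays the stored reply rather than re-executing the
# request. A real implementation would also evict old entries.
class DuplicateRequestCache:
    def __init__(self, handler):
        self.handler = handler      # executes a request for real
        self.cache = {}             # xid -> cached reply

    def handle(self, xid, request):
        if xid in self.cache:       # retransmission: replay the old reply
            return self.cache[xid]
        reply = self.handler(request)
        self.cache[xid] = reply
        return reply

executed = []
def handler(req):
    executed.append(req)            # side effect: happens at most once per xid
    return f"reply-to-{req}"

server = DuplicateRequestCache(handler)
r1 = server.handle(42, "remove-file")
r2 = server.handle(42, "remove-file")   # duplicate of the same request
```

Replaying the cached reply matters for non-idempotent operations such as a remove: executing it a second time would return a spurious error to the client.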
Security
Essence: Set up a secure RPC channel between client and server.
Secure NFS: Uses Diffie-Hellman key exchange to set up a secure channel. However, it uses only 192-bit keys, which have been shown to be easy to break.
RPCSEC_GSS: A standard interface that allows integration with existing security services.
10.2 Coda

Coda File System
Developed in the 1990s as a descendant of the Andrew File System (CMU).
Now shipped with Linux distributions (after 10 years!).
Emphasis: support for mobile computing, in particular disconnected operation.
Coda Architecture
Note: The core of the client machine is the Venus process. Note that most of the functionality is at user level.
Communication in Coda (1/2)
Essence: All client-server communication (and server-server communication) is handled by a reliable RPC subsystem. Coda RPC supports side effects.
Note: Side effects allow for a separate protocol to handle, e.g., multimedia streams.
Communication in Coda (2/2)
Issue: Coda servers allow clients to cache whole files. Modifications by other clients are announced through invalidation messages, so there is a need for multicast RPC:
(a) Sequential RPCs
(b) Multicast RPCs
Question: Why do multicast RPCs really help?
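The benefit of multicast over sequential RPCs can be sketched with Python's thread pool (the invalidation callback is illustrative; Coda's MultiRPC is a dedicated protocol, not a thread pool): sending invalidations one by one costs roughly the sum of the round-trip times, whereas sending them in parallel costs roughly the slowest single round trip.

```python
# Sketch of a multicast RPC: send the invalidation to all caching clients
# in parallel and wait for all replies, so latency is close to the slowest
# single round trip instead of the sum of all round trips.
import concurrent.futures

def multirpc(invalidate, clients):
    """invalidate: per-client RPC; returns replies in client order."""
    with concurrent.futures.ThreadPoolExecutor() as pool:
        return list(pool.map(invalidate, clients))

acks = multirpc(lambda c: f"ack-from-{c}", ["A", "B", "C"])
```

`pool.map` preserves the order of the input, so the server can match each acknowledgment to the client it invalidated.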
Naming in Coda
Essence: A remote mounting mechanism similar to that of NFS, except that there is a single shared name space across all clients.
File Handles in Coda
Background: Coda assumes that files may be replicated across servers. The issue becomes tracking a file in a location-transparent way:
- Files are contained in a volume (cf. a UNIX file system on disk).
- Each volume has a Replicated Volume Identifier (RVID).
- Volumes may be replicated; each physical volume has its own Volume Identifier (VID).
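The resulting two-step lookup can be sketched as a pair of tables (the identifiers and table names are illustrative; in Coda the first mapping lives in a replicated volume database): the location-independent RVID resolves to the physical VIDs of the replicas, and each VID resolves to the server holding it.

```python
# Sketch of Coda's location-transparent lookup: a file handle carries an
# RVID; resolving it goes RVID -> physical VIDs -> servers.
rvid_to_vids = {"rvid-7": ["vid-7a", "vid-7b"]}   # replicated volume database
vid_to_server = {"vid-7a": "server1", "vid-7b": "server2"}

def locate(rvid):
    """Return the servers holding physical replicas of the volume."""
    return [vid_to_server[vid] for vid in rvid_to_vids[rvid]]

servers = locate("rvid-7")
```

Because clients name files by RVID, a replica can move to another server by updating these tables, without changing any file handle.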
File Sharing Semantics in Coda
Essence: Coda assumes transactional semantics, but without the full-fledged capabilities of real transactions.
Note: Transactional issues reappear in the form of "this ordering could have taken place."
Caching in Coda
Essence: Combined with the transactional semantics, we obtain flexibility when it comes to letting clients operate on local copies.
Note: A writer can continue to work on its local copy; a reader will have to fetch a fresh copy at the next open.
Question: Would it be OK if the reader continued to use its own local copy?
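The interplay of whole-file caching and invalidation can be sketched with callback promises, the mechanism Coda uses for this (class and method names are illustrative): the server remembers which clients cache a file and breaks their callbacks on an update, so the reader refetches at its next open.

```python
# Sketch of Coda-style callback promises: the server tracks caching
# clients and "breaks" their callbacks when the file is updated; a
# client refetches only when its cached copy has been invalidated.
class CodaServer:
    def __init__(self):
        self.version = 1
        self.callbacks = set()         # clients holding a callback promise

    def fetch(self, client):
        self.callbacks.add(client)     # grant a callback promise
        return self.version

    def update(self):
        self.version += 1
        for c in self.callbacks:       # break all outstanding callbacks
            c.callback_break()
        self.callbacks.clear()

class CodaClient:
    def __init__(self, server):
        self.server = server
        self.cached, self.valid = None, False

    def open(self):
        if not self.valid:             # refetch only after a callback break
            self.cached = self.server.fetch(self)
            self.valid = True
        return self.cached

    def callback_break(self):
        self.valid = False

srv = CodaServer()
reader = CodaClient(srv)
v1 = reader.open()   # caches version 1 under a callback promise
srv.update()         # a writer installs a new version; callbacks are broken
v2 = reader.open()   # the next open fetches the fresh copy
```

Between the break and the next open, the reader may keep using its stale copy, which is exactly the flexibility the transactional semantics permit.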
Server Replication in Coda (1/2)
Essence: Coda uses ROWA (Read One, Write All) for server replication:
- Files are grouped into volumes (cf. a traditional UNIX file system).
- The collection of servers replicating the same volume forms that volume's Volume Storage Group (VSG).
- Writes are propagated to a file's VSG.
- Reads are done from one server in a file's VSG.
Problem: What to do when the VSG partitions and the partition is later healed?
Server Replication in Coda (2/2)
Solution: Detect inconsistencies using version vectors:
- CVV_i(f)[j] = k means that server S_i knows that server S_j has seen version k of file f.
- When a client reads file f from server S_i, it receives CVV_i(f).
- Updates are multicast to all reachable servers (the client's accessible VSG, or AVSG), each of which increments its own entry CVV_i(f)[i].
- When the partition is restored, comparing version vectors allows detection of conflicts and possible reconciliation.
Note: The client informs each server about the other servers in the AVSG where the update has also taken place.
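The comparison step can be made precise with a small sketch (the example vectors are illustrative): two version vectors are reconcilable when one dominates the other componentwise; if neither dominates, the replicas were updated concurrently during the partition and a conflict is declared.

```python
# Sketch of CVV comparison after a partition heals: CVV[j] = k means
# "this server knows server j has seen version k of the file". A conflict
# exists exactly when neither vector dominates the other.
def compare(cvv1, cvv2):
    ge = all(a >= b for a, b in zip(cvv1, cvv2))
    le = all(a <= b for a, b in zip(cvv1, cvv2))
    if ge and le:
        return "identical"
    if ge:
        return "first dominates"   # the other replica can be brought up to date
    if le:
        return "second dominates"
    return "conflict"              # concurrent updates during the partition

# Illustrative case: two replicas updated on different sides of a partition.
outcome = compare([2, 2, 1], [1, 1, 2])
```

Dominance means automatic reconciliation is safe (copy the newer replica over); a conflict must be resolved by other means, possibly the user.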
Fault Tolerance
Note: Coda achieves high availability through client-side caching and server replication.
Disconnected operation: When a client is no longer connected to any of the servers, it may continue with the copies of files that it has cached. This requires that the cache is properly filled in advance (hoarding):
- Compute a priority for each file.
- Bring the user's cache into equilibrium (a hoard walk):
  - There is no uncached file with a higher priority than a cached file.
  - The cache is full or no uncached file has nonzero priority.
  - Each cached file is a copy of a file maintained by the client's AVSG.
Note: Disconnected operation works best when there is hardly any write-sharing.
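The equilibrium conditions above can be sketched as a selection problem (the priority values and the flat capacity model are illustrative; Coda's real hoard walk also honors directory structure and user hoard profiles): keep the highest-priority files that fit, and never cache a zero-priority file.

```python
# Sketch of a hoard walk: fill the cache so that no uncached file has a
# higher priority than any cached file, and drop zero-priority files.
def hoard_walk(priorities, capacity):
    """priorities: file -> hoard priority; returns the set of files to cache."""
    ranked = sorted(priorities, key=priorities.get, reverse=True)
    return {f for f in ranked[:capacity] if priorities[f] > 0}

cache = hoard_walk({"mail": 3, "thesis.tex": 5, "tmp": 0}, capacity=2)
```

By construction, every cached file outranks every uncached one, which is the first equilibrium condition, and the capacity cut enforces the second.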
Security
Essence: All communication is based on a secure RPC mechanism that uses secret keys. When logging into the system, a client receives:
- A clear token CT from an authentication server (AS), containing a generated shared secret key K_S; CT has time-limited validity.
- A secret token ST = K_vice([CT]*K_vice), which is an encrypted and cryptographically sealed version of CT.