Chubby analysis (to be continued)

Key terms

coarse-grained: an application might use a lock to elect a primary, which would then handle all access to that data for a considerable time, perhaps hours or days. An application might partition its locks into groups and use Chubby’s coarse-grained locks to allocate these lock groups to application-specific lock servers

low-volume

a distributed file system

advisory locks

fine-grained (not expected): locks that might be held only for a short duration (seconds or less);

Key sentences:

a). the design emphasis is on availability and reliability, as opposed to high performance.

b). The paper describes the initial design and expected use, compares it with actual use, and explains how the design had to be modified to accommodate the difference

c). Paxos maintains safety without timing assumptions, but clocks must be introduced to ensure liveness; this overcomes the impossibility result of Fischer

d). we must allow thousands of clients to observe this file, preferably without needing many servers.

Clients and replicas of a replicated service may wish to know when the service’s primary changes. This suggests that an event notification mechanism would be useful to avoid polling.


What does this mean?

Most Chubby cells are confined to a single data centre or machine room, though we do run at least one Chubby cell whose replicas are separated by thousands of kilometres.

name service?

We have found this technique easier than making existing servers participate in a consensus protocol, and especially so if compatibility must be maintained during a transition period

 However, assuming a consensus service is not used exclusively to provide locks (which reduces it to a lock service), this approach solves none of the other problems described above

To avoid both financial loss and jail time, we provide security mechanisms, including access control

Coarse-grained locks impose far less load on the lock server. In particular, the lock-acquisition rate is usually only weakly related to the transaction rate of the client applications. Coarse-grained locks are acquired only rarely, so temporary lock server unavailability delays clients less. On the other hand, the transfer of a lock from client to client may require costly recovery procedures, so one would not wish a fail-over of a lock server to cause locks to be lost. Thus, it is good for coarse-grained locks to survive lock server failures, there is little concern about the overhead of doing so, and such locks allow many clients to be adequately served by a modest number of lock servers with somewhat lower availability.

Fine-grained locks lead to different conclusions. Even brief unavailability of the lock server may cause many clients to stall. Performance and the ability to add new servers at will are of great concern because the transaction rate at the lock service grows with the combined transaction rate of clients. It can be advantageous to reduce the overhead of locking by not maintaining locks across lock server failure, and the time penalty for dropping locks every so often is not severe because locks are held for short periods. (Clients must be prepared to lose locks during network partitions, so the loss of locks on lock server fail-over introduces no new recovery paths.)


Reasons for designing Chubby

First, our developers sometimes do not plan for high availability in the way one would wish. Often their systems start as prototypes with little load and loose availability guarantees; invariably the code has not been specially structured for use with a consensus protocol. As the service matures and gains clients, availability becomes more important; replication and primary election are then added to an existing design. While this could be done with a library that provides distributed consensus, a lock server makes it easier to maintain existing program structure and communication patterns. For example, to elect a master which then writes to an existing file server requires adding just two statements and one RPC parameter to an existing system: One would acquire a lock to become master, pass an additional integer (the lock acquisition count) with the write RPC, and add an if-statement to the file server to reject the write if the acquisition count is lower than the current value (to guard against delayed packets). We have found this technique easier than making existing servers participate in a consensus protocol, and especially so if compatibility must be maintained during a transition period. 

Second, many of our services that elect a primary or that partition data between their components need a mechanism for advertising the results. This suggests that we should allow clients to store and fetch small quantities of data—that is, to read and write small files. This could be done with a name service, but our experience has been that the lock service itself is well-suited for this task, both because this reduces the number of servers on which a client depends, and because the consistency features of the protocol are shared. Chubby's success as a name server owes much to its use of consistent client caching, rather than time-based caching. In particular, we found that developers greatly appreciated not having to choose a cache timeout such as the DNS time-to-live value, which if chosen poorly can lead to high DNS load, or long client fail-over time.

Third, a lock-based interface is more familiar to our programmers.

Last, distributed-consensus algorithms use quorums to make decisions, so they use several replicas to achieve high availability


Why use locks rather than a library or a service for consensus?


Goals:

 The primary goals included reliability, availability to a moderately large set of clients, and easy-to-understand semantics; throughput and storage capacity were considered secondary

help developers deal with coarse-grained synchronization within their systems, and in particular to deal with the problem of electing a leader from among a set of otherwise equivalent servers

Client interface:

Chubby’s client interface is similar to that of a simple file system that performs whole-file reads and writes, augmented with advisory locks and with notification of various events such as file modification
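A minimal sketch of what such a file-system-like interface could look like: whole-file reads and writes, advisory locks, and event callbacks on modification instead of polling. All class and method names here are illustrative assumptions, not the real Chubby API.

```python
# Hypothetical in-memory sketch of a Chubby-like interface: whole-file
# reads/writes, advisory locks, and modification-event callbacks.
# Names are illustrative, not the real Chubby API.
from collections import defaultdict

class ChubbyLikeCell:
    def __init__(self):
        self._files = {}                      # path -> bytes
        self._locks = {}                      # path -> holder or None
        self._watchers = defaultdict(list)    # path -> callbacks

    def read(self, path):
        """Whole-file read."""
        return self._files[path]

    def write(self, path, data):
        """Whole-file write; fires modification events on the path."""
        self._files[path] = data
        for cb in self._watchers[path]:
            cb(path)

    def try_acquire(self, path, holder):
        """Advisory lock: only cooperating clients that check it are affected."""
        if self._locks.get(path) is None:
            self._locks[path] = holder
            return True
        return False

    def release(self, path, holder):
        if self._locks.get(path) == holder:
            self._locks[path] = None

    def watch(self, path, callback):
        """Event notification instead of polling."""
        self._watchers[path].append(callback)
```

A client watching a primary-election file would register a callback with `watch` and be notified on the next `write`, rather than re-reading the file on a timer.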

Usage examples:

 the Google File System  uses a Chubby lock to appoint a GFS master server

Bigtable  uses Chubby in several ways: to elect a master, to allow the master to discover the servers it controls, and to permit clients to find the master. 

both GFS and Bigtable use Chubby as a well-known and available location to store a small amount of meta-data; 

in effect they use Chubby as the root of their distributed data structures. 

Some services use locks to partition work (at a coarse grain) between several servers.

to elect a master which then writes to an existing file server requires adding just two statements and one RPC parameter to an existing system: One would acquire a lock to become master, pass an additional integer (the lock acquisition count) with the write RPC, and add an if-statement to the file server to reject the write if the acquisition count is lower than the current value (to guard against delayed packets). That is, the client passes an integer along with the RPC call; if the carried acquisition count is lower than the current value, the server rejects the write.
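The acquisition-count guard described above (essentially a fencing token) can be sketched as follows; the class and method names are illustrative, and the consensus machinery is elided.

```python
# Sketch of the acquisition-count guard: the file server rejects any
# write carrying a count lower than the highest one it has seen, so a
# delayed packet from a deposed master cannot clobber newer data.

class FileServer:
    def __init__(self):
        self.data = {}
        self.highest_count = 0   # highest acquisition count seen so far

    def write(self, path, value, acquisition_count):
        # The single extra if-statement the paper describes.
        if acquisition_count < self.highest_count:
            return False                      # stale master: reject
        self.highest_count = acquisition_count
        self.data[path] = value
        return True
```

A write from the current master (count 2) succeeds; a delayed write from the previous master (count 1) arriving afterwards is rejected.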


Before Chubby:

1. most distributed systems at Google used ad hoc methods for primary election (when work could be duplicated without harm). With Chubby, part of that duplicated computation is avoided.

2. or required operator intervention (when correctness was essential). With Chubby, it achieved a significant improvement in availability in systems that no longer required human intervention on failure.


Anyone familiar with distributed systems knows that a primary is needed to solve the distributed consistency problem: all working protocols for asynchronous consensus we have so far encountered have Paxos at their core.


Electing the Chubby master

The replicas use a distributed consensus protocol to elect a master; the master must obtain votes from a majority of the replicas, plus promises that those replicas will not elect a different master for an interval of a few seconds known as the master lease. The master lease is periodically renewed by the replicas provided the master continues to win a majority of the vote. This differs slightly from ZooKeeper, which does not seem to have a lease concept after election.

If a master fails, the other replicas run the election protocol when their master leases expire; a new master will typically be elected in a few seconds. ZooKeeper does not seem to need to wait like this; it has heartbeats and uses TCP to maintain reliable connections between the Leader and the Followers.
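The lease rule above can be sketched as a toy model: a replica that has promised a lease to one master refuses to vote for another until the lease expires, and the sitting master may renew. This is an assumption-laden simplification (explicit timestamps, no network), not the real protocol.

```python
# Toy model of the master lease: a replica will not vote for a
# different master while an unexpired lease is outstanding, but the
# current master may renew. Timing values are illustrative.
import time

class Replica:
    def __init__(self):
        self.promised_master = None
        self.lease_expiry = 0.0

    def grant_or_renew(self, master_id, lease_secs, now=None):
        now = time.monotonic() if now is None else now
        # Vote if no live lease exists, or if the same master renews.
        if now >= self.lease_expiry or self.promised_master == master_id:
            self.promised_master = master_id
            self.lease_expiry = now + lease_secs
            return True
        return False  # a different master holds an unexpired lease

def elect(replicas, candidate, lease_secs, now):
    """The candidate becomes master only with a majority of votes."""
    votes = sum(r.grant_or_renew(candidate, lease_secs, now) for r in replicas)
    return votes > len(replicas) // 2
```

While master A keeps renewing within the lease interval, a rival B cannot collect a majority; once the lease lapses, a new election succeeds.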

Replica failure

If a replica fails and does not recover for a few hours, a simple replacement system selects a fresh machine from a free pool and starts the lock server binary on it. It then updates the DNS tables, replacing the IP address of the failed replica with that of the new one. The current master polls the DNS periodically and eventually notices the change. It then updates the list of the cell's members in the cell's database; this list is kept consistent across all the members via the normal replication protocol.
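The replacement flow can be simulated in a few lines; the function names and data shapes here are invented for illustration, and the replication of the member list is elided.

```python
# Toy simulation of replica replacement: the failed replica's DNS
# entry is swapped for a fresh machine, and the master's periodic DNS
# poll folds the change into the cell's membership list. In Chubby the
# new list is then replicated via the normal replication protocol.

def replace_failed_replica(dns, failed_ip, free_pool):
    """Replacement system: pick a fresh machine and update DNS in place."""
    fresh_ip = free_pool.pop()
    dns[:] = [fresh_ip if ip == failed_ip else ip for ip in dns]
    return fresh_ip

def master_poll_dns(cell_members, dns):
    """Master notices the DNS change and refreshes the member list."""
    cell_members[:] = list(dns)
```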

From this we can see that Chubby can pick a fresh machine from a free pool to replace a replica that has not recovered for several hours. It seems ZooKeeper cannot be configured to do this? I am not sure, though; I will check later whether it can.



How do clients find the master?

Clients find the master by sending master location requests to the replicas listed in the DNS
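A minimal sketch of that lookup, assuming a hypothetical `ask` helper that queries one replica and returns the master's address (or `None` if that replica does not know):

```python
# Sketch of master location: the client asks the replicas listed in
# DNS until one of them names the current master. `ask` is a
# hypothetical RPC stub, not a real Chubby call.

def find_master(replica_addrs, ask):
    """Return the master's address reported by the first replica that knows."""
    for addr in replica_addrs:
        master = ask(addr)
        if master is not None:
            return master
    raise RuntimeError("no replica could name a master")
```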

Files and access control

access to a file is controlled by the permissions on the file itself rather than on directories on the path leading to the file. In ZooKeeper, by contrast, directories and files are merged into a single concept.

The name space contains only files and directories, collectively called nodes

Client operations

Once a client has located the master, the client directs all requests to it either until it ceases to respond, or until it indicates that it is no longer the master. Write requests are propagated via the consensus protocol to all replicas; such requests are acknowledged when the write has reached a majority of the replicas in the cell. Read requests are satisfied by the master alone. In this respect Chubby also differs somewhat from ZooKeeper, where a client may send reads and writes to any node in the ensemble; writes are forwarded to the Leader, which runs a two-phase request, for three message delays.
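The request rules can be sketched as follows: reads are answered by the master alone, and a write succeeds only if a majority of replicas acknowledge it. This toy model (dicts as replica state, `None` for a down replica) stands in for the real consensus protocol.

```python
# Toy model of Chubby's request handling: master-only reads, and
# writes acknowledged once a majority of the cell's replicas have
# applied them. Replica state is a dict; None marks a down replica.

class Master:
    def __init__(self, replicas):
        self.replicas = replicas      # the master's copy is replicas[0]
        self.store = replicas[0]

    def read(self, key):
        return self.store.get(key)    # the master satisfies reads alone

    def write(self, key, value):
        acks = 0
        for r in self.replicas:
            if r is not None:         # reachable replica applies the write
                r[key] = value
                acks += 1
        # acknowledged only when a majority of the cell has the write
        return acks > len(self.replicas) // 2
```

With 3 of 5 replicas up, writes still commit; with only 2 of 5 up, the majority test fails and the write is not acknowledged.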

Lock operations

either one client handle may hold the lock in exclusive (writer) mode, or any number of client handles may hold the lock in shared (reader) mode. Like the mutexes known to most programmers, locks are advisory.

We rejected mandatory locks, which make locked objects inaccessible to clients not holding their locks:
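The two advisory modes can be sketched as a small reader-writer lock; "advisory" means the lock never makes the locked object inaccessible, it only coordinates clients that choose to check it. This is an illustrative model, not Chubby's implementation.

```python
# Sketch of the two advisory lock modes: one handle in exclusive
# (writer) mode, or any number of handles in shared (reader) mode.
# Nothing here blocks access to the underlying file itself.

class AdvisoryLock:
    def __init__(self):
        self.writer = None      # handle holding exclusive mode, if any
        self.readers = set()    # handles holding shared mode

    def acquire_exclusive(self, handle):
        if self.writer is None and not self.readers:
            self.writer = handle
            return True
        return False

    def acquire_shared(self, handle):
        if self.writer is None:
            self.readers.add(handle)
            return True
        return False

    def release(self, handle):
        if self.writer == handle:
            self.writer = None
        self.readers.discard(handle)
```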



Other systems designed with the same Paxos-style protocol

 OKI, B., AND LISKOV, B. Viewstamped replication: A general primary copy method to support highly-available distributed systems. In ACM PODC (1988)

LAMPSON, B. W. How to build a highly available system using consensus. In Distributed Algorithms, vol. 1151 of LNCS. Springer–Verlag, 1996, pp. 1–17.

