An analysis of how Chubby uses Paxos

Chubby uses Paxos as the replication algorithm for its fault-tolerant log. At the bottom of the protocol stack, Paxos ensures that every replica's local copy of the log contains the same entries; the replicas communicate with each other through a Paxos-specific protocol. Once a value has entered the fault-tolerant log, each replica invokes a callback to the client application to deliver the committed value.

Chubby's summary of Paxos (which hides the propose and promise messages; a rough sketch of the flow follows the list):

1. Elect a coordinator.

2. The coordinator selects a value and broadcasts it to all replicas in an accept message; each replica either acknowledges the message or rejects it.

3. Once a majority of the replicas acknowledge, the coordinator broadcasts a commit message to notify the replicas.
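
To make the three steps concrete, here is a minimal single-process sketch (Python) of the coordinator's side of one Paxos instance. The Replica class and its random rejection are made up for illustration; the real replicas are separate processes talking over a Paxos-specific protocol.

import random

class Replica:
    def __init__(self, name):
        self.name = name
        self.log = []                      # local copy of the replicated log

    def on_accept(self, value):
        # A real replica may reject (for example, it already promised a higher
        # sequence number); here rejection is random just to exercise both branches.
        return random.random() > 0.1       # True = acknowledge, False = reject

    def on_commit(self, value):
        self.log.append(value)             # the value is now in the fault-tolerant log

def run_instance(value, replicas):
    # Step 2: broadcast the accept message and collect acknowledgments.
    acks = sum(1 for r in replicas if r.on_accept(value))
    # Step 3: commit only once a majority has acknowledged.
    if acks > len(replicas) // 2:
        for r in replicas:
            r.on_commit(value)
        return True                        # the value is chosen
    return False                           # retry, possibly under a new coordinator

replicas = [Replica("replica-%d" % i) for i in range(5)]
print("chosen:", run_instance("set x=1", replicas))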

How is the coordinator chosen?

For coordinator election in Chubby, Paxos guarantees agreement on a single value, even as coordinators come and go, by introducing two mechanisms:

1) Assigning an ordering to successive coordinators: a replica becomes coordinator by generating a sequence number higher than any it has seen. For example, in a system of n replicas, replica r picks sequence numbers s with s mod n = Ir (its own id), then broadcasts s to all replicas in a propose message and waits for the replicas to reply with promises.

2) Restricting the value each coordinator may choose: coordinator election runs full (two-phase) Paxos, and if the new coordinator learns of an already decided value after the first phase, it must use that value rather than one of its own (sketched below).
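
A small sketch of the second rule: after phase one, the coordinator must adopt the value of the highest-numbered proposal reported in the promises, and may only use its own value when no promise carries one. The Promise record here is just an assumed shape for a promise reply.

from collections import namedtuple

# Hypothetical promise record: the highest-numbered proposal this replica has
# already accepted (accepted_seq is None if it has accepted nothing) and its value.
Promise = namedtuple("Promise", ["accepted_seq", "accepted_value"])

def choose_value(promises, own_value):
    # Rule 2: the new coordinator may only pick its own value if no replica
    # reports a previously accepted one; otherwise it must re-propose the value
    # of the highest-numbered accepted proposal it hears about.
    accepted = [p for p in promises if p.accepted_seq is not None]
    if not accepted:
        return own_value
    return max(accepted, key=lambda p: p.accepted_seq).accepted_value

# One replica already accepted "x=1" under sequence number 7, so the new
# coordinator must propose "x=1" rather than its own "x=2".
promises = [Promise(None, None), Promise(7, "x=1"), Promise(3, "x=0")]
print(choose_value(promises, "x=2"))   # -> x=1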

A difference from ZooKeeper?

Chubby can keep rotating coordinators. The paper puts it this way: if consensus was achieved by a previous coordinator, the new coordinator is guaranteed to hear about the value decided upon from at least one replica. By induction, that value will have the highest sequence number of all responses received, and so will be selected by the new coordinator. So coordinators can take turns, which is a difference from ZooKeeper: once a ZK leader is established it does not change unless it fails, whereas Chubby can keep rotating the coordinator.

How does rotating coordinators avoid livelock? If for some period two replicas keep contending to become coordinator, how is that resolved?

Chubby's fix for livelock: In our implementation the master periodically boosts its sequence number by running a full round of the Paxos algorithm, including sending propose messages. Boosting with the right frequency avoids this type of master churn in most cases. In other words, the replica that wants to remain master periodically, at the right frequency, boosts its sequence number, which avoids this kind of livelock in most cases.
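
As I read it, the boosting is just a periodic full round run by the sitting master so that its sequence number stays ahead of any challenger. A rough sketch follows; the interval and the placeholder round are my assumptions, not values from the paper.

import time

class Master:
    # Sketch of the livelock fix: the sitting master periodically "boosts" its
    # sequence number by running a full Paxos round, propose messages included.

    BOOST_INTERVAL_SECS = 30.0             # "the right frequency" is a tuning knob

    def __init__(self, replica_id, num_replicas):
        self.replica_id = replica_id
        self.num_replicas = num_replicas
        self.seq = replica_id              # this replica owns numbers with seq mod n == replica_id
        self.highest_seen = 0              # highest sequence number observed from anyone

    def next_seq(self):
        # Smallest owned sequence number above everything seen so far.
        while self.seq <= self.highest_seen:
            self.seq += self.num_replicas
        self.highest_seen = self.seq
        return self.seq

    def boost_loop(self, rounds=3):
        for _ in range(rounds):
            seq = self.next_seq()
            # Placeholder for a real full round: propose -> promise -> accept -> commit.
            print("full Paxos round with boosted sequence number", seq)
            time.sleep(0)                  # in reality: time.sleep(self.BOOST_INTERVAL_SECS)

Master(replica_id=1, num_replicas=5).boost_loop()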

One thing about Paxos that is still a little unclear to me is what happens under concurrency. Is there a worked example anywhere?

While reading the Chubby paper I kept running into the sequencer, but I don't feel I understand it very well. Paxos itself is asynchronous; as long as the sequence numbers are respected, concurrent processing should not be a problem.

How are lagging replicas handled?

A lagging replica is made to catch up with the leading replicas.
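
A minimal sketch of the catch-up idea, assuming the lagging replica knows how many log entries it has applied and can ask a leading replica for everything after that point (the class and method names are invented):

class ReplicaLog:
    # Toy replicated log used to illustrate catch-up; not Chubby's actual API.
    def __init__(self, entries=None):
        self.entries = list(entries or [])

    def records_after(self, index):
        # Leading replica: hand back everything the lagging replica is missing.
        return self.entries[index:]

    def catch_up(self, leader):
        # Lagging replica: ask for and append the remaining log records.
        missing = leader.records_after(len(self.entries))
        self.entries.extend(missing)

leader = ReplicaLog(["op1", "op2", "op3", "op4"])
laggard = ReplicaLog(["op1"])
laggard.catch_up(leader)
print(laggard.entries)   # ['op1', 'op2', 'op3', 'op4']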

What optimizations does Chubby apply?

In a system where replicas are in close network proximity, disk flush time can dominate the overall latency of the implementation.

When the replicas are all in close network proximity, flushing to disk dominates the cost.

An optimization from Lamport's Paxos Made Simple is used:

Propose messages may be omitted if the coordinator identity does not change between instances. This does not interfere with the properties of Paxos because any replica at any time can still try to become coordinator by broadcasting a propose message with a higher sequence number.

To take advantage of this optimization, the coordinator is rotated as little as possible: pick a coordinator for long periods of time, trying not to let the coordinator change. The paper also notes that, exploiting this, each replica only needs one disk write per Paxos instance: the master writes to disk just before sending its accept message, and the other replicas write to disk just before sending their acknowledge messages.
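
A sketch of the one-disk-write-per-instance pattern, assuming a simple append-and-fsync log; the file layout and message shapes are mine, only the ordering (write before accept, write before acknowledge) comes from the paper.

import json, os, tempfile

log_dir = tempfile.mkdtemp()

def flush_to_disk(path, record):
    # Append a log record and force it to stable storage before replying.
    with open(path, "a") as f:
        f.write(json.dumps(record) + "\n")
        f.flush()
        os.fsync(f.fileno())

def master_send_accept(instance, value):
    # Master: its one disk write happens just before broadcasting the accept message.
    flush_to_disk(os.path.join(log_dir, "master.log"),
                  {"instance": instance, "value": value})
    return {"type": "accept", "instance": instance, "value": value}

def replica_send_ack(name, accept_msg):
    # Replica: its one disk write happens just before sending its acknowledgment.
    flush_to_disk(os.path.join(log_dir, name + ".log"), accept_msg)
    return {"type": "ack", "instance": accept_msg["instance"], "from": name}

msg = master_send_accept(instance=42, value="set x=1")
print(replica_send_ack("replica-2", msg))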

Another optimization is for throughput: In order to get additional throughput in a concurrent system, it is possible to batch a collection of values submitted by different application threads into a single Paxos instance.
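
A sketch of the batching idea: values submitted concurrently by different application threads are drained into one batch, which a single Paxos instance then proposes as its combined value. The queue-draining policy here is an assumption, not Chubby's code.

import queue, threading

submissions = queue.Queue()

def submit(value):
    # Called by application threads; each value will ride in some batch.
    submissions.put(value)

def next_batch(max_size=16):
    # Drain whatever has been submitted so far into one Paxos instance's value.
    batch = [submissions.get()]                # block until at least one value exists
    while len(batch) < max_size:
        try:
            batch.append(submissions.get_nowait())
        except queue.Empty:
            break
    return batch                               # proposed as a single combined value

threads = [threading.Thread(target=submit, args=("op-%d" % i,)) for i in range(5)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print("one Paxos instance carries:", next_batch())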

How is disk corruption handled?

The cause: A disk may be corrupted due to a media failure or due to an operator error (an operator may accidentally erase critical data). When a replica’s disk is corrupted and it loses its persistent state, it may renege on promises it has made to other replicas in the past.

Because the replica can no longer honor its earlier promises, this violates a key invariant of the Paxos algorithm, P2b: If a proposal with value v is chosen, then every higher-numbered proposal issued by any proposer has value v. That is, once a proposal is chosen, every later proposal (with a higher number) issued by any proposer must carry that same value v.

How the disk corruption cases are handled:

Either a file's contents may change or a file may become inaccessible. The first case is handled by storing a checksum of each file's contents in the file itself. The second is detected by having a replica leave a marker in GFS after its first start: if it later comes up with an empty disk but finds that marker in GFS, it knows the disk has been corrupted.
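
A sketch of the two detection mechanisms, with the checksum stored alongside each file's contents and a local marker file standing in for the marker Chubby keeps in GFS (the file layout is invented):

import hashlib, json, os, tempfile

state_dir = tempfile.mkdtemp()
marker_path = os.path.join(state_dir, "gfs_marker")   # stands in for the marker kept in GFS

def write_state_file(path, data):
    # Store a checksum alongside the contents so silent changes can be detected.
    blob = {"data": data, "checksum": hashlib.sha256(data.encode()).hexdigest()}
    with open(path, "w") as f:
        json.dump(blob, f)

def content_corrupted(path):
    with open(path) as f:
        blob = json.load(f)
    return hashlib.sha256(blob["data"].encode()).hexdigest() != blob["checksum"]

def disk_corrupted_on_startup(have_local_state):
    # Empty disk plus a marker in GFS means the persistent state has been lost.
    return (not have_local_state) and os.path.exists(marker_path)

# First start: write some state and leave the marker behind.
write_state_file(os.path.join(state_dir, "log.0"), "instance 1: set x=1")
open(marker_path, "w").close()

print(content_corrupted(os.path.join(state_dir, "log.0")))   # False: contents intact
print(disk_corrupted_on_startup(have_local_state=False))     # True: the disk was wiped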

Rebuilding the replica's state: It participates in Paxos as a non-voting member; meaning that it uses the catch-up mechanism to catch up but does not respond with promise or acknowledgment messages. It remains in this state until it observes one complete instance of Paxos that was started after the replica started rebuilding its state. By waiting for the extra instance of Paxos, we ensure that this replica could not have reneged on an earlier promise.
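
A sketch of the rebuild rule as I read it, approximating "an instance started after the rebuild began" with instance numbers: while rebuilding, the replica only catches up and stays silent, and it resumes voting only after observing one complete instance that started after the rebuild began.

class RebuildingReplica:
    # Toy model of a replica recovering from a corrupted disk.
    def __init__(self, rebuild_started_at_instance):
        self.rebuild_started_at = rebuild_started_at_instance
        self.voting = False

    def on_promise_or_accept_request(self):
        # Non-voting member: it catches up via the catch-up mechanism but does
        # not answer with promise or acknowledgment messages.
        return "vote" if self.voting else None

    def on_instance_completed(self, started_at_instance):
        # Only a complete instance that started after the rebuild began counts.
        if started_at_instance > self.rebuild_started_at:
            self.voting = True

r = RebuildingReplica(rebuild_started_at_instance=100)
print(r.on_promise_or_accept_request())            # None: still non-voting
r.on_instance_completed(started_at_instance=99)    # started before the rebuild: ignored
r.on_instance_completed(started_at_instance=101)   # started after it: safe to vote again
print(r.on_promise_or_accept_request())            # "vote"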

The paper also mentions that this mechanism can be used to reduce the system's latency, by allowing a replica to accept without flushing to disk right away. But they say this had not been implemented yet; I don't know whether it has been by now.

The master lease

as long as the master has the lease, it is guaranteed that other replicas cannot successfully submit values to Paxos. Thus a master with the lease has up-to-date information in its local data structure which can be used to serve a read operation purely locally. By making the master attempt to renew its lease before it expires we can ensure that a master has a lease most of the time. With our system, masters successfully maintain leases for several days at a time.

This means that as long as the master holds the lease, the other replicas cannot successfully submit values to Paxos. The coordinator and the master are really the same concept here: there is only one master at a time, and in their system masters successfully held the lease for several days at a stretch. With the master lease, read operations can always be served from the master's local, up-to-date data. But could a read still return a stale value? I think it could: for instance, if I submit a value and read it back immediately, I might not see it yet, which is essentially the same as in ZooKeeper; the difference remains that in Chubby a replica can demand to become master. Chubby also mentions later that, building on this lease mechanism, replicas could be given leases as well so that clients read directly from a local replica, but that had not been implemented.
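
A sketch of the lease check, assuming the master tracks a lease expiry time and renews it before it runs out; only while the lease is held does it answer reads from its local data structure.

import time

class MasterLease:
    LEASE_SECS = 10.0
    RENEW_MARGIN_SECS = 2.0        # renew before expiry so the lease rarely lapses

    def __init__(self):
        self.expires_at = 0.0
        self.local_data = {}       # up to date as long as we hold the lease

    def holds_lease(self):
        return time.time() < self.expires_at

    def maybe_renew(self):
        # In the real system renewal itself goes through Paxos; elided here.
        if time.time() > self.expires_at - self.RENEW_MARGIN_SECS:
            self.expires_at = time.time() + self.LEASE_SECS

    def read(self, key):
        self.maybe_renew()
        if self.holds_lease():
            return self.local_data.get(key)   # served purely locally
        raise RuntimeError("lease lost: cannot serve the read locally")

m = MasterLease()
m.maybe_renew()
m.local_data["x"] = 1
print(m.read("x"))   # 1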

What effects do master turnover and aborted operations have? And how are master turnover and aborted operations detected?

It seems to have little effect on clients; the client just needs to be told that the master has changed and that it should send its requests to the new master. If the epoch number that comes back is the same, no master change has occurred in between.
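
A sketch of how a client could use the epoch number to notice a master change between two operations; the reply shape and field names are assumptions.

class Client:
    # Detect master turnover by comparing the epoch numbers returned with replies.
    def __init__(self):
        self.last_epoch = None

    def master_changed(self, reply):
        changed = self.last_epoch is not None and reply["epoch"] != self.last_epoch
        self.last_epoch = reply["epoch"]
        return changed   # True means a master change happened between the two requests

c = Client()
print(c.master_changed({"epoch": 7, "result": "ok"}))   # False: nothing to compare yet
print(c.master_changed({"epoch": 7, "result": "ok"}))   # False: same epoch, same master
print(c.master_changed({"epoch": 8, "result": "ok"}))   # True: the master changed in between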

How does Chubby tell clients which replica is the master?

Every replica knows who the master is.

On group membership

Group membership is still not very clear to me. The paper only says that the literature does not spell this out, and that there is no proof of the correctness of using the algorithm to implement group membership; the topic is basically mentioned only in passing. I will look at other papers to see whether they explain it.

Snapshots

Two problems: it requires unbounded amounts of disk space; and perhaps worse, it may result in unbounded recovery time since a recovering replica has to replay a potentially long log before it has fully caught up with other replicas.

So it is necessary, at some point in the operation log, to serialize the in-memory data structure corresponding to that point to disk and truncate the log records before it.

Also, taking snapshots is not the job of the Paxos framework itself, because its only concern is the consistency of the replicated log.

the lagging replica asks for and receives the remaining log records from the leading replica to bring it fully up-to-date

A snapshot has to correspond to a particular log position, so how is this handled while the system stays online? Here is what the paper says:

Our first implementation of the fault-tolerant database blocked the system very briefly while making an in-memory copy of the (small) database. It then stored the copied data on disk via a separate thread. Subsequently we implemented virtually pause-less snapshots. We now use a “shadow” data structure to track updates while the underlying database is serialized to disk.
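
A sketch of the pause-less idea: while the underlying map is being serialized to disk, incoming updates go into a shadow structure and are folded back in when the snapshot finishes. This only illustrates the technique the paper names; it is not their data structure.

import json, os, tempfile, threading

class SnapshottingDB:
    def __init__(self):
        self.data = {}
        self.shadow = None          # collects updates while a snapshot is in progress
        self.lock = threading.Lock()

    def put(self, key, value):
        with self.lock:
            if self.shadow is not None:
                self.shadow[key] = value   # snapshot in progress: write to the shadow
            else:
                self.data[key] = value

    def snapshot(self, path):
        with self.lock:
            self.shadow = {}               # the brief pause: just flip to shadow mode
        with open(path, "w") as f:
            json.dump(self.data, f)        # the long serialization runs outside the lock
        with self.lock:
            self.data.update(self.shadow)  # fold the shadowed updates back in
            self.shadow = None

db = SnapshottingDB()
db.put("x", 1)
db.snapshot(os.path.join(tempfile.mkdtemp(), "snapshot.json"))
db.put("y", 2)
print(db.data)   # {'x': 1, 'y': 2}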

Would anything go wrong if we sent accept messages directly instead of running the full Paxos algorithm?

This comes back to why two phases are needed. In Lamport's account in Paxos Made Simple, the first phase lets a proposer learn whether some proposal may already have been chosen, and the second phase ensures that the value it proposes is the value of the highest-numbered proposal reported back, which is what keeps an already chosen value from being overwritten.

Does Chubby use TCP or UDP?

The paper describes it this way: This would seem ideal, except that it introduced a tension in our choice of protocol. TCP’s back off policies pay no attention to higher-level timeouts such as Chubby leases, so TCP-based KeepAlives led to many lost sessions at times of high network congestion. We were forced to send KeepAlive RPCs via UDP rather than TCP; UDP has no congestion avoidance mechanisms, so we would prefer to use UDP only when high-level timebounds must be met.

References

The Chubby paper: The Chubby Lock Service for Loosely-Coupled Distributed Systems, http://static.googleusercontent.com/external_content/untrusted_dlcp/research.google.com/en//archive/chubby-osdi06.pdf

The implementation paper: Paxos Made Live – An Engineering Perspective, http://www.cs.ucla.edu/~kohler/class/08w-dsi/chandra07paxos.pdf
