Recovering a Crashed HBase Cluster: A Walkthrough

【Preface】Our department recently had to migrate a production HBase cluster to a new network segment for data-security reasons. After the cluster was restarted, the following two errors prevented it from recovering: (1) the HMaster nodes kept aborting; (2) some regions were offline or stuck in RIT. Because this production cluster serves seventy to eighty business lines, we had to restore availability within a fixed window, and without losing any user data.

【HMaster Keeps Aborting】

Symptom: the HA HMaster nodes started, but after roughly 3 to 5 minutes the active HBase Master aborted. Because the cluster runs the HMaster in HA mode, the standby HBase Master then took over as active, and after about the same interval it aborted as well.

The Master log showed the following fatal error (Failed to become active master java.io.IOException: Timedout 300000ms waiting for namespace table to be assigned):

2018-12-23 01:12:56,617 FATAL [master2:16000.activeMasterManager] (org.apache.hadoop.hbase.master.HMaster$1.run:1650)  - Failed to become active master
java.io.IOException: Timedout 300000ms waiting for namespace table to be assigned
        at org.apache.hadoop.hbase.master.TableNamespaceManager.start(TableNamespaceManager.java:104)
        at org.apache.hadoop.hbase.master.HMaster.initNamespace(HMaster.java:985)
        at org.apache.hadoop.hbase.master.HMaster.finishActiveMasterInitialization(HMaster.java:779)
        at org.apache.hadoop.hbase.master.HMaster.access$500(HMaster.java:182)
        at org.apache.hadoop.hbase.master.HMaster$1.run(HMaster.java:1646)
        at java.lang.Thread.run(Thread.java:745)
2018-12-23 01:12:56,620 FATAL [master2:16000.activeMasterManager] (org.apache.hadoop.hbase.master.HMaster.abort:2095)  - Master server abort: loaded coprocessors are: []
2018-12-23 01:12:56,620 FATAL [master2:16000.activeMasterManager] (org.apache.hadoop.hbase.master.HMaster.abort:2098)  - Unhandled exception. Starting shutdown.
java.io.IOException: Timedout 300000ms waiting for namespace table to be assigned
        at org.apache.hadoop.hbase.master.TableNamespaceManager.start(TableNamespaceManager.java:104)
        at org.apache.hadoop.hbase.master.HMaster.initNamespace(HMaster.java:985)
        at org.apache.hadoop.hbase.master.HMaster.finishActiveMasterInitialization(HMaster.java:779)
        at org.apache.hadoop.hbase.master.HMaster.access$500(HMaster.java:182)
        at org.apache.hadoop.hbase.master.HMaster$1.run(HMaster.java:1646)
        at java.lang.Thread.run(Thread.java:745)
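
The 300000 ms in this message is the time the master waits for the hbase:namespace table to be assigned during initialization. If assignment is merely slow rather than stuck, raising the timeout can buy time; a minimal sketch, assuming HBase 1.x where this is governed by hbase.master.namespace.init.timeout (in our case the assignment was genuinely blocked, so a longer timeout alone would not have helped):

<property>
     <name>hbase.master.namespace.init.timeout</name>
     <!-- default is 300000 ms (5 minutes); the value below is only an example -->
     <value>1800000</value>
</property>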

Reading further down the log shows the underlying cause:

2018-12-23 01:12:56,655 ERROR [MASTER_SERVER_OPERATIONS-master2:16000-3] (org.apache.hadoop.hbase.executor.EventHandler.handleException:226)  - Caught throwable while processing event M_SERVER_SHUTDOWN
java.io.IOException: failed log splitting for hbase2.yun,16020,1543323308010, will retry
        at org.apache.hadoop.hbase.master.handler.ServerShutdownHandler.resubmit(ServerShutdownHandler.java:356)
        at org.apache.hadoop.hbase.master.handler.ServerShutdownHandler.process(ServerShutdownHandler.java:220)
        at org.apache.hadoop.hbase.executor.EventHandler.run(EventHandler.java:128)
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
        at java.lang.Thread.run(Thread.java:745)
Caused by: java.io.IOException: error or interrupted while splitting logs in [hdfs://hxcs/apps/hbase/data/WALs/hbase2.yun,16020,1543323308010-splitting] Task = installed = 510 done = 79 error = 0
        at org.apache.hadoop.hbase.master.SplitLogManager.splitLogDistributed(SplitLogManager.java:290)
        at org.apache.hadoop.hbase.master.MasterFileSystem.splitLog(MasterFileSystem.java:391)
        at org.apache.hadoop.hbase.master.MasterFileSystem.splitLog(MasterFileSystem.java:364)
        at org.apache.hadoop.hbase.master.MasterFileSystem.splitLog(MasterFileSystem.java:286)
        at org.apache.hadoop.hbase.master.handler.ServerShutdownHandler.process(ServerShutdownHandler.java:213)
        ... 4 more
2018-12-23 01:12:56,656 WARN  [MASTER_SERVER_OPERATIONS-master2:16000-0] (org.apache.hadoop.hbase.master.SplitLogManager.waitForSplittingCompletion:368)  - Stopped while waiting for log splits to be completed

Fix: the cluster had been stopped by the maintenance crew while a log-split operation was still running (we could confirm HBase was stopped cleanly), so on restart it failed to recover the pending WAL splits (pinning down the true root cause will require a follow-up test after the incident). Since the HMaster was aborting precisely because it could not recover those WAL split files, the workaround was to move everything under /apps/hbase/data/WALs/ out of the way so the cluster could start. After restarting HBase, the HMaster stayed up and the aborts did not recur.
A matching case: https://community.hortonworks.com/questions/33140/hbase-master-fails-to-start.html
Note: removing the WAL files loses the data they contain, which must be restored later, so make absolutely sure to back them up first, as in the sketch below.
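
A minimal sketch of the backup-then-remove step, assuming the HBase root dir /apps/hbase/data seen in the logs above and /tmp/apps/hbase/data as the backup location (the same path the WALs are replayed from with WALPlayer later); HBase must be fully stopped first:

# Move (do not delete) the WAL files so they can be replayed later.
hdfs dfs -mkdir -p /tmp/apps/hbase/data
hdfs dfs -mv /apps/hbase/data/WALs /tmp/apps/hbase/data/WALs
# Confirm the backup is in place before restarting HBase.
hdfs dfs -ls /tmp/apps/hbase/data/WALs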

【Offline Regions and RIT】

Symptom: after the HMaster recovered, the cluster began bringing regions back online. The web UI showed a large number of Offline Regions and RIT (Region-In-Transition) entries.

The RegionServer logs repeatedly showed errors like the following:

2018-12-23 02:53:59,912 INFO  [CachedHtables-pool22-t1] (org.apache.hadoop.hbase.client.AsyncProcess$AsyncRequestFutureImpl.resubmit:1169)  - #92, table=STOCKDIARY_INDEX_2, attempt=118/350 failed=1ops, last exception: org.apache.hadoop.hbase.exceptions.RegionOpeningException: org.apache.hadoop.hbase.exceptions.RegionOpeningException: Region STOCKDIARY_INDEX_2,\x02\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00,1527595214520.356bf32cca660d985087a0041b70129d. is opening on datanode158,16020,1545498423482
        at org.apache.hadoop.hbase.regionserver.HRegionServer.getRegionByEncodedName(HRegionServer.java:2895)
        at org.apache.hadoop.hbase.regionserver.RSRpcServices.getRegion(RSRpcServices.java:947)
        at org.apache.hadoop.hbase.regionserver.RSRpcServices.multi(RSRpcServices.java:1991)
        at org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:32213)
        at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2114)
        at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:101)
        at org.apache.hadoop.hbase.ipc.RpcExecutor.consumerLoop(RpcExecutor.java:130)
        at org.apache.hadoop.hbase.ipc.RpcExecutor$1.run(RpcExecutor.java:107)
        at java.lang.Thread.run(Thread.java:745)
 on datanode158,16020,1543323316468, tracking started null, retrying after=20152ms, replay=1ops
2018-12-23 02:54:02,420 INFO  [hbase2.yun,16020,1545498428315-recovery-writer--pool6-t11] (org.apache.hadoop.hbase.client.AsyncProcess$AsyncRequestFutureImpl.waitUntilDone:1566)  - #92, waiting for 1  actions to finish
2018-12-23 02:54:02,925 INFO  [CachedHtables-pool7-t3] (org.apache.hadoop.hbase.client.AsyncProcess$AsyncRequestFutureImpl.resubmit:1169)  - #40, table=STOCKDIARY_INDEX, attempt=126/350 failed=1ops, last exception: org.apache.hadoop.hbase.NotServingRegionException: org.apache.hadoop.hbase.NotServingRegionException: Region STOCKDIARY_INDEX,\x09\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00,1527062480471.31c6c692a383e8f9190338b8ed58da24. is not online on hbase1.yun,16020,1545498434507
        at org.apache.hadoop.hbase.regionserver.HRegionServer.getRegionByEncodedName(HRegionServer.java:2898)
        at org.apache.hadoop.hbase.regionserver.RSRpcServices.getRegion(RSRpcServices.java:947)
        at org.apache.hadoop.hbase.regionserver.RSRpcServices.multi(RSRpcServices.java:1991)
        at org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:32213)
        at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2114)
        at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:101)
        at org.apache.hadoop.hbase.ipc.RpcExecutor.consumerLoop(RpcExecutor.java:130)
        at org.apache.hadoop.hbase.ipc.RpcExecutor$1.run(RpcExecutor.java:107)
        at java.lang.Thread.run(Thread.java:745)
 on hbase1.yun,16020,1543323426969, tracking started null, retrying after=20015ms, replay=1ops

Fix: HDFS reported no missing blocks, so our repair attempts with hbase hbck and related commands all failed. Searching the HDP community turned up a very similar case (an environment almost identical to ours): the region-open handler threads had deadlocked, which can be resolved by raising the thread count. After applying the configuration change below and restarting the HBase cluster, everything returned to normal.
A matching case: https://community.hortonworks.com/questions/8757/phoenix-local-indexes.html
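
A sketch of typical hbck repair attempts on HBase 1.x (the exact flags we ran are not recorded here; these are the standard options, and none of them cleared the RITs in this case):

# Report inconsistencies only; makes no changes.
hbase hbck
# Try to repair assignments for regions stuck in transition.
hbase hbck -fixAssignments
# Additionally repair hbase:meta entries.
hbase hbck -fixAssignments -fixMeta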

Configuration change (in hbase-site.xml on every RegionServer; the HBase 1.x default for this property is 3 threads):

<property>
     <name>hbase.regionserver.executor.openregion.threads</name> 
     <value>100</value> 
</property>
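
After the restart, the RIT backlog should drain steadily. Besides watching the Master web UI, one way to track it is the region-in-transition znode; a sketch assuming the default HBase 1.x ZooKeeper layout with root znode /hbase:

# Each child znode is one region currently in transition.
hbase zkcli ls /hbase/region-in-transition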

【Restoring the Data from the WAL Files】

Our HBase cluster stores live user data, so data loss is simply not acceptable. Because the WAL files were moved aside while fixing the HMaster aborts, that data had to be restored once the cluster was running normally again.

Command: hbase org.apache.hadoop.hbase.mapreduce.WALPlayer [options] <wal inputdir> <tables> [<tableMappings>]
Description: replays WAL files into the corresponding tables; with bulk output enabled it instead generates HFiles, which are then bulk-loaded into the tables (see the sketch after the examples).

# replay all tables
hbase org.apache.hadoop.hbase.mapreduce.WALPlayer /tmp/apps/hbase/data/WALs/
# replay a single table
hbase org.apache.hadoop.hbase.mapreduce.WALPlayer /tmp/apps/hbase/data/WALs/ table1
# replay multiple tables
hbase org.apache.hadoop.hbase.mapreduce.WALPlayer /tmp/apps/hbase/data/WALs/ table1,table2,table3
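
The bulk-load path mentioned above: WALPlayer can write HFiles instead of issuing live puts, and the HFiles are then bulk-loaded in a second step, which keeps replay load off the write path. A sketch using the standard wal.bulk.output option (bulk output handles one table at a time; the output path here is just an example):

# Step 1: turn the WAL edits for table1 into HFiles under the output directory.
hbase org.apache.hadoop.hbase.mapreduce.WALPlayer -Dwal.bulk.output=/tmp/walplayer-out /tmp/apps/hbase/data/WALs/ table1
# Step 2: bulk-load the generated HFiles into the table.
hbase org.apache.hadoop.hbase.mapreduce.LoadIncrementalHFiles /tmp/walplayer-out table1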

On a cluster that uses the Phoenix plugin, the commands above fail with the following error:

java.lang.ClassNotFoundException: org.apache.hadoop.hbase.regionserver.wal.IndexedWALEditCodec

The Phoenix server jar must be added to the command's classpath, for example:

hbase org.apache.hadoop.hbase.mapreduce.WALPlayer -libjars=/usr/data/phoenix-4.9.0-HBase-1.1-server.jar /tmp/apps/hbase/data/WALs/ table1

When testing the WALPlayer command you may want to back up the affected tables first, in case anything unexpected happens:

# back up within the same cluster
hbase org.apache.hadoop.hbase.mapreduce.CopyTable -libjars=/usr/data/hbase-client/lib/phoenix-4.9.0-HBase-1.1-server.jar --new.name=newTableName oldTableName
# back up across clusters
hbase org.apache.hadoop.hbase.mapreduce.CopyTable --peer.adr=server1,server2,server3:2181:/hbase -libjars=/usr/data/hbase-client/lib/phoenix-4.9.0-HBase-1.1-server.jar --new.name=newTableName oldTableName
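
To sanity-check a backup (or a WALPlayer replay), comparing row counts between the original table and the copy is a cheap first test:

# Each command runs a MapReduce job that counts the table's rows; compare the results.
hbase org.apache.hadoop.hbase.mapreduce.RowCounter oldTableName
hbase org.apache.hadoop.hbase.mapreduce.RowCounter newTableName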

Original author: 伍柒大人的三言两语 (blog: https://www.jianshu.com/u/e97bb429f278)
Original post: https://www.jianshu.com/p/35a8d2640122