HBase: A Region Stuck in RIT (a Hole in the Region Chain)

Quick notes:

The HBase web UI shows a region stuck in the following state for a long time:

app_user_isnew,02,1517807389209.3eb41df715cdd0f9a2b0ce6550b586b3. state=PENDING_OPEN, ts=Wed Mar 14 21:22:10 CST 2018 (396447s ago), server=yq-hadoop184132,60020,1520836279511
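
The same information can be pulled from the command line. On a live cluster, `echo "status 'detailed'" | hbase shell` prints a regionsInTransition section; the sketch below filters a saved copy of that output (the sample lines are illustrative, and the exact format varies by HBase version):

```shell
# On a live cluster, capture the detailed status first:
#   echo "status 'detailed'" | hbase shell > status.txt
# Sample report content (illustrative only; format differs across versions):
cat > status.txt <<'EOF'
1 regionsInTransition
    app_user_isnew,02,1517807389209.3eb41df715cdd0f9a2b0ce6550b586b3. state=PENDING_OPEN, server=yq-hadoop184132,60020,1520836279511
EOF
# Pull out the transition section and the region(s) listed under it.
grep -A1 'regionsInTransition' status.txt
```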

Regions in Transition: sure enough, a RIT has appeared.
Run the hbase hbck command to check:

ERROR: Region { meta => app_user_isnew,02,1517807389209.3eb41df715cdd0f9a2b0ce6550b586b3., hdfs => hdfs://yq-hadoop19:8020/hbase/data/default/app_user_isnew/3eb41df715cdd0f9a2b0ce6550b586b3, deployed => , replicaId => 0 } not deployed on any region server.
18/03/19 11:40:08 INFO util.HBaseFsck: Handling overlap merges in parallel. set hbasefsck.overlap.merge.parallel to false to run serially.
ERROR: There is a hole in the region chain between 02 and 03.  You need to create a new .regioninfo and region dir in hdfs to plug the hole.
ERROR: Found inconsistency in table app_user_isnew
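
On a large cluster the full hbck report gets long. hbck in HBase 1.x accepts table names as trailing arguments, so the check can be scoped to the affected table, and grepping for the ERROR lines isolates the inconsistencies. A sketch over a saved report (the sample lines mirror the errors above):

```shell
# On a live cluster, scope the check to one table and save the report:
#   hbase hbck -details app_user_isnew > hbck.txt 2>&1
# Sample report content, mirroring the errors shown above:
cat > hbck.txt <<'EOF'
ERROR: Region { meta => app_user_isnew,02,1517807389209.3eb41df715cdd0f9a2b0ce6550b586b3., deployed => , replicaId => 0 } not deployed on any region server.
ERROR: There is a hole in the region chain between 02 and 03.  You need to create a new .regioninfo and region dir in hdfs to plug the hole.
ERROR: Found inconsistency in table app_user_isnew
EOF
# Count the inconsistencies, then show only the ERROR lines.
grep -c '^ERROR:' hbck.txt
grep '^ERROR:' hbck.txt
```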

The check shows that the table has a hole in its region chain, and we locate the following region:

| Region | RegionServer | StartKey | EndKey |
| --- | --- | --- | --- |
| app_user_isnew,02,1517807389209.3eb41df715cdd0f9a2b0ce6550b586b3. | yq-hadoop184140:60020 | 02 | 03 |

Check the HBase metadata: the row for this region exists in hbase:meta, as shown below:

hbase(main):053:0> get 'hbase:meta','app_user_isnew,02,1517807389209.3eb41df715cdd0f9a2b0ce6550b586b3.'
COLUMN                                                    CELL                                                                                                                                                                  
 info:regioninfo                                          timestamp=1517807391090, value={ENCODED => 3eb41df715cdd0f9a2b0ce6550b586b3, NAME => 'app_user_isnew,02,1517807389209.3eb41df715cdd0f9a2b0ce6550b586b3.', STARTKEY => 
                                                          '02', ENDKEY => '03'}                                                                                                                                                 
 info:seqnumDuringOpen                                    timestamp=1520449077356, value=\x00\x00\x00\x00\x00\x00\x00\x13                                                                                                       
 info:server                                              timestamp=1520449077356, value=yq-hadoop184140:60020                                                                                                                  
 info:serverstartcode                                     timestamp=1520449077356, value=1520214106665                                                                                                                          
4 row(s) in 0.0070 seconds
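
A hole can also be confirmed independently of hbck: in a healthy table, every region's STARTKEY equals some other region's ENDKEY (with empty keys at the two ends of the table). On a live cluster the boundaries come from meta, e.g. scan 'hbase:meta', {ROWPREFIXFILTER => 'app_user_isnew,', COLUMNS => 'info:regioninfo'}. The sketch below runs the same check on a hypothetical startkey,endkey list in which the region [02,03) is missing:

```shell
# Hypothetical boundary list (startkey,endkey per line); region [02,03) is missing.
cat > chain.txt <<'EOF'
,01
01,02
03,04
04,
EOF
# Any STARTKEY that is not also some region's ENDKEY marks the far edge of a hole.
# (Process substitution below requires bash.)
comm -23 <(awk -F, '{print $1}' chain.txt | sort) \
         <(awk -F, '{print $2}' chain.txt | sort)
```

The output 03 means the region starting at 03 has no predecessor ending at 03, which matches hbck's "hole in the region chain between 02 and 03".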

Check the HDFS directory: the .regioninfo file exists as well:

$ hdfs dfs -ls /hbase/data/default/app_user_isnew/3eb41df715cdd0f9a2b0ce6550b586b3
Found 3 items
-rw-r--r--   3 hbase hbase         53 2018-02-05 13:09 /hbase/data/default/app_user_isnew/3eb41df715cdd0f9a2b0ce6550b586b3/.regioninfo
drwxr-xr-x   - hbase hbase          0 2018-02-05 13:09 /hbase/data/default/app_user_isnew/3eb41df715cdd0f9a2b0ce6550b586b3/f1
drwxr-xr-x   - hbase hbase          0 2018-03-08 02:57 /hbase/data/default/app_user_isnew/3eb41df715cdd0f9a2b0ce6550b586b3/recovered.edits

Since the region is present in both the meta table and HDFS, this looks like a region assignment problem. Run the following command:

hbase hbck -fixAssignments
This command repairs regions that are unassigned, incorrectly assigned, or assigned to multiple region servers.

The output:

# hbase hbck -fixAssignments
18/03/19 14:14:19 INFO Configuration.deprecation: fs.default.name is deprecated. Instead, use fs.defaultFS
HBaseFsck command line options: -fixAssignments
18/03/19 14:14:19 WARN util.HBaseFsck: Got AccessDeniedException when preCheckPermission 
org.apache.hadoop.hbase.security.AccessDeniedException: Permission denied: action=WRITE path=hdfs://yq-hadoop19:8020/hbase/.hbase-snapshot user=hdfs
    at org.apache.hadoop.hbase.util.FSUtils.checkAccess(FSUtils.java:1797)
    at org.apache.hadoop.hbase.util.HBaseFsck.preCheckPermission(HBaseFsck.java:1929)
    at org.apache.hadoop.hbase.util.HBaseFsck.exec(HBaseFsck.java:4731)
    at org.apache.hadoop.hbase.util.HBaseFsck$HBaseFsckTool.run(HBaseFsck.java:4559)
    at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:70)
    at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:84)
    at org.apache.hadoop.hbase.util.HBaseFsck.main(HBaseFsck.java:4547)
Current user hdfs does not have write perms to hdfs://yq-hadoop19:8020/hbase/.hbase-snapshot. Please rerun hbck as hdfs user hbase

The error says the command should be run as the hbase user, so rerun it as hbase:

sudo -u hbase hbase hbck -fixAssignments

The result this time:
0 inconsistencies detected.
Status: OK
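
It is worth rerunning the check, scoped to the table, to confirm nothing else is off (a sketch; this needs a live cluster and should be run as the hbase user):

```shell
# Re-check just the repaired table; the report should end with "Status: OK".
sudo -u hbase hbase hbck app_user_isnew 2>/dev/null | tail -n 2
```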

Solved!
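
For reference: -fixAssignments was enough here because both the meta row and the HDFS region directory were intact. Had either side been missing, hbck in HBase 1.x has hole-specific repair options (these rewrite meta and/or HDFS, so read the hbck documentation for your exact version before using them):

```shell
# Rebuild missing or incorrect hbase:meta rows from the region info in HDFS.
sudo -u hbase hbase hbck -fixMeta app_user_isnew
# Plug holes by creating empty region directories in HDFS.
sudo -u hbase hbase hbck -fixHdfsHoles app_user_isnew
# Shortcut combining -fixAssignments -fixMeta -fixHdfsHoles -fixHdfsOrphans.
sudo -u hbase hbase hbck -repairHoles app_user_isnew
```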


    Original author: 步闲
    Original link: https://www.jianshu.com/p/d5697506741e