Operating on Phoenix-backed HBase tables from the Hive shell

Environment

hive-2.3.2
hbase-1.4.2
phoenix-4.14.0-HBase-1.4
spark-2.3.0-bin-hadoop2.7

The goal is to use Phoenix as the unified way of operating on HBase tables, so first create a test table in the Phoenix shell and add some data:

CREATE TABLE TEST (ID VARCHAR NOT NULL, TEXT VARCHAR CONSTRAINT PK PRIMARY KEY (ID));
upsert into TEST values('1001','test-text1');
upsert into TEST values('1002','test-text2');

Querying the table shows the data:

0: jdbc:phoenix:master> select * from TEST;
+-------+-------------+
|  ID   |    TEXT     |
+-------+-------------+
| 1001  | test-text1  |
| 1002  | test-text2  |
+-------+-------------+
2 rows selected (0.021 seconds)

At this point the data exists in the underlying HBase table. Since there is also a requirement to work with this HBase data from Hive, the straightforward idea is to create an external table in Hive that maps onto the corresponding HBase table, with the following DDL:

CREATE EXTERNAL TABLE TEST (ID string, TEXT string)
STORED BY 'org.apache.hadoop.hive.hbase.HBaseStorageHandler'
WITH SERDEPROPERTIES ("hbase.columns.mapping" = ":key,0:TEXT")
TBLPROPERTIES ("hbase.table.name" = "TEST");

After creating the table, a query returns no data:

hive> show create table test;
OK
CREATE EXTERNAL TABLE `test`(
  `id` int COMMENT '', 
  `text` string COMMENT '')
ROW FORMAT SERDE 
  'org.apache.hadoop.hive.hbase.HBaseSerDe' 
STORED BY 
  'org.apache.hadoop.hive.hbase.HBaseStorageHandler' 
WITH SERDEPROPERTIES ( 
  'hbase.columns.mapping'=':key,0:TEXT', 
  'serialization.format'='1')
TBLPROPERTIES (
  'hbase.table.name'='TEST', 
  'transient_lastDdlTime'='1535441141')
Time taken: 0.178 seconds, Fetched: 13 row(s)
hive> select * from test;
OK
Time taken: 1.643 seconds
hive> exit;

Scanning the table in the hbase shell shows how the data is actually stored:

hbase(main):001:0> scan 'TEST'
ROW  COLUMN+CELL
 1001 column=0:\x00\x00\x00\x00, timestamp=1535528809205, value=x
 1001 column=0:\x80\x0B, timestamp=1535528809205, value=test-text1
 1002 column=0:\x00\x00\x00\x00, timestamp=1535528816321, value=x
 1002 column=0:\x80\x0B, timestamp=1535528816321, value=test-text2
2 row(s) in 0.2990 seconds

The column qualifiers are byte arrays rather than the original column names; this should be a transformation done by Phoenix. To compare, manually put a record in the hbase shell:

hbase(main):001:0> put 'TEST','1003','0:TEXT','test-text3'
0 row(s) in 0.2900 seconds

hbase(main):002:0> scan 'TEST'
ROW  COLUMN+CELL
 1001 column=0:\x00\x00\x00\x00, timestamp=1535528809205, value=x
 1001 column=0:\x80\x0B, timestamp=1535528809205, value=test-text1
 1002 column=0:\x00\x00\x00\x00, timestamp=1535528816321, value=x
 1002 column=0:\x80\x0B, timestamp=1535528816321, value=test-text2
 1003 column=0:TEXT, timestamp=1535529809368, value=test-text3
3 row(s) in 0.0420 seconds

Querying from the Hive shell now returns:

hive> select * from test;
OK
1003    test-text3
Time taken: 1.764 seconds, Fetched: 1 row(s)

So Hive apparently needs the Phoenix libraries in order to read tables written through Phoenix.
Following the description at https://phoenix.apache.org/hive_storage_handler.html, add an environment variable in hive-env.sh:

export HIVE_AUX_JARS_PATH=/opt/hive-aux

/opt/hive-aux holds the Phoenix libraries; put the two jars that are actually needed in there:

[root@a9ae631a3a34 hive-aux]# ll
total 183776
-rw-r--r-- 1 root root 107956190 Aug 29 05:51 phoenix-4.14.0-HBase-1.4-client.jar
-rw-r--r-- 1 root root  80217117 Aug 29 04:28 phoenix-4.14.0-HBase-1.4-hive.jar
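
As an alternative to HIVE_AUX_JARS_PATH, the same jars can also be added from inside a single Hive session for a quick test; a minimal sketch using the paths above (this is session-scoped only, so the hive-env.sh approach remains the permanent fix):

-- Session-scoped alternative to HIVE_AUX_JARS_PATH (same jars as above)
ADD JAR /opt/hive-aux/phoenix-4.14.0-HBase-1.4-hive.jar;
ADD JAR /opt/hive-aux/phoenix-4.14.0-HBase-1.4-client.jar;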

Recreate the external table in the Hive shell, this time with the Phoenix storage handler:

hive> drop table test;
OK
Time taken: 0.193 seconds
hive> create external table TEST (
    > id string,
    > text string
    > )
    > STORED BY 'org.apache.phoenix.hive.PhoenixStorageHandler'
    > TBLPROPERTIES (
    > "phoenix.table.name" = "TEST",
    > "phoenix.zookeeper.quorum" = "master",
    > "phoenix.zookeeper.znode.parent" = "/hbase",
    > "phoenix.zookeeper.client.port" = "2181",
    > "phoenix.rowkeys" = "id",
    > "phoenix.column.mapping" = "id:ID,text:TEXT"
    > );
OK
Time taken: 2.763 seconds
hive> select * from test;
OK
1001    test-text1
1002    test-text2
Time taken: 1.486 seconds, Fetched: 2 row(s)
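
Reads now go through the Phoenix storage handler. Writing from Hive should also go through the handler; a hedged sketch (whether it works depends on the handler's write support in this hive-2.3.2/phoenix-4.14.0 combination, and the values are made up):

-- Hypothetical write through the Phoenix storage handler; if it succeeds,
-- the new row should also be visible from the Phoenix shell.
INSERT INTO TABLE test VALUES ('1004', 'test-text4');
SELECT * FROM test WHERE id = '1004';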

The data returned above is consistent with what the Phoenix interface shows, but the row that was put directly in the hbase shell is no longer returned. This again confirms that different interfaces write data to HBase in different formats. The differences observed so far:
1. Column qualifiers written through the Phoenix API are converted to byte arrays (a way to keep them readable on the Phoenix side is sketched below).
2. Column qualifiers written with put in the hbase shell are plain strings.
3. Hive's HBaseStorageHandler also treats column qualifiers as plain strings.
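
The byte arrays in point 1 come from Phoenix's column encoding, which is on by default since Phoenix 4.10. If plain string qualifiers are needed (for example so that hbase shell puts and the plain HBaseStorageHandler mapping line up with Phoenix), encoding can presumably be disabled when the Phoenix table is created; a sketch, with TEST_PLAIN as a hypothetical table name:

-- Phoenix DDL sketch: COLUMN_ENCODED_BYTES = 0 turns column encoding off,
-- so the qualifier is stored as the literal column name (e.g. 0:TEXT).
CREATE TABLE TEST_PLAIN (
ID VARCHAR NOT NULL,
TEXT VARCHAR
CONSTRAINT PK PRIMARY KEY (ID)
) COLUMN_ENCODED_BYTES = 0;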

Next, query the TEST table just created in HBase through Spark SQL as well, so the stack becomes
hbase > phoenix > hive > spark-sql.
Start the spark-sql shell with the following command:

/opt/spark-2.3.0-bin-hadoop2.7/bin/spark-sql --master=spark://master:7077 --total-executor-cores=2 --executor-memory=512m --jars=/opt/hive-aux/*

The query result is:

         > select * from testdb.test;
18/08/29 08:30:31 WARN RpcControllerFactory: Cannot load configured "hbase.rpc.controllerfactory.class" (org.apache.hadoop.hbase.ipc.controller.ClientRpcControllerFactory) from hbase-site.xml, falling back to use default RpcControllerFactory
1001    test-text1
1002    test-text2
Time taken: 11.923 seconds, Fetched 2 row(s)

This result is consistent with the query result in the Hive shell.
Note:

Do not copy phoenix-4.14.0-HBase-1.4-client.jar and phoenix-4.14.0-HBase-1.4-hive.jar directly into $SPARK_HOME/jars; the resulting library conflicts cause java.lang.NoSuchMethodError exceptions.

Addendum:
If the HBase table uses multiple fields as its primary key, the mapping can follow the DDL below; a short usage sketch follows the DDL.

# Phoenix DDL
CREATE TABLE TEST_COMBPK(
ID1 VARCHAR NOT NULL,
ID2 VARCHAR NOT NULL,
NUM DOUBLE,
TEXT VARCHAR
CONSTRAINT PK PRIMARY KEY (ID1, ID2)
);

# Hive DDL
create external table TEST_COMBPK (
id1 string,
id2 string,
num double,
text string
)
STORED BY 'org.apache.phoenix.hive.PhoenixStorageHandler'
TBLPROPERTIES (
"phoenix.table.name" = "TEST_COMBPK",
"phoenix.zookeeper.quorum" = "master",
"phoenix.zookeeper.znode.parent" = "/hbase",
"phoenix.zookeeper.client.port" = "2181",
"phoenix.rowkeys" = "id1,id2",
"phoenix.column.mapping" = "id1:ID1,id2:ID2,num:NUM,text:TEXT"
);

# Without the Phoenix storage handler mapping; this approach cannot read data written through the Phoenix API
CREATE EXTERNAL TABLE TEST_COMBPK(ID STRUCT<ID1:STRING, ID2:STRING>, NUM DOUBLE, TEXT STRING)
STORED BY 'org.apache.hadoop.hive.hbase.HBaseStorageHandler'
WITH SERDEPROPERTIES ("hbase.columns.mapping" = ":key,0:NUM,0:TEXT"")
TBLPROPERTIES ("hbase.table.name" = "TEST_COMBPK");
Original author: soycici
Original post: https://www.jianshu.com/p/09c30d2074d6