Scheduled incremental import of MySQL data into HDFS (Hive) with Sqoop

  • Requirement

There are two large MySQL tables, roughly 100 million and 450 million rows respectively (large enough that the DBAs are considering sharding), and both grow incrementally. Spark jobs need to process this data, but reading straight from MySQL struggles at this scale, so the plan is to have Sqoop incrementally import into Hive on a schedule and let Spark SQL query Hive instead, sidestepping many of MySQL's bottlenecks. After several days of experimenting with scheduled Sqoop jobs, the setup below uses Sqoop 1, version sqoop-1.4.6-cdh5.7.0.

1. Create and run the Sqoop job:

sqoop job --delete torderincrementjob   # delete the previous job first

sqoop job --create torderincrementjob -- import \
 --connect jdbc:mysql://172.16.*.*:3306/*?useCursorFetch=true \
 --username * \
 --password-file /input/sqoop/pwd/109mysql.pwd \
 --target-dir /mysqldb/t_order \
 --table t_order \
 --fields-terminated-by "\t" \
 --lines-terminated-by "\n" \
 --null-string '\\N' \
 --null-non-string '\\N' \
 --incremental append \
 --check-column id \
 --last-value 1281 \
 -m 1

Where:
--password-file reads the database password from a file stored on HDFS (see the sketch after this list for creating it)
--fields-terminated-by "\t" sets the column delimiter (tab)
--lines-terminated-by "\n" sets the row delimiter (newline)
--split-by id sets the column used to split work across mappers (not needed above, since -m 1 uses a single mapper)
--null-string '\\N' writes NULL string columns as \N, which Hive reads as NULL
--null-non-string '\\N' does the same for non-string columns
--incremental append
--check-column id
--last-value 1281
The last three options together make the import incremental.
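The password file referenced by --password-file must exist on HDFS before the job is created. A minimal sketch of creating it (the path matches the job above; 'the_real_password' is a placeholder, and echo -n matters because a trailing newline would otherwise become part of the password):

# write the MySQL password to HDFS, without a trailing newline
echo -n 'the_real_password' | hdfs dfs -put - /input/sqoop/pwd/109mysql.pwd
# lock the file down so only the job's user can read it
hdfs dfs -chmod 400 /input/sqoop/pwd/109mysql.pwd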
Once the job is created, run it with sqoop job --exec torderincrementjob. The log will contain a SELECT like this:
SELECT MIN(id), MAX(id) FROM t_order WHERE ( id >= '1281' AND id < '100701508' )
The lower bound is the id passed via --last-value. Sqoop maintains this bookkeeping itself: the next run of the job starts from id 100701508, and each run covers the interval from the previous upper bound to the current maximum id. That is what makes scheduled incremental imports to HDFS via crontab possible.
Every execution updates last-value, and the next run starts from the new value. Here are excerpts from the logs of three consecutive runs:
Lower bound value: 1281
Upper bound value: 100701508
Lower bound value: 100701508
Upper bound value: 100703035
Lower bound value: 100703035
Upper bound value: 100704475
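You do not have to dig through import logs to find the saved state; sqoop job --show prints a job's stored parameters, which in my experience with Sqoop 1.4.x include an incremental.last.value property:

# inspect the state the next run will resume from
sqoop job --show torderincrementjob | grep -i last.value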
After the first run of the job there is data on HDFS, which you can inspect from the command line or through the NameNode web UI on port 50070.
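For example, from the command line (paths follow the --target-dir above):

# each run of the job appends new part files under the target directory
hdfs dfs -ls /mysqldb/t_order
# spot-check a few imported rows
hdfs dfs -cat /mysqldb/t_order/part-m-00000 | head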

2. Create an external table in Hive over the HDFS data:
CREATE external TABLE `t_order`(
 `id` bigint, 
 `serial` string, 
 `product_id` int, 
 `product_type` tinyint, 
 `product_name` string, 
 `quantity` double, 
 `buyer_id` bigint, 
 `payer_id` bigint, 
 `price` double, 
 `vip_price` double, 
 `settle_price` double, 
 `currency` string, 
 `payer_level` tinyint, 
 `status` tinyint, 
 `pay_mode` tinyint, 
 `payment_serial` string, 
 `client_type` string, 
 `app_type` tinyint, 
 `seller_id` string, 
 `partner_id` int, 
 `reference` string, 
 `channel_source` string, 
 `note` string, 
 `expiration_time` string, 
 `operator` string, 
 `create_time` string, 
 `pay_time` string, 
 `update_time` string)
ROW FORMAT DELIMITED 
 FIELDS TERMINATED BY '\t' 
 LINES TERMINATED BY '\n' 
LOCATION
 'hdfs://golive-master:8020/mysqldb/t_order'

At this point the HDFS data can be queried through Hive. Because the table is external and unpartitioned, the part files appended by later Sqoop runs are picked up automatically:

select * from golivecms20.t_order limit 10;
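Since the whole point is to hand this data to Spark SQL, it is worth confirming the table is reachable from there too. A minimal sketch, assuming Spark is built with Hive support and points at the same metastore:

# ad-hoc sanity check from the Hive CLI
hive -e "select count(*) from golivecms20.t_order;"
# the same table queried from Spark SQL
spark-sql -e "select count(*) from golivecms20.t_order;"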
3. Scheduling with crontab

Create the following three files:

timermysqltohdfs.cron   # the cron schedule
timermysqltohdfs.sh     # the import script
timermysqltohdfs.log    # the log file

timermysqltohdfs.sh:
#!/bin/sh
current_time=$(date +%Y%m%d%H%M%S)
echo $current_time >> /data/bigdata/app/sqoopjob/timermysqltohdfs.log
echo "............................" >> /data/bigdata/app/sqoopjob/timermysqltohdfs.log
# sync t_order
/data/bigdata/sqoop-1.4.6-cdh5.7.0/bin/sqoop job --exec torderincrementjob
# sync t_userlogout
/data/bigdata/sqoop-1.4.6-cdh5.7.0/bin/sqoop job --exec tuserlogoutincrementjob
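One caveat before scheduling this: if a run ever takes longer than the gap between schedules, two executions could overlap and race on the jobs' saved last-value. A hedged way to rule that out is to serialize the script with flock (assumes util-linux flock is installed; the lock file path is my own choice):

#!/bin/sh
# exit immediately if a previous run still holds the lock
exec flock -n /data/bigdata/app/sqoopjob/timermysqltohdfs.lock -c '
  /data/bigdata/sqoop-1.4.6-cdh5.7.0/bin/sqoop job --exec torderincrementjob
  /data/bigdata/sqoop-1.4.6-cdh5.7.0/bin/sqoop job --exec tuserlogoutincrementjob
'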

timermysqltohdfs.cron (runs daily at 01:00, 07:00, 13:00, and 19:00):
00 1,7,13,19  * * * /bin/bash /data/bigdata/app/sqoopjob/timermysqltohdfs.sh >> /data/bigdata/app/sqoopjob/timermysqltohdfs.log 2>&1
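Load the cron file into the current user's crontab to activate the schedule:

crontab /data/bigdata/app/sqoopjob/timermysqltohdfs.cron   # install the schedule
crontab -l                                                 # confirm it took effect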

The other table, t_userlogout, is handled the same way. The relevant commands follow.
Create the job:

sqoop job --create tuserlogoutincrementjob -- import \
 --connect jdbc:mysql://172.16.*.*:3306/*?useCursorFetch=true \
 --username * \
 --password-file /input/sqoop/pwd/68mysql.pwd \
 --target-dir /mysqldb/t_userlogout \
 --table t_userlogout \
 --fields-terminated-by "\t" \
 --lines-terminated-by "\n" \
 --null-string '\\N' \
 --null-non-string '\\N' \
 --incremental append \
 --check-column ID \
 --last-value 1 \
 -m 1

Run it for the first time: sqoop job --exec tuserlogoutincrementjob
Create the external table in Hive:

CREATE external TABLE `t_userlogout`(
  `ID` bigint, 
  `GoliveId` string, 
  `InstalmentCode` string, 
  `ManufacturerCode` string, 
  `MacAddress` string, 
  `AreaCode` string, 
  `IpAddress` string, 
  `LoginTime` string, 
  `LogoutTime` string, 
  `DeviceID` string, 
  `VersionType` string, 
  `Version` string, 
  `Platform` string, 
  `PartnerID` int, 
  `BranchType` int, 
  `LicenseProviderCode` string)
ROW FORMAT DELIMITED 
  FIELDS TERMINATED BY '\t' 
  LINES TERMINATED BY '\n' 
LOCATION
  'hdfs://golive-master:8020/mysqldb/t_userlogout'

From there it is the same routine: the cron job picks it up and imports the increments on schedule.

  • Appendix:

Instead of importing every column with --table, you can supply the SQL yourself with --query. Sqoop requires the literal token $CONDITIONS in the WHERE clause, and because these examples wrap the query in double quotes, the shell must see it escaped as \$CONDITIONS:

--query "select ID,GoliveId,InstalmentCode,ManufacturerCode,MacAddress,IpAddress,LoginTime,VersionType,Version,PartnerID from t_userlogout where \$CONDITIONS" \

--query "select id,serial,product_id,product_type,product_name,buyer_id,price,vip_price,settle_price,status,pay_mode,client_type,seller_id,partner_id,channel_source,expiration_time,create_time,pay_time,update_time from t_order where \$CONDITIONS" \

Note that --table and --query are mutually exclusive, and --query needs an explicit --target-dir (plus --split-by when -m > 1); see the sketch below.
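For completeness, here is a hedged sketch of the t_order job in --query form (same connection, delimiters, and incremental settings as before; the column list is trimmed for brevity, and it is untested, so treat it as a template rather than a drop-in replacement):

sqoop job --create torderincrementjob -- import \
 --connect jdbc:mysql://172.16.*.*:3306/*?useCursorFetch=true \
 --username * \
 --password-file /input/sqoop/pwd/109mysql.pwd \
 --target-dir /mysqldb/t_order \
 --query "select id,serial,product_id,price,status,create_time from t_order where \$CONDITIONS" \
 --fields-terminated-by "\t" \
 --lines-terminated-by "\n" \
 --null-string '\\N' \
 --null-non-string '\\N' \
 --incremental append \
 --check-column id \
 --last-value 1281 \
 -m 1

Remember that the Hive external table's columns must line up, in order, with whatever the query selects.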

Original author: Sx_Ren
Original article: https://www.jianshu.com/p/f202ee34d1e3