Some Complex Hive Operations

I. Parsing JSON files

Example: the rating.json file

{"movie":"1193","rate":"5","timeStamp":"978300760","uid":"1"}
{"movie":"661","rate":"3","timeStamp":"978302109","uid":"1"}
{"movie":"914","rate":"3","timeStamp":"978301968","uid":"1"}
{"movie":"3408","rate":"4","timeStamp":"978300275","uid":"1"}
{"movie":"2355","rate":"5","timeStamp":"978824291","uid":"1"}
{"movie":"1197","rate":"3","timeStamp":"978302268","uid":"1"}
{"movie":"1287","rate":"5","timeStamp":"978302039","uid":"1"}

1. For each user, find their 3 highest-rated movies

2. Find the 3 movies that were rated the most times

Solving these with Hive

step1: Upload the file to the server's local filesystem

Drag the file into /home/hadoop/soft in the SFTP window.

step2: Create a table rate_json with a single column jsonline, then load the JSON file into it

create external table if not exists rate_json(jsonline string);

load data local inpath '/home/hadoop/soft/rating.json' into table rate_json;

step3: Create a table rate with columns movieId int, rate int, timeStamp bigint, uid int

create external table if not exists rate(movieId int, rate int, ts bigint, uid int) row format delimited fields terminated by '\t';

Error: FAILED: ParseException line 1:63 Failed to recognize predicate 'timeStamp'. Failed rule: 'identifier' in column specification

Cause: timeStamp is a reserved word in Hive, so the column is renamed to ts here.
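As a hedged aside: since Hive 0.13 (with the default hive.support.quoted.identifiers=column), a reserved word can also be kept as a column name by wrapping it in backticks, so renaming is a choice rather than a requirement. A minimal sketch:

-- hypothetical alternative: keep the reserved word, escaped with backticks
create external table if not exists rate2(movieId int, rate int, `timeStamp` bigint, uid int)
row format delimited fields terminated by '\t';

The downside is that the column must then be backtick-quoted in every query, which is why ts is used below.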

step4: Parse each JSON attribute into the corresponding table column. Note that the name after $ refers to the attribute of the JSON object.

insert into table rate
select get_json_object(jsonline,'$.movie') as movie,
       get_json_object(jsonline,'$.rate') as rate,
       get_json_object(jsonline,'$.timeStamp') as ts,
       get_json_object(jsonline,'$.uid') as uid
from rate_json;

step5: Query 5 rows

select * from rate limit 5;

(Screenshot: the data in table rate)
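With rate populated, the two questions from the start of this section can be answered directly. A minimal sketch, assuming Hive's window functions (available since 0.11); note that row_number breaks rating ties arbitrarily:

-- 1. each user's 3 highest-rated movies
select uid, movieid, rate
from (select uid, movieid, rate,
             row_number() over (partition by uid order by rate desc) as rn
      from rate) t
where rn <= 3;

-- 2. the 3 movies rated the most times
select movieid, count(*) as cnt
from rate
group by movieid
order by cnt desc
limit 3;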

II. Three ways to clean data

1. When creating the table, use the regex SerDe (RegexSerDe) so the special multi-character delimiter is handled at parse time

row format serde 'org.apache.hadoop.hive.serde2.RegexSerDe'
with serdeproperties('input.regex'='(.*)::(.*)::(.*)','output.format.string'='%1$s %2$s %3$s');
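For context, a minimal complete statement, assuming a movies.dat-style file (movieid::title::genres); movie_regex is a hypothetical table name, and note that RegexSerDe requires every column to be declared as string:

-- sketch: three regex capture groups map onto the three columns
create table if not exists movie_regex(movieid string, movietitle string, movietype string)
row format serde 'org.apache.hadoop.hive.serde2.RegexSerDe'
with serdeproperties('input.regex'='(.*)::(.*)::(.*)','output.format.string'='%1$s %2$s %3$s');
load data local inpath '/home/hadoop/hive/movies.dat' into table movie_regex;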

2. Write a shell script replace.sh that uses text-processing commands to turn the two-character delimiter into a single-character one

/home/hadoop/hive/shell/replace.sh

#!/bin/bash
# replace every "::" with ":" in place; grep -rl lists the files that contain "::"
sed -i "s/::/:/g" `grep :: -rl /home/hadoop/hive/ratings.dat`
sed -i "s/::/:/g" `grep :: -rl /home/hadoop/hive/movies.dat`
sed -i "s/::/:/g" `grep :: -rl /home/hadoop/hive/users.dat`

3. A custom InputFormat component

Project: InputFormat_UDF

Class names:

cn.serenity.format.MyInputFormat

cn.serenity.format.BiRecordReader

// This is the part we modified:
// while data is being loaded (i.e. via load data [local] inpath), every "||" is replaced with "|",
// which is why the table is created with '|' as the field delimiter
String str = value.toString().replaceAll("\\|\\|", "\\|");
value.set(str);

Usage: export the project as inputformat.jar

Upload it to /home/hadoop/soft/

Move it into /home/hadoop/apps/apache-hive-bin-1.2.1/lib

Restart Hive.

When creating the movie table, specify (the full statement is sketched below):

stored as inputformat 'cn.serenity.format.MyInputFormat'
outputformat 'org.apache.hadoop.hive.ql.io.HiveIgnoreKeyTextOutputFormat';
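Putting the pieces together, a hedged sketch of the full statement; the '|' delimiter matches the replacement that MyInputFormat performs at load time:

create table if not exists movie(movieid int, movietitle string, movietype string)
row format delimited fields terminated by '|'
stored as inputformat 'cn.serenity.format.MyInputFormat'
outputformat 'org.apache.hadoop.hive.ql.io.HiveIgnoreKeyTextOutputFormat';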

III. The user-defined table-generating function GenericUDTF

Example: the movie-review exercise

(8) For each genre in the review database, find the 5 best-rated movies (genre, movie title, average rating)

1. Using a UDTF

Note the fields the code expects:

fieldNames.add("movieid");
fieldNames.add("movietitle");
fieldNames.add("movietype");
String[] fields = string.split(":");

The table's column names and delimiter must match the field names and the delimiter used in the code.

create table if not exists small_movie(movieid int, movietitle string, movietype string) row format delimited fields terminated by ':';

load data local inpath '/home/hadoop/hive/small_movie' into table small_movie;

Sample row:    1::Toy Story (1995)::Animation|Children's|Comedy

This task requires splitting the genre field into n rows; the record above, for instance, becomes 3 rows.

step1: Upload movies.dat and the jar to /home/hadoop/soft, then move them into place:

mv /home/hadoop/soft/movies.dat /home/hadoop/hive/

mv /home/hadoop/soft/InputFormat_UDF.jar /home/hadoop/udf/

and, in the Hive shell, add the jar to the classpath:

hive> add jar /home/hadoop/udf/InputFormat_UDF.jar;

Result:

Added [/home/hadoop/udf/InputFormat_UDF.jar] to class path

Added resources: [/home/hadoop/udf/InputFormat_UDF.jar]

step2: Run hive/replace.sh to clean the data

sh /home/hadoop/hive/replace.sh

step3: Create the table in the Hive shell. Note that the field delimiter must match the replacement character chosen when cleaning the data

create table if not exists movie(movieid int, movietitle string, movietype string) row format delimited fields terminated by ':';

step4: Load movies.dat into the movie table

load data local inpath '/home/hadoop/hive/movies.dat' into table movie;

step5: Create a temporary function bound to the class in the jar

create temporary function movie_type_split as 'cn.serenity.udtf.MovieTypeUDTF';

step6: Use the function (a hedged sketch follows)
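How the function is called depends on MovieTypeUDTF's implementation; given the string.split(":") above, a hypothetical invocation that rebuilds the colon-delimited string with concat_ws might look like:

-- sketch only: assumes the UDTF takes one colon-delimited string
-- and emits one (movieid, movietitle, movietype) row per genre
select movie_type_split(concat_ws(':', cast(movieid as string), movietitle, movietype)) from movie;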

2. Using lateral view explode

create table if not exists small_movie(movieId int, moviename string, movietype string) row format delimited fields terminated by ':';

load data local inpath '/home/hadoop/hive/small_movie' into table small_movie;

SELECT movieId, moviename, movie2.type FROM small_movie a LATERAL VIEW explode(split(a.movietype, '\\|')) movie2 AS type;
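From here, task (8) can be completed by joining the exploded genres against a ratings table; a hedged sketch, assuming a rate(movieid, rate, ts, uid) table like the one built in Section I, using row_number for the per-genre top 5:

select movietype, movietitle, avgrate
from (select t.movietype, t.movietitle, t.avgrate,
             row_number() over (partition by t.movietype order by t.avgrate desc) as rn
      from (select g.type as movietype, m.movietitle, avg(r.rate) as avgrate
            from movie m
            lateral view explode(split(m.movietype, '\\|')) g as type
            join rate r on m.movieid = r.movieid
            group by g.type, m.movietitle) t) ranked
where rn <= 5;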

IV. Parsing complex JSON

+-------------------------------------------------------------------+
|                               json                                |
+-------------------------------------------------------------------+
{"store":
  {"fruit":[{"weight":8,"type":"apple"},{"weight":9,"type":"pear"}],
   "bicycle":{"price":19.95,"color":"red"}
  },
 "email":"amy@only_for_json_udf_test.net",
 "owner":"amy"
}
+-------------------------------------------------------------------+

This JSON content can be parsed with the following queries:

hive> SELECT get_json_object(src_json.json, '$.owner') FROM src_json;

amy

hive> SELECT get_json_object(src_json.json, '$.store.fruit[0]') FROM src_json;

{"weight":8,"type":"apple"}

hive> SELECT get_json_object(src_json.json, '$.non_exist_key') FROM src_json;

NULL
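As a related note: when several top-level attributes are needed at once, Hive's built-in json_tuple UDTF parses the document only once instead of once per get_json_object call. A small sketch against the same table (the expected output follows from the data above: amy and amy@only_for_json_udf_test.net):

hive> SELECT t.owner, t.email FROM src_json LATERAL VIEW json_tuple(src_json.json, 'owner', 'email') t AS owner, email;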

Original author: 夙夜M
Original article: https://www.jianshu.com/p/78a60ced7eb8