Hive UDTF Custom Functions

Keywords: Hive UDTF development example

Hive can run user-defined functions to process data. You can use `show functions` to list the functions Hive currently supports; the output looks like this:

hive> show functions
    > ;
OK
!
!=
%
&
*
+
-
/

Hive supports three kinds of UDFs:

  • Regular UDFs
    Operate on a single row and produce a single value as output, e.g. the math and string functions.
  • Aggregate UDFs (UDAF)
    Accept multiple rows and produce a single row as output, e.g. COUNT and MAX.
  • Table-generating UDFs (UDTF)
    Accept a single row and produce multiple rows (a table) as output.
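The difference in row cardinality between the three kinds can be sketched in plain Java (hypothetical helper names, no Hive API involved — just an illustration of one-to-one vs. many-to-one vs. one-to-many):

```java
import java.util.Arrays;
import java.util.Collections;
import java.util.List;

public class UdfKinds {
    // UDF-style: one input value -> one output value (like upper())
    static String udfStyle(String s) {
        return s.toUpperCase();
    }

    // UDAF-style: many input rows -> one output value (like max())
    static int udafStyle(List<Integer> rows) {
        return Collections.max(rows);
    }

    // UDTF-style: one input row -> many output rows (like explode())
    static List<String> udtfStyle(String csvRow) {
        return Arrays.asList(csvRow.split(","));
    }

    public static void main(String[] args) {
        System.out.println(udfStyle("abc"));                    // ABC
        System.out.println(udafStyle(Arrays.asList(1, 7, 3)));  // 7
        System.out.println(udtfStyle("a,b,c"));                 // [a, b, c]
    }
}
```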

Implementing a custom UDTF:

Coding:
A UDTF is implemented by extending the abstract class GenericUDTF and implementing the initialize, process, and close methods.

  • initialize implementation:
package com.jd.risk.hive.UDTF;

import org.apache.hadoop.hive.ql.exec.UDFArgumentException;
import org.apache.hadoop.hive.ql.udf.generic.GenericUDTF;
import org.apache.hadoop.hive.serde2.objectinspector.ObjectInspector;
import org.apache.hadoop.hive.serde2.objectinspector.ObjectInspectorFactory;
import org.apache.hadoop.hive.serde2.objectinspector.PrimitiveObjectInspector;
import org.apache.hadoop.hive.serde2.objectinspector.StructObjectInspector;
import org.apache.hadoop.hive.serde2.objectinspector.primitive.PrimitiveObjectInspectorFactory;

import java.util.ArrayList;
import java.util.List;
public class FeatureParseUDTF extends GenericUDTF {

    private PrimitiveObjectInspector stringOI = null;

    @Override
    public StructObjectInspector initialize(ObjectInspector[] objectInspectors) throws UDFArgumentException {

        // validate the arguments
        if (objectInspectors.length != 1) {
            throw new UDFArgumentException("NameParserGenericUDTF() takes exactly one argument");
        }

        if (objectInspectors[0].getCategory() != ObjectInspector.Category.PRIMITIVE
                || ((PrimitiveObjectInspector) objectInspectors[0]).getPrimitiveCategory() != PrimitiveObjectInspector.PrimitiveCategory.STRING) {
            throw new UDFArgumentException("NameParserGenericUDTF() takes a string as a parameter");
        }

        // input
        stringOI = (PrimitiveObjectInspector) objectInspectors[0];

        // output
        List<String> fieldNames = new ArrayList<String>(2);
        List<ObjectInspector> fieldOIs = new ArrayList<ObjectInspector>(2);

        // output column names
        fieldNames.add("name");
        fieldNames.add("value");
        fieldOIs.add(PrimitiveObjectInspectorFactory.javaStringObjectInspector);
        fieldOIs.add(PrimitiveObjectInspectorFactory.javaStringObjectInspector);
        return ObjectInspectorFactory.getStandardStructObjectInspector(fieldNames, fieldOIs);
    }
}

Hive calls initialize to learn the argument types the UDTF expects, and initialize returns an ObjectInspector describing the rows the UDTF will emit. Here it keeps a PrimitiveObjectInspector for reading the input string, and defines the fields (name, value) of the output struct. Note that the type check must use || rather than &&: with &&, a primitive argument would skip the STRING check entirely, and a non-primitive one would fail the cast.

  • process implementation:
    // this snippet additionally requires:
    //   import com.alibaba.fastjson.JSON;
    //   import com.alibaba.fastjson.JSONObject;
    //   import java.util.Iterator;
    //   import org.apache.hadoop.hive.ql.metadata.HiveException;
    @Override
    public void process(Object[] record) throws HiveException {
    
        final String feature = stringOI.getPrimitiveJavaObject(record[0]).toString();
        ArrayList<Object[]> results = parseInputRecord(feature);
        Iterator<Object[]> it = results.iterator();
        while (it.hasNext()){
            Object[] r= it.next();
            forward(r);
        }
    }
    /**
     * Parse a JSON-formatted string into multiple (name, value) rows.
     * @param feature the JSON string to parse
     * @return a list of two-element rows: {name, value}
     */
    public ArrayList<Object[]> parseInputRecord(String feature){
        // initialize before the try block so a parse failure still returns an empty list
        // instead of null (which would NPE in process)
        ArrayList<Object[]> resultList = new ArrayList<Object[]>();
        try {
            JSONObject json = JSON.parseObject(feature);
            for (String nameSpace : json.keySet()) {
                JSONObject dimensionJson = json.getJSONObject(nameSpace);
                for (String dimensionName : dimensionJson.keySet()) {
                    JSONObject featureJson = dimensionJson.getJSONObject(dimensionName);
                    for (String featureName : featureJson.keySet()) {
                        String property_name = nameSpace + ":" + dimensionName + ":" + featureName;
                        Object[] item = new Object[2];
                        item[0] = property_name;
                        item[1] = featureJson.get(featureName);
                        resultList.add(item);
                    }
                }
            }
        } catch (Exception e) {
            e.printStackTrace();
        }
        return resultList;
    }

The process method implements the actual parsing. It reads the input field through stringOI, then uses parseInputRecord to split the JSON string into multiple rows, returned as a list — this is the one-row-to-many-rows expansion. Finally, forward emits each row as UDTF output.
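The flattening that parseInputRecord performs — turning a namespace → dimension → feature nested JSON object into `namespace:dimension:feature` / value rows — can be sketched without the fastjson or Hive dependencies by using nested maps as a stand-in for the JSON object. This is a simplified illustration, not the UDTF code itself:

```java
import java.util.ArrayList;
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;

public class FeatureFlatten {
    // Flatten a three-level namespace -> dimension -> feature map into
    // (name, value) rows, mirroring parseInputRecord's triple loop.
    static List<String[]> flatten(Map<String, Map<String, Map<String, String>>> json) {
        List<String[]> rows = new ArrayList<>();
        for (Map.Entry<String, Map<String, Map<String, String>>> ns : json.entrySet()) {
            for (Map.Entry<String, Map<String, String>> dim : ns.getValue().entrySet()) {
                for (Map.Entry<String, String> feat : dim.getValue().entrySet()) {
                    String name = ns.getKey() + ":" + dim.getKey() + ":" + feat.getKey();
                    rows.add(new String[]{name, feat.getValue()});
                }
            }
        }
        return rows;
    }

    public static void main(String[] args) {
        // same shape as the article's sample row:
        // {"rcm": {"ordering_date": {"feature1": "0", "feature2": "1"}}}
        Map<String, String> features = new LinkedHashMap<>();
        features.put("feature1", "0");
        features.put("feature2", "1");
        Map<String, Map<String, String>> dims = new LinkedHashMap<>();
        dims.put("ordering_date", features);
        Map<String, Map<String, Map<String, String>>> json = new LinkedHashMap<>();
        json.put("rcm", dims);

        for (String[] row : flatten(json)) {
            System.out.println(row[0] + "\t" + row[1]);
        }
        // prints:
        // rcm:ordering_date:feature1	0
        // rcm:ordering_date:feature2	1
    }
}
```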

  • close implementation:
    @Override
    public void close() throws HiveException {
        // nothing to clean up
    }
  • Maven dependencies:
<?xml version="1.0" encoding="UTF-8"?>
<project xmlns="http://maven.apache.org/POM/4.0.0"
         xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
         xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
    <modelVersion>4.0.0</modelVersion>

    <groupId>com.jd.udf</groupId>
    <artifactId>featureParse</artifactId>
    <version>1.0-SNAPSHOT</version>


    <dependencies>

        <dependency>
            <groupId>org.apache.hive</groupId>
            <artifactId>hive-exec</artifactId>
            <version>0.12.0</version>
            <scope>provided</scope>
        </dependency>
        <!-- JSON -->
        <dependency>
            <groupId>com.alibaba</groupId>
            <artifactId>fastjson</artifactId>
            <version>1.1.31</version>
        </dependency>
    </dependencies>

    <build>
        <plugins>
            <plugin>
                <artifactId>maven-assembly-plugin</artifactId>
                <configuration>
                    <descriptorRefs>
                        <descriptorRef>jar-with-dependencies</descriptorRef>
                    </descriptorRefs>
                </configuration>
            </plugin>
        </plugins>
    </build>

</project>

Packaging command:

mvn assembly:assembly

Using the UDTF:
Without the UDTF, the feature column comes back as a single raw JSON string:

hive> select features from table1 where dt = '2017-07-18'
OK
{"rcm": {"ordering_date": {"feature1": "0","feature2": "1","feature3": "2"}}}
Time taken: 505.014 seconds, Fetched: 1 row(s)
hive>   

With the UDTF, the same row is expanded into multiple (name, value) rows:

hive> select featureParseUDTF(features)from table1 where dt = '2017-07-18'
OK
rcm:ordering_date:feature3 2
rcm:ordering_date:feature2 1
rcm:ordering_date:feature1 0
Time taken: 505.014 seconds, Fetched: 3 row(s)
hive>   

Registering the featureParseUDTF function (this must be done before the query above will run):

hive> add jar  /home/udtf/featureParse-1.0-SNAPSHOT-jar-with-dependencies.jar
    > ;
Added [/home/udtf/featureParse-1.0-SNAPSHOT-jar-with-dependencies.jar] to class path
Added resources: [/home/udtf/featureParse-1.0-SNAPSHOT-jar-with-dependencies.jar]
hive>  Create  temporary function featureParseUDTF as 'com.jd.risk.hive.UDTF.FeatureParseUDTF';
OK
Time taken: 0.024 seconds
hive> select featureParseUDTF(features)from table1 where dt = '2017-07-18'
OK
rcm:ordering_date:feature3 2
rcm:ordering_date:feature2 1
rcm:ordering_date:feature1 0
Time taken: 505.014 seconds, Fetched: 3 row(s)


    Original author: 认真的等待觉醒
    Original source: https://www.jianshu.com/p/ac352ceab9cd