Reading Hive from Spark

In Spark 2.0+, SparkSession replaces HiveContext.
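For reference, the Spark 1.x entry point and its 2.x replacement compare roughly as follows (a minimal sketch; `sc` is assumed to be an existing SparkContext, as in spark-shell):

// Spark 1.x: Hive queries went through a dedicated HiveContext
val hiveContext = new org.apache.spark.sql.hive.HiveContext(sc)
hiveContext.sql("show databases").show()

// Spark 2.x: a single SparkSession with Hive support enabled
val spark = org.apache.spark.sql.SparkSession.builder()
  .enableHiveSupport()
  .getOrCreate()
spark.sql("show databases").show()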

1. Add the Maven dependencies

<!-- https://mvnrepository.com/artifact/mysql/mysql-connector-java -->
<dependency>
    <groupId>mysql</groupId>
    <artifactId>mysql-connector-java</artifactId>
    <version>5.1.35</version>
</dependency>

<!-- https://mvnrepository.com/artifact/org.apache.spark/spark-hive -->
<dependency>
    <groupId>org.apache.spark</groupId>
    <artifactId>spark-hive_2.11</artifactId>
    <version>2.1.1</version>
</dependency>

<!-- https://mvnrepository.com/artifact/org.apache.spark/spark-sql -->
<dependency>
    <groupId>org.apache.spark</groupId>
    <artifactId>spark-sql_2.11</artifactId>
    <version>2.1.1</version>
</dependency>

<!-- https://mvnrepository.com/artifact/org.apache.spark/spark-core -->
<dependency>
    <groupId>org.apache.spark</groupId>
    <artifactId>spark-core_2.11</artifactId>
    <version>2.1.1</version>
</dependency>

2. Create the SparkSession

package com.hualala.bi

import java.io.File

import org.apache.spark.sql.SparkSession

/**
  *
  * @author jiaquanyu 
  *
  */
object SparkSqlApp {

  def main(args: Array[String]): Unit = {

    // Example record type (not used directly in this snippet).
    case class Record(key: Int, value: String)

    // Default warehouse location, used when hive-site.xml does not override it.
    val warehouseLocation = new File("spark-warehouse").getAbsolutePath

    val spark = SparkSession
      .builder()
      .appName("Spark SQL on hive")
      .master("spark://192.168.4.4:7077")
      .config("spark.sql.warehouse.dir", warehouseLocation)
      .enableHiveSupport() // required for access to the Hive metastore
      .getOrCreate()

    // Sanity check: list the databases visible through the Hive metastore.
    spark.sql("show databases").collect().foreach(println)
  }
}
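With the session created, Hive tables can be queried like any other DataFrame source. A minimal sketch, assuming a Hive table named test_db.test_table exists (both names are placeholders):

// Read a Hive table into a DataFrame
val df = spark.sql("SELECT * FROM test_db.test_table LIMIT 10")
df.printSchema()
df.show()

// Register it as a temporary view and query it again
df.createOrReplaceTempView("sample")
spark.sql("SELECT count(*) FROM sample").show()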

3. Put hive-site.xml under the project's resources directory (create the directory if it does not exist)
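A minimal hive-site.xml only has to tell Spark where the Hive metastore lives; a sketch, assuming the metastore thrift service runs at metastore-host:9083 (host, port, and warehouse path are placeholders for your environment):

<configuration>
    <property>
        <!-- address of the Hive metastore thrift service -->
        <name>hive.metastore.uris</name>
        <value>thrift://metastore-host:9083</value>
    </property>
    <property>
        <!-- location of the Hive warehouse directory on HDFS -->
        <name>hive.metastore.warehouse.dir</name>
        <value>/user/hive/warehouse</value>
    </property>
</configuration>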

Note: if debugging is needed, it is recommended to do it in spark-shell.
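For example, the same query can be tried interactively first (spark-shell pre-creates the `spark` session; the master URL below is the one from the example above):

$ ./bin/spark-shell --master spark://192.168.4.4:7077
scala> spark.sql("show databases").show()
scala> spark.sql("show tables").show()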

    Original author: 無敵兔八哥
    Original article: https://www.jianshu.com/p/6b5121039e1f