1. Create a file named hive-site.xml under the conf directory of the Spark installation. It does not need to be copied to the other nodes; a single copy of hive-site.xml on the client is enough.
<?xml version="1.0" encoding="UTF-8" standalone="no"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<configuration>
  <property>
    <name>javax.jdo.option.ConnectionURL</name>
    <value>jdbc:mysql://node1:3306/hive?createDatabaseIfNotExist=true</value>
  </property>
  <property>
    <name>javax.jdo.option.ConnectionDriverName</name>
    <value>com.mysql.jdbc.Driver</value>
  </property>
  <property>
    <name>javax.jdo.option.ConnectionUserName</name>
    <value>root</value>
  </property>
  <property>
    <name>javax.jdo.option.ConnectionPassword</name>
    <value>123456</value>
  </property>
</configuration>
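As written, this client-side hive-site.xml makes the Spark driver connect to the MySQL metastore database directly, which is why step 3 has to put the MySQL JDBC driver on the driver classpath. If you would rather have Spark go through the metastore service started in step 2, a hive.metastore.uris entry like the sketch below can be used on the client instead of the JDBC properties (it assumes the service runs on node1 with Hive's default thrift port 9083):

  <!-- Sketch: point the client at the metastore service from step 2.
       Host and port are assumptions (node1, default port 9083). -->
  <property>
    <name>hive.metastore.uris</name>
    <value>thrift://node1:9083</value>
  </property>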
2. Start the Hive metastore service
hive --service metastore
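This command occupies the terminal's foreground; by default the service listens on thrift port 9083. If you want it to keep running after you move on, a common variant (a sketch, not part of the original steps; the log file name is arbitrary) is:

# Run the metastore in the background and capture its output
nohup hive --service metastore > metastore.log 2>&1 &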
3. When starting spark-shell, specify the location of the MySQL JDBC driver
/usr/local/spark/bin/spark-shell \
--master spark://node1:7077 \
--executor-memory 512m \
--total-executor-cores 1 \
--driver-class-path /usr/local/hive/lib/mysql-connector-java-5.1.32-bin.jar
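--driver-class-path puts the MySQL driver on the Spark driver's classpath, which is where the JDBC connection to the metastore database is made; the jar path above is simply the connector that ships under this Hive installation's lib directory, so adjust it to your environment. To avoid passing the flag on every launch, the equivalent property can be set once in conf/spark-defaults.conf (a sketch, assuming the same jar location):

# conf/spark-defaults.conf -- same effect as --driver-class-path
spark.driver.extraClassPath /usr/local/hive/lib/mysql-connector-java-5.1.32-bin.jar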
4. Create a HiveContext in spark-shell and query Hive
import org.apache.spark.sql.hive.HiveContext
// HiveContext is built on the existing SparkContext (sc) and picks up hive-site.xml from Spark's conf directory
val hc = new HiveContext(sc)
// Run HiveQL against the metastore to verify the connection
hc.sql("show databases").show