Introduction
HDP 2.4 ships with Hadoop 2.7.1 and Spark 1.6. Many readers are interested in trying Spark 2.0 on an HDP 2.4 cluster, and I have run the Spark 2.0 preview in YARN mode on HDP 2.4 myself. The configuration steps are summarized below for anyone who wants to use them as a reference.
1. Environment preparation
- Install HDP 2.4 without the Spark component.
- Download the Spark 2.0 preview build spark-2.0.0-preview-bin-hadoop2.7.tar
2. Configuration
Extract spark-2.0.0-preview-bin-hadoop2.7.tar into the directory /usr/hdp/2.4.0.0-169.
Change into the Spark configuration directory /usr/hdp/2.4.0.0-169/spark-2.0.0-preview-bin-hadoop2.7/conf.
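The two steps above can be run as shell commands. The paths are the ones used in this article; the command assumes the tarball was downloaded to the current directory, so adjust as needed for your cluster:

```shell
# Extract the Spark 2.0 preview build into the HDP stack directory.
tar -xf spark-2.0.0-preview-bin-hadoop2.7.tar -C /usr/hdp/2.4.0.0-169/

# Move into the configuration directory for the following steps.
cd /usr/hdp/2.4.0.0-169/spark-2.0.0-preview-bin-hadoop2.7/conf
```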
Configure hive-site.xml as follows:

<configuration>
  <property>
    <name>hive.metastore.uris</name>
    <value>thrift://url:9083</value>
  </property>
</configuration>

Here thrift://url:9083 is the Hive metastore URI.
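Before going further, it can save time to confirm that the metastore port is actually reachable from the node where Spark will run. A quick probe (replace `url` with your real metastore host, as above):

```shell
# Check that the Hive metastore thrift port accepts connections.
nc -z url 9083 && echo "metastore reachable"
```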
Configure spark-env.sh as follows, adjusting the values to your installation:

export SPARK_CONF_DIR=/usr/hdp/2.4.0.0-169/spark-2.0.0-preview-bin-hadoop2.7/conf

# Where log files are stored. (Default: ${SPARK_HOME}/logs)
#export SPARK_LOG_DIR=${SPARK_HOME:-/usr/hdp/current/spark-historyserver}/logs
export SPARK_LOG_DIR=/var/log/spark

# Where the pid file is stored. (Default: /tmp)
export SPARK_PID_DIR=/var/run/spark

# The scheduling priority for daemons. (Default: 0)
SPARK_NICENESS=0

export HADOOP_HOME=/usr/hdp/current/hadoop-client
export HADOOP_CONF_DIR=/usr/hdp/current/hadoop-client/conf

# The java implementation to use.
export JAVA_HOME=/usr/jdk64/jdk1.8.0_60/

if [ -d "/etc/tez/conf/" ]; then
  export TEZ_CONF_DIR=/etc/tez/conf
else
  export TEZ_CONF_DIR=
fi
Configure spark-defaults.conf as follows, adjusting the values to your environment; spark.yarn.historyServer.address is the address of the Spark history server.

spark.eventLog.dir hdfs:///spark-history
spark.eventLog.enabled true
spark.history.fs.logDirectory hdfs:///spark-history
spark.history.kerberos.keytab none
spark.history.kerberos.principal none
spark.history.provider org.apache.spark.deploy.history.FsHistoryProvider
spark.history.ui.port 18080
spark.yarn.containerLauncherMaxThreads 25
spark.yarn.driver.memoryOverhead 384
spark.yarn.executor.memoryOverhead 384
spark.yarn.historyServer.address ochadoop02.jcloud.local:18080
spark.yarn.max.executor.failures 3
spark.yarn.preserve.staging.files false
spark.yarn.queue default
spark.yarn.scheduler.heartbeat.interval-ms 5000
spark.yarn.submit.file.replication 3
Create the directory /spark-history on HDFS, owned by the user that runs Spark.
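The HDFS directory can be created as follows. The commands assume the HDFS superuser is `hdfs` and that Spark jobs run as the `spark` user in group `hadoop`; substitute your own user and group:

```shell
# Create the Spark event-log directory on HDFS.
sudo -u hdfs hdfs dfs -mkdir -p /spark-history

# Hand ownership to the user that will run Spark jobs.
sudo -u hdfs hdfs dfs -chown spark:hadoop /spark-history
```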
Open Ambari and, in the YARN configuration, disable yarn.timeline-service.enabled. The reason is a dependency conflict: YARN in this stack depends on Jersey 1.9, while Spark 2.0 depends on Jersey 2.x, and Spark 2.0 does not reconcile the two versions.
Open Ambari and, in the MapReduce configuration, change /usr/hdp/${hdp.version}/hadoop/lib/hadoop-lzo-0.6.0.${hdp.version}.jar to /usr/hdp/2.4.0.0-169/hadoop/lib/hadoop-lzo-0.6.0.2.4.0.0-169.jar, so that the LZO jar path no longer relies on the ${hdp.version} substitution.
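With the configuration in place, one way to verify that Spark 2.0 runs on YARN is to submit the bundled SparkPi example. The deploy mode and resource settings below are illustrative, and the examples jar name may differ slightly in the preview build:

```shell
cd /usr/hdp/2.4.0.0-169/spark-2.0.0-preview-bin-hadoop2.7

# Submit the SparkPi example to YARN in client mode.
./bin/spark-submit \
  --class org.apache.spark.examples.SparkPi \
  --master yarn \
  --deploy-mode client \
  --num-executors 2 \
  --executor-memory 1g \
  examples/jars/spark-examples_2.11-2.0.0-preview.jar 10
```

If the job completes and appears in the Spark history server at port 18080, the setup is working.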