1. Installation
1.1 The download page offers two binary packages:
- zeppelin-0.7.3-bin-netinst.tgz: includes only the Spark interpreter by default
- zeppelin-0.7.3-bin-all.tgz: includes the full set of interpreters (MySQL, Elasticsearch, and so on)
Pick whichever package fits your use case; an example download command is shown below.
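For example, the full package can be fetched from the Apache archive (the URL is assumed to follow the standard archive layout for the 0.7.3 release):
wget https://archive.apache.org/dist/zeppelin/zeppelin-0.7.3/zeppelin-0.7.3-bin-all.tgz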
1.2 Extract the archive
tar -zxvf zeppelin-0.7.3-bin-all.tgz -C /opt/
In the conf directory, rename zeppelin-site.xml.template to zeppelin-site.xml and zeppelin-env.sh.template to zeppelin-env.sh. Zeppelin listens on port 8080 by default; if that port is already in use, change the port number in zeppelin-site.xml.
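A sketch of the rename and the port change (zeppelin.server.port is the standard property name; 8081 below is only an example value):
cd /opt/zeppelin-0.7.3-bin-all/conf
cp zeppelin-site.xml.template zeppelin-site.xml
cp zeppelin-env.sh.template zeppelin-env.sh
Then edit zeppelin-site.xml:
<property>
  <name>zeppelin.server.port</name>
  <value>8081</value>
</property>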
2. zeppelin-env.sh configuration
export JAVA_HOME=/usr/java/jdk1.8.0_131
export SPARK_HOME=/opt/cloudera/parcels/CDH-5.13.3-1.cdh5.13.3.p0.2/lib/spark
export HBASE_HOME=/opt/cloudera/parcels/CDH-5.13.3-1.cdh5.13.3.p0.2/lib/hbase
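Depending on the cluster, zeppelin-env.sh also accepts a few related variables; the values below are assumptions for a typical CDH/YARN setup and are not taken from the original:
export HADOOP_CONF_DIR=/etc/hadoop/conf
export MASTER=yarn-client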
3. Spark configuration
Copy spark-*, hadoop-lzo*.jar, and any other jars the Spark interpreter may depend on into zeppelin-0.7.3-bin-all/interpreter/spark/dep
[root@hadoop1 zeppelin-0.7.3-bin-all]# cp /opt/cloudera/parcels/CDH-5.13.3-1.cdh5.13.3.p0.2/jars/spark* interpreter/spark/dep/
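If the cluster uses LZO compression, the hadoop-lzo jar needs to be copied as well; the parcel path below is an assumption (in CDH 5 it typically ships in the GPLEXTRAS parcel):
cp /opt/cloudera/parcels/GPLEXTRAS-*/lib/hadoop/lib/hadoop-lzo*.jar interpreter/spark/dep/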
4. Hive configuration
Copy hive*.jar and the other Hive dependency jars into zeppelin-0.7.3-bin-all/interpreter/jdbc
[root@hadoop1 zeppelin-0.7.3-bin-all]# cp /opt/cloudera/parcels/CDH-5.13.3-1.cdh5.13.3.p0.2/lib/hive/lib/hive-* /opt/zeppelin-0.7.3-bin-all/interpreter/jdbc/
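With the jars in place, Hive is queried through the %jdbc interpreter. In the interpreter settings, a prefixed group of properties can be added, roughly as sketched below (the host hadoop1 and port 10000 are assumptions for a default HiveServer2):
hive.driver    org.apache.hive.jdbc.HiveDriver
hive.url       jdbc:hive2://hadoop1:10000
hive.user      hive
hive.password
A notebook paragraph then starts with %jdbc(hive).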
5. HBase configuration
Copy hbase-site.xml into zeppelin-0.7.3-bin-all/conf; do not copy core-site.xml or hdfs-site.xml.
Copy the HBase dependency jars into zeppelin-0.7.3-bin-all/interpreter/hbase, deleting the jars that were bundled there (see the sketch below).
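Removing the bundled jars first might look like this (a sketch, run from the Zeppelin home directory before the copy commands below):
rm -f interpreter/hbase/hbase-*.jar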
[root@hadoop1 zeppelin-0.7.3-bin-all]# cp /opt/cm-5.13.3/run/cloudera-scm-agent/process/377-hbase-MASTER/hbase-site.xml conf/
[root@hadoop1 zeppelin-0.7.3-bin-all]# cp /opt/cloudera/parcels/CDH-5.13.3-1.cdh5.13.3.p0.2/lib/hbase/hbase-* interpreter/hbase/
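After copying, restart Zeppelin and re-test each interpreter; zeppelin-daemon.sh is the standard control script shipped in bin/:
bin/zeppelin-daemon.sh restart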