Building a Big Data Processing Cluster with Docker (2): Integrating HBase and ZooKeeper

Preface

We collected some unstructured data with a crawler framework and now need to store it in HBase, so HBase is integrated into the existing Spark cluster.
For the Spark cluster itself, see my previous article: Building a Big Data Processing Cluster with Docker (1): HDFS and Spark.

I. Preparation

Our Docker containers use the default bridge network, so container instances on the same host can communicate with each other.
Building the HBase cluster requires reworking the original Docker image: HBase and ZooKeeper are added and the image is rebuilt.
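
Once the three containers from Part II are running, a quick sanity check of which IP address each container got on the default bridge network can be done with the standard Docker commands below (the container name master is created in Part II):

# List the containers attached to the default bridge network, with their IPs
docker network inspect bridge
# Or query a single container's address directly
docker inspect -f '{{ .NetworkSettings.IPAddress }}' master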

Software environment

| Software | Version |
| --- | --- |
| CentOS | CentOS 7.0 |
| Docker | Docker 17.03.1-ce |
| JDK | jdk-8u101-linux-x64 |
| Hadoop | hadoop-2.7.3 |
| Spark | spark-2.0.1-bin-hadoop2.7 |
| Scala | scala-2.11.8 |
| HBase | hbase-1.2.5 |
| ZooKeeper | zookeeper-3.4.10 |

Cluster design

The number of nodes is the same as before: three containers stand in for three nodes, and the HBase cluster is deployed across them.

| Hostname | IP address | Installed software | JPS processes |
| --- | :---: | :---: | :---: |
| master | 172.17.0.4 | JDK/Scala/Spark/Zookeeper/Hadoop/HBase | NameNode/ResourceManager/QuorumPeerMain/HMaster/Master/SecondaryNameNode |
| slave01 | 172.17.0.5 | JDK/Scala/Spark/Zookeeper/Hadoop/HBase | DataNode/NodeManager/QuorumPeerMain/HRegionServer/Worker |
| slave02 | 172.17.0.6 | JDK/Scala/Spark/Zookeeper/Hadoop/HBase | DataNode/NodeManager/QuorumPeerMain/HRegionServer/Worker |

Port allocation

| Name | Port |
| --- | --- |
| Hadoop NameNode Web UI | 50070 |
| Hadoop DataNode Web UI | 50075 |
| Hadoop SecondaryNameNode Web UI | 9001 |
| HDFS | 9000 |
| YARN Web UI | 8088 |
| Spark Web UI | 8091 |
| HBase Web UI | 16010 |
| ZooKeeper | 2888 / 3888 / 4181 |
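
Because the containers are started with -P plus explicit -p mappings (see Part II), the host-side bindings of these ports can be checked once the master container is up; this is only a convenience check:

# Show how the master container's ports are published on the host
docker port master
# Check a single port, e.g. the HBase web UI
docker port master 16010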

II. Cluster Setup

Docker image configuration

The image is reconfigured as follows; details are in the Dockerfile comments:

# Dockerfile for building the Hadoop / YARN / Spark (+ HBase, ZooKeeper) image
# Author: ywq

# Built on top of the centos7-ssh image
FROM centos7-ssh

# Keep the node clocks in sync via NTP
RUN yum install -y ntp
#RUN systemctl is-enabled ntpd
# The container must be granted --privileged at run time for systemctl to work
#RUN systemctl enable ntpd
#RUN systemctl start ntpd
# Sync the container's time zone with the host
RUN cp /usr/share/zoneinfo/Asia/Shanghai /etc/localtime && echo "Asia/Shanghai" > /etc/timezone

# Create the spark user
RUN useradd spark
RUN echo "spark:12345678" | chpasswd

# Raise the ulimit limits required by HBase
RUN echo "spark  -      nofile  32768 " >> /etc/security/limits.conf
RUN echo "spark  -      nproc   32000" >>  /etc/security/limits.conf
RUN echo "session required pam_limits.so" >>  /etc/pam.d/common-session 

# Install the JDK
ADD jdk-8u101-linux-x64.tar.gz /usr/local/
RUN mv /usr/local/jdk1.8.0_101 /usr/local/jdk1.8

# Java environment variables
ENV JAVA_HOME /usr/local/jdk1.8
ENV PATH $JAVA_HOME/bin:$PATH


# Install Hadoop
ADD hadoop-2.7.3.tar.gz /usr/local
RUN mv /usr/local/hadoop-2.7.3 /usr/local/hadoop

# Hadoop environment variables
ENV HADOOP_HOME /usr/local/hadoop
ENV PATH $HADOOP_HOME/bin:$HADOOP_HOME/sbin:$PATH

# Install Scala; note that Spark 2.0.1 requires Scala 2.11.x
ADD scala-2.11.8.tgz /usr/local
RUN mv /usr/local/scala-2.11.8 /usr/local/scala2.11.8

# Scala environment variables
ENV SCALA_HOME /usr/local/scala2.11.8
ENV PATH $SCALA_HOME/bin:$PATH

# Install Spark
ADD spark-2.0.1-bin-hadoop2.7.tgz /usr/local
RUN mv /usr/local/spark-2.0.1-bin-hadoop2.7 /usr/local/spark2.0.1

# Spark environment variables
ENV SPARK_HOME /usr/local/spark2.0.1
ENV PATH $SPARK_HOME/bin:$PATH

# Install ZooKeeper
ADD zookeeper-3.4.10.tar.gz /usr/local
RUN mv /usr/local/zookeeper-3.4.10 /usr/local/zookeeper3.4.10

# ZooKeeper environment variables
ENV ZOOKEEPER_HOME /usr/local/zookeeper3.4.10
ENV PATH $ZOOKEEPER_HOME/bin:$PATH

# Install HBase
ADD hbase-1.2.5-bin.tar.gz /usr/local
RUN mv /usr/local/hbase-1.2.5 /usr/local/hbase1.2.5

# HBase environment variables
ENV HBASE_HOME /usr/local/hbase1.2.5
ENV PATH $HBASE_HOME/bin:$PATH


# Big data configuration files: HDFS, Spark, ZooKeeper, and HBase
ADD conf/hdfs_conf/core-site.xml $HADOOP_HOME/etc/hadoop/core-site.xml
ADD conf/hdfs_conf/hdfs-site.xml $HADOOP_HOME/etc/hadoop/hdfs-site.xml
ADD conf/hdfs_conf/mapred-site.xml $HADOOP_HOME/etc/hadoop/mapred-site.xml
ADD conf/hdfs_conf/yarn-site.xml $HADOOP_HOME/etc/hadoop/yarn-site.xml
ADD conf/hdfs_conf/slaves $HADOOP_HOME/etc/hadoop/slaves

ADD conf/spark_conf/spark-env.sh $SPARK_HOME/conf/spark-env.sh
ADD conf/spark_conf/slaves $SPARK_HOME/conf/slaves

ADD conf/zookeeper_conf/zoo.cfg $ZOOKEEPER_HOME/conf/zoo.cfg

ADD conf/hbase_conf/hbase-site.xml $HBASE_HOME/conf/hbase-site.xml
ADD conf/hbase_conf/regionservers $HBASE_HOME/conf/regionservers

RUN echo "export JAVA_HOME=/usr/local/jdk1.8" >> $HBASE_HOME/conf/hbase-env.sh
RUN echo "export HBASE_MANAGES_ZK=false" >> $HBASE_HOME/conf/hbase-env.sh

RUN echo "export JAVA_HOME=/usr/local/jdk1.8" >> $HADOOP_HOME/etc/hadoop/hadoop-env.sh


# Change ownership of the Hadoop, Spark, HBase, and ZooKeeper directories to the spark user
RUN chown -R spark:spark /usr/local/hadoop
RUN chown -R spark:spark /usr/local/spark2.0.1
RUN chown -R spark:spark /usr/local/hbase1.2.5
RUN chown -R spark:spark /usr/local/zookeeper3.4.10

RUN yum install -y which sudo

The Hadoop and Spark configuration files are unchanged from the previous article; the HBase and ZooKeeper configuration files are as follows.
hbase-env.sh configuration:

export JAVA_HOME=/usr/local/jdk1.8
export HBASE_MANAGES_ZK=false

hbase-site.xml configuration:

<configuration>
    <property>
        <name>hbase.cluster.distributed</name>
        <value>true</value>
    </property>
    <property>
        <name>hbase.rootdir</name>
        <value>hdfs://master:9000/hbase</value>
    </property>
    <property>
        <name>hbase.zookeeper.quorum</name>
        <value>master,slave01,slave02</value>
    </property>
</configuration>

zoo.cfg configuration:

# The number of milliseconds of each tick
# Basic time unit (ms) used by ZooKeeper
tickTime=2000
# The number of ticks that the initial
# synchronization phase can take
# Ticks allowed for followers to connect and sync with the leader
initLimit=10
# The number of ticks that can pass between
# sending a request and getting an acknowledgement
# Ticks allowed between sending a request and receiving its acknowledgement
syncLimit=5
# the directory where the snapshot is stored.
# do not use /tmp for storage, /tmp here is just
# example sakes.
# Directory where ZooKeeper stores its data and logs
dataDir=/usr/local/zookeeper3.4.10/data
# the port at which the clients will connect
# Port on which clients connect to ZooKeeper
clientPort=4181
# the maximum number of client connections.
# increase this if you need to handle more clients
#maxClientCnxns=60

server.1=master:2888:3888
server.2=slave01:2888:3888
server.3=slave02:2888:3888
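
With the Dockerfile and the conf/ directory in place, the image can be rebuilt; the tag bigdata-cluster below is the image name used by the docker run commands in the next step:

# Build the image from the directory containing the Dockerfile, conf/, and the software tarballs
docker build -t bigdata-cluster .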

Create the three node containers

# Map the master container's service ports to the host
docker run --privileged -d -P -p 50070:50070 -p 50075:50075 -p 8088:8088 -p 8091:8091 -p 16010:16010 -p 4181:4181 --name master -h master --add-host slave01:172.17.0.5 --add-host slave02:172.17.0.6 bigdata-cluster
docker run --privileged -d -P  --name slave01 -h slave01 --add-host master:172.17.0.4 --add-host slave02:172.17.0.6 bigdata-cluster
docker run --privileged -d -P  --name slave02 -h slave02 --add-host master:172.17.0.4 --add-host slave01:172.17.0.5  bigdata-cluster
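
The remaining steps are run inside the containers; a shell can be opened in each one with docker exec, switching to the spark user created in the Dockerfile (repeat for slave01 and slave02):

docker exec -it master /bin/bash
su - spark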

Open a terminal in each container and configure passwordless SSH

# Generate an SSH key pair for the spark user; just press Enter at every prompt
ssh-keygen
# Copy the public key to each node
ssh-copy-id -i /home/spark/.ssh/id_rsa -p 22 spark@master
ssh-copy-id -i /home/spark/.ssh/id_rsa -p 22 spark@slave01
ssh-copy-id -i /home/spark/.ssh/id_rsa -p 22 spark@slave02
# Verify that passwordless login works
ssh slave01

Create the ZooKeeper myid files

  • On master, create a myid file containing 1
# mkdir -p /usr/local/zookeeper3.4.10/data
# echo "1" > /usr/local/zookeeper3.4.10/data/myid
  • On slave01, create a myid file containing 2
# mkdir -p /usr/local/zookeeper3.4.10/data
# echo "2" > /usr/local/zookeeper3.4.10/data/myid
  • On slave02, create a myid file containing 3
# mkdir -p /usr/local/zookeeper3.4.10/data
# echo "3" > /usr/local/zookeeper3.4.10/data/myid

III. Starting the Cluster

ZooKeeper must be started in every container:

zkServer.sh start
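
After ZooKeeper has been started on all three nodes, each node's role can be verified; one node should report leader and the other two follower:

zkServer.sh status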

On master, start the HDFS, YARN, HBase, and Spark clusters:

# Format HDFS before the first start
hdfs namenode -format
start-dfs.sh
# Start YARN
start-yarn.sh
# Start HBase
start-hbase.sh
# Start Spark
/usr/local/spark2.0.1/sbin/start-all.sh

Check the processes on master

jps
608 SecondaryNameNode
1586 Jps
803 ResourceManager
405 NameNode
1496 Master
2063 HMaster
1114 QuorumPeerMain

Check the processes on slave01

jps
288 DataNode
865 Jps
418 NodeManager
551 QuorumPeerMain
1203 HRegionServer
791 Worker
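
With all daemons up, a quick smoke test from the HBase shell confirms that HBase can reach ZooKeeper and write to HDFS; the table name test and column family cf below are just examples:

# Open the HBase shell on master
hbase shell
# Inside the shell: check cluster status and do a tiny read/write test
status
create 'test', 'cf'
put 'test', 'row1', 'cf:a', 'value1'
scan 'test'
exit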

Troubleshooting

  • After starting HBase, the web UI was unreachable; the log file showed:
[main-SendThread(master:2181)] zookeeper.ClientCnxn: Opening socket connection to server master/172.17.0.5:2181. Will not attempt to authenticate using SASL (unknown error)
2017-05-18 15:26:41,006 WARN  [main-SendThread(master:2181)] zookeeper.ClientCnxn: Session 0x0 for server null, unexpected error, closing socket connection and attempting reconnect

Solution:
The ZooKeeper client port was changed from the default 2181 (to 4181 in zoo.cfg), but HBase still tries to connect to 2181, so it must be told the new port (see the snippet below).
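
A minimal fix, given that zoo.cfg above sets clientPort=4181 instead of the default 2181, is to add the matching property to hbase-site.xml (alternatively, keep ZooKeeper on 2181):

<property>
    <name>hbase.zookeeper.property.clientPort</name>
    <value>4181</value>
</property>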

Summary

  • With the current approach every container has to be started by hand and the IP addresses are not fixed; next I plan to use Docker Machine, Docker Compose, and Docker Swarm to solve inter-node networking.
  • I also plan to integrate Ambari (a cluster management and monitoring tool).
  • The size of a ZooKeeper ensemble (usually an odd number of nodes) has little to do with the number of nodes in the distributed system it serves, and it can be deployed independently; a relatively small ensemble (say, five nodes) is typically enough to run a large HBase cluster of 50 nodes.


Original author: IIGEOywq
Original article: https://www.jianshu.com/p/a1524dccb1e4