This post records the problems I ran into while building a Docker image that supports HDFS, and how I solved them.
The Dockerfile I wrote is below, for reference:
# Creates pseudo-distributed Hadoop 2.7.2
#
# docker build -t hadoop-docker:2.7.0 .
FROM localhost:5000/my-centos
MAINTAINER xzp
USER root
# install dev tools
RUN yum clean all && yum update -y
RUN rpm --rebuilddb
RUN yum install -y curl which tar sudo openssh-server openssh-clients rsync wget
# update libselinux. see https://github.com/sequenceiq/hadoop-docker/issues/14
RUN yum update -y libselinux
# passwordless ssh
RUN ssh-keygen -q -N "" -t dsa -f /etc/ssh/ssh_host_dsa_key
RUN ssh-keygen -q -N "" -t rsa -f /etc/ssh/ssh_host_rsa_key
RUN ssh-keygen -q -N "" -t rsa -f /root/.ssh/id_rsa
RUN cp /root/.ssh/id_rsa.pub /root/.ssh/authorized_keys
# java
# Downloading at build time is too slow, so as a shortcut a local copy of the JDK is added instead:
#RUN curl -LO 'http://download.oracle.com/otn-pub/java/jdk/7u71-b14/jdk-7u71-linux-x64.rpm' -H 'Cookie: oraclelicense=accept-securebackup-cookie'
ADD jdk-7u71-linux-x64.rpm /root/
RUN chmod +x /root/jdk-7u71-linux-x64.rpm
RUN rpm -i /root/jdk-7u71-linux-x64.rpm
RUN rm /root/jdk-7u71-linux-x64.rpm
ENV JAVA_HOME /usr/java/default
ENV PATH $PATH:$JAVA_HOME/bin
RUN rm /usr/bin/java && ln -s $JAVA_HOME/bin/java /usr/bin/java
# hadoop
# Same trick as above: use a local copy of the tarball instead of downloading
#RUN curl -s http://www.eu.apache.org/dist/hadoop/common/hadoop-2.7.1/hadoop-2.7.1.tar.gz | tar -xz -C /usr/local/
COPY hadoop-2.7.2.tar.gz /root/
RUN tar -xzvf /root/hadoop-2.7.2.tar.gz -C /usr/local
RUN cd /usr/local && ln -s ./hadoop-2.7.2 hadoop
ENV HADOOP_PREFIX /usr/local/hadoop
ENV HADOOP_COMMON_HOME /usr/local/hadoop
ENV HADOOP_HDFS_HOME /usr/local/hadoop
ENV HADOOP_MAPRED_HOME /usr/local/hadoop
ENV HADOOP_YARN_HOME /usr/local/hadoop
ENV HADOOP_CONF_DIR /usr/local/hadoop/etc/hadoop
ENV YARN_CONF_DIR $HADOOP_PREFIX/etc/hadoop
RUN mkdir $HADOOP_PREFIX/input
RUN cp $HADOOP_PREFIX/etc/hadoop/*.xml $HADOOP_PREFIX/input
RUN sed -i '/^export JAVA_HOME/ s:.*:export JAVA_HOME=/usr/java/default\nexport HADOOP_PREFIX=/usr/local/hadoop\nexport HADOOP_HOME=/usr/local/hadoop\n:' $HADOOP_PREFIX/etc/hadoop/hadoop-env.sh
RUN sed -i '/^export HADOOP_CONF_DIR/ s:.*:export HADOOP_CONF_DIR=/usr/local/hadoop/etc/hadoop/:' $HADOOP_PREFIX/etc/hadoop/hadoop-env.sh
# pseudo distributed
ADD core-site.xml.template $HADOOP_PREFIX/etc/hadoop/core-site.xml.template
RUN sed s/HOSTNAME/localhost/ /usr/local/hadoop/etc/hadoop/core-site.xml.template > /usr/local/hadoop/etc/hadoop/core-site.xml
ADD hdfs-site.xml $HADOOP_PREFIX/etc/hadoop/hdfs-site.xml
ADD mapred-site.xml $HADOOP_PREFIX/etc/hadoop/mapred-site.xml
ADD yarn-site.xml $HADOOP_PREFIX/etc/hadoop/yarn-site.xml
RUN $HADOOP_PREFIX/bin/hdfs namenode -format
# fixing the libhadoop.so like a boss
#RUN rm -rf /usr/local/hadoop/lib/native
#RUN mv /tmp/native /usr/local/hadoop/lib
ADD ssh_config /root/.ssh/config
RUN chmod 600 /root/.ssh/config
RUN chown root:root /root/.ssh/config
# ADD supervisord.conf /etc/supervisord.conf
ADD bootstrap.sh /etc/bootstrap.sh
RUN chown root:root /etc/bootstrap.sh
RUN chmod 700 /etc/bootstrap.sh
ENV BOOTSTRAP /etc/bootstrap.sh
# working around a docker.io build error
RUN ls -la /usr/local/hadoop/etc/hadoop/*-env.sh
RUN chmod +x /usr/local/hadoop/etc/hadoop/*-env.sh
RUN ls -la /usr/local/hadoop/etc/hadoop/*-env.sh
# fix the 254 error code
RUN sed -i "/^[^#]*UsePAM/ s/.*/#&/" /etc/ssh/sshd_config
RUN echo "UsePAM no" >> /etc/ssh/sshd_config
RUN echo "Port 2122" >> /etc/ssh/sshd_config
RUN mkdir -p /root/data
RUN mkdir -p /root/name
#RUN . $HADOOP_PREFIX/etc/hadoop/hadoop-env.sh
#RUN $HADOOP_PREFIX/etc/hadoop/hadoop-env.sh && $HADOOP_PREFIX/sbin/start-dfs.sh && $HADOOP_PREFIX/bin/hdfs dfs -mkdir -p /user/root
#RUN $HADOOP_PREFIX/etc/hadoop/hadoop-env.sh && $HADOOP_PREFIX/sbin/start-dfs.sh && $HADOOP_PREFIX/bin/hdfs dfs -put $HADOOP_PREFIX/etc/hadoop/ input
#CMD ["/etc/bootstrap.sh", "-d"]
#CMD ["/usr/sbin/sshd","-d"]
# exported ports
# HDFS ports
EXPOSE 50010 50020 50070 50075 50090 8020 9000
# Mapred ports
EXPOSE 10020 19888
# YARN ports
EXPOSE 8030 8031 8032 8033 8040 8042 8088
# Other ports
EXPOSE 49707 2122
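The Dockerfile ADDs several config files from the build context that are not shown above. For completeness, here are sketches of what they could contain; the core-site.xml.template placeholder and port follow from the sed substitution and the localhost:9000 error discussed below, the hdfs-site.xml paths from the mkdir -p /root/name and /root/data steps, and the ssh_config port from the Port 2122 line appended to sshd_config. Treat these as assumptions, not verbatim copies:
-----core-site.xml.template--------------
<configuration>
  <property>
    <!-- HOSTNAME is rewritten by sed at build time and again in bootstrap.sh -->
    <name>fs.defaultFS</name>
    <value>hdfs://HOSTNAME:9000</value>
  </property>
</configuration>
-----hdfs-site.xml--------------
<configuration>
  <property>
    <!-- pseudo-distributed: a single replica is enough -->
    <name>dfs.replication</name>
    <value>1</value>
  </property>
  <property>
    <name>dfs.namenode.name.dir</name>
    <value>file:///root/name</value>
  </property>
  <property>
    <name>dfs.datanode.data.dir</name>
    <value>file:///root/data</value>
  </property>
</configuration>
-----ssh_config--------------
Host *
  UserKnownHostsFile /dev/null
  StrictHostKeyChecking no
  LogLevel quiet
  Port 2122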
After running docker build -t hadoop-docker:2.7.0 . (the tag must match what docker run uses below), check the images:
[root@nsfocus hadoop-docker-test]# docker images
REPOSITORY TAG IMAGE ID CREATED SIZE
hadoop-docker 2.7.0 c1f7c8db54bf 43 minutes ago 1.42 GB
test latest 9fb729a6501a About an hour ago 761.2 MB
localhost:5000/dpdk-centos 1.0.1 d8dffc0f4791 4 hours ago 312.7 MB
[root@nsfocus hadoop-docker-test]# docker run -it hadoop-docker:2.7.0 /etc/bootstrap.sh -bash
[root@352cf3c2d73f /]# jps
666 Jps
260 DataNode
447 SecondaryNameNode
126 NameNode
OK, all done: the NameNode, DataNode, and SecondaryNameNode are all up.
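Beyond jps, a quick smoke test shows that HDFS actually accepts reads and writes (run inside the container; the paths assume the image built above):
cd /usr/local/hadoop
bin/hdfs dfs -mkdir -p /user/root
bin/hdfs dfs -put etc/hadoop/core-site.xml /user/root/
bin/hdfs dfs -ls /user/root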
A record of the problems encountered and their fixes:
- The service command is not found inside the container.
  Fix: RUN yum -y install initscripts
- Call From e45cc3c2b295/172.17.0.6 to localhost:9000 failed on connection exception: java.net.ConnectException: Connection refused;
  Fix: sshd was not running and has to be started: /usr/sbin/sshd -D &
  sshd must be up before the HDFS daemons, because start-dfs.sh logs in over ssh to launch each of them. Newer CentOS images do not support running sshd as a systemd service inside Docker, so the simplest approach is to start the daemon directly as the last setup step.
- WARN hdfs.DFSUtil: Namenode for null remains unresolved for ID null. Check your hdfs-site.xml file to ensure namenodes are configured properly.
  Fix: make the settings in Hadoop's core-site.xml match the container's actual hostname (a quick check for this is sketched at the end of this post).
- How to start sshd: write a startup script and pass it to docker run:
-----/etc/bootstrap.sh--------------
#!/bin/bash
: ${HADOOP_PREFIX:=/usr/local/hadoop}
# source hadoop-env.sh so its exports take effect in this shell
. $HADOOP_PREFIX/etc/hadoop/hadoop-env.sh
# remove stale pid files left over from a previous run (-f: no error if none exist)
rm -f /tmp/*.pid
# altering the core-site configuration
sed s/HOSTNAME/$HOSTNAME/ /usr/local/hadoop/etc/hadoop/core-site.xml.template > /usr/local/hadoop/etc/hadoop/core-site.xml
#service sshd start
/usr/sbin/sshd -D &
$HADOOP_PREFIX/sbin/start-dfs.sh
#$HADOOP_PREFIX/sbin/start-yarn.sh
#$HADOOP_PREFIX/sbin/mr-jobhistory-daemon.sh start historyserver
if [[ $1 == "-d" ]]; then
  # daemon mode: keep the container alive
  while true; do sleep 1000; done
fi
if [[ $1 == "-bash" ]]; then
  # interactive mode: drop into a shell
  /bin/bash
fi
Then point docker run at the script:
docker run -it XXX /etc/bootstrap.sh -bash
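Finally, a few one-liners that are handy inside a running container to double-check the fixes from the list above (availability of the tools depends on the base image):
pgrep -l sshd                    # sshd must be up before start-dfs.sh runs
hostname                         # must match the host in fs.defaultFS
grep -A1 fs.defaultFS /usr/local/hadoop/etc/hadoop/core-site.xml
service --status-all             # the service command exists once initscripts is installed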