node01 | node02 | node03 | node04 |
---|---|---|---|
NameNode01 | NameNode02 | NameNode03 | |
 | DataNode01 | DataNode02 | DataNode03 |
JournalNode01 | JournalNode02 | JournalNode03 | |
 | ZooKeeper01 | ZooKeeper02 | ZooKeeper03 |
ZooKeeperFailoverController01 | ZooKeeperFailoverController02 | ZooKeeperFailoverController03 | |
- Configure Hadoop on node01, node02, node03, and node04
On node01, edit /opt/hadoop/hadoop-3.1.1/etc/hadoop/core-site.xml:
vim /opt/hadoop/hadoop-3.1.1/etc/hadoop/core-site.xml
Add:
<configuration>
    <property>
        <name>fs.defaultFS</name>
        <value>hdfs://automaticHACluster</value>
    </property>
    <property>
        <name>hadoop.tmp.dir</name>
        <value>/opt/hadoop/data/tmp/automatic_ha</value>
    </property>
    <property>
        <name>ha.zookeeper.quorum</name>
        <value>node02:2181,node03:2181,node04:2181</value>
    </property>
</configuration>
On node01, edit /opt/hadoop/hadoop-3.1.1/etc/hadoop/hdfs-site.xml:
vim /opt/hadoop/hadoop-3.1.1/etc/hadoop/hdfs-site.xml
Add:
<configuration>
    <property>
        <name>dfs.replication</name>
        <value>2</value>
    </property>
    <property>
        <name>dfs.nameservices</name>
        <value>automaticHACluster</value>
    </property>
    <property>
        <name>dfs.ha.namenodes.automaticHACluster</name>
        <value>NN01,NN02,NN03</value>
    </property>
    <property>
        <name>dfs.namenode.rpc-address.automaticHACluster.NN01</name>
        <value>node01:8020</value>
    </property>
    <property>
        <name>dfs.namenode.rpc-address.automaticHACluster.NN02</name>
        <value>node02:8020</value>
    </property>
    <property>
        <name>dfs.namenode.rpc-address.automaticHACluster.NN03</name>
        <value>node03:8020</value>
    </property>
    <property>
        <name>dfs.namenode.shared.edits.dir</name>
        <value>qjournal://node01:8485;node02:8485;node03:8485/automaticHACluster</value>
    </property>
    <property>
        <name>dfs.client.failover.proxy.provider.automaticHACluster</name>
        <value>org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider</value>
    </property>
    <property>
        <name>dfs.ha.fencing.methods</name>
        <value>sshfence</value>
    </property>
    <property>
        <name>dfs.ha.fencing.ssh.private-key-files</name>
        <value>/root/.ssh/id_rsa</value>
    </property>
    <property>
        <name>dfs.journalnode.edits.dir</name>
        <value>/opt/hadoop/data/tmp/automatic_ha</value>
    </property>
    <property>
        <name>dfs.ha.automatic-failover.enabled</name>
        <value>true</value>
    </property>
</configuration>
Copy /opt/hadoop/hadoop-3.1.1/etc/hadoop/core-site.xml and /opt/hadoop/hadoop-3.1.1/etc/hadoop/hdfs-site.xml from node01 to node02, node03, and node04:
scp /opt/hadoop/hadoop-3.1.1/etc/hadoop/core-site.xml /opt/hadoop/hadoop-3.1.1/etc/hadoop/hdfs-site.xml node02:/opt/hadoop/hadoop-3.1.1/etc/hadoop/
scp /opt/hadoop/hadoop-3.1.1/etc/hadoop/core-site.xml /opt/hadoop/hadoop-3.1.1/etc/hadoop/hdfs-site.xml node03:/opt/hadoop/hadoop-3.1.1/etc/hadoop/
scp /opt/hadoop/hadoop-3.1.1/etc/hadoop/core-site.xml /opt/hadoop/hadoop-3.1.1/etc/hadoop/hdfs-site.xml node04:/opt/hadoop/hadoop-3.1.1/etc/hadoop/
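Equivalently, the two config files can be pushed out in a small loop. This sketch only prints each scp command so it can be reviewed first; drop the leading `echo` to actually copy (it assumes passwordless ssh from node01 to the other nodes):

```shell
# push the HA configs from node01 to the other nodes
CONF=/opt/hadoop/hadoop-3.1.1/etc/hadoop
for h in node02 node03 node04; do
    # remove the leading "echo" to perform the copy
    echo scp "$CONF/core-site.xml" "$CONF/hdfs-site.xml" "$h:$CONF/"
done
```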
- Install ZooKeeper on node02, node03, and node04
On node02, node03, and node04, run:
tar -zxvf zookeeper-3.4.9.tar.gz -C /opt/zookeeper/
- Configure ZooKeeper on node02, node03, and node04
On node02, edit /opt/zookeeper/zookeeper-3.4.9/conf/zoo.cfg:
cp /opt/zookeeper/zookeeper-3.4.9/conf/zoo_sample.cfg /opt/zookeeper/zookeeper-3.4.9/conf/zoo.cfg
vim /opt/zookeeper/zookeeper-3.4.9/conf/zoo.cfg
Change:
dataDir=/opt/zookeeper/data/tmp
Add:
server.1=192.168.163.192:2881:3881
server.2=192.168.163.193:2881:3881
server.3=192.168.163.194:2881:3881
Copy /opt/zookeeper/zookeeper-3.4.9/conf/zoo.cfg from node02 to node03 and node04:
scp /opt/zookeeper/zookeeper-3.4.9/conf/zoo.cfg node03:/opt/zookeeper/zookeeper-3.4.9/conf/
scp /opt/zookeeper/zookeeper-3.4.9/conf/zoo.cfg node04:/opt/zookeeper/zookeeper-3.4.9/conf/
On node02, run:
mkdir -p /opt/zookeeper/data/tmp && echo 1 > /opt/zookeeper/data/tmp/myid
On node03, run:
mkdir -p /opt/zookeeper/data/tmp && echo 2 > /opt/zookeeper/data/tmp/myid
On node04, run:
mkdir -p /opt/zookeeper/data/tmp && echo 3 > /opt/zookeeper/data/tmp/myid
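The three myid steps can also be driven from one host in a loop. A sketch that prints the per-host command; drop the leading `echo` to run it (assumes passwordless root ssh to node02-node04):

```shell
# myid must be 1 on node02, 2 on node03, 3 on node04,
# matching the server.N lines in zoo.cfg
i=1
for h in node02 node03 node04; do
    # remove the leading "echo" to execute over ssh
    echo ssh "$h" "mkdir -p /opt/zookeeper/data/tmp && echo $i > /opt/zookeeper/data/tmp/myid"
    i=$((i+1))
done
```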
- Configure environment variables on node01, node02, node03, and node04
On node01, edit /etc/profile:
vim /etc/profile
Add:
export HDFS_ZKFC_USER=root
On node02, edit /etc/profile:
vim /etc/profile
Add:
export ZOOKEEPER_PREFIX=/opt/zookeeper/zookeeper-3.4.9
export PATH=$PATH:$ZOOKEEPER_PREFIX/bin
export HDFS_ZKFC_USER=root
On node03, edit /etc/profile:
vim /etc/profile
Add:
export ZOOKEEPER_PREFIX=/opt/zookeeper/zookeeper-3.4.9
export PATH=$PATH:$ZOOKEEPER_PREFIX/bin
export HDFS_ZKFC_USER=root
On node04, edit /etc/profile:
vim /etc/profile
Add:
export ZOOKEEPER_PREFIX=/opt/zookeeper/zookeeper-3.4.9
export PATH=$PATH:$ZOOKEEPER_PREFIX/bin
On node01, node02, node03, and node04, run:
. /etc/profile
- Start the JournalNodes
On node01, node02, and node03, run:
hdfs --daemon start journalnode
- Format the NameNode
On node01, run:
hdfs namenode -format
- Start the first NameNode
On node01, run:
hdfs --daemon start namenode
On node02 and node03, run:
hdfs namenode -bootstrapStandby
- Start ZooKeeper
On node02, node03, and node04, run:
zkServer.sh start
- Format ZooKeeper
hdfs zkfc -formatZK needs a running ZooKeeper quorum, so this step must come after ZooKeeper is started. On node01, run:
hdfs zkfc -formatZK
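Once all three servers are up, the quorum can be sanity-checked on each of node02, node03, and node04 (a sketch, assuming zkServer.sh is on the PATH as configured above):

```shell
zkServer.sh status
# exactly one of the three nodes should report "Mode: leader",
# the other two "Mode: follower"
```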
- Start Hadoop
start-dfs.sh starts the remaining HDFS daemons across the whole cluster over ssh, so it only needs to be run once. On node01, run:
start-dfs.sh
- Check the processes
On node01, node02, node03, and node04, run:
jps
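If everything came up, the jps output should roughly match the role table at the top. Based on that role assignment, something like (process names as reported by jps; PIDs will differ):

```text
node01: NameNode, JournalNode, DFSZKFailoverController
node02: NameNode, DataNode, JournalNode, DFSZKFailoverController, QuorumPeerMain
node03: NameNode, DataNode, JournalNode, DFSZKFailoverController, QuorumPeerMain
node04: DataNode, QuorumPeerMain
```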
- Access the web UIs
NameNode01:http://192.168.163.191:9870
NameNode02:http://192.168.163.192:9870
NameNode03:http://192.168.163.193:9870
DataNode01:http://192.168.163.192:9864
DataNode02:http://192.168.163.193:9864
DataNode03:http://192.168.163.194:9864
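To confirm that automatic failover actually works, a quick smoke test (a sketch; NN01-NN03 are the dfs.ha.namenodes IDs configured in hdfs-site.xml):

```shell
# show which NameNode is currently active
hdfs haadmin -getAllServiceState
# stop the active NameNode (run this on whichever node is active)
hdfs --daemon stop namenode
# after a few seconds, one of the standbys should report active
hdfs haadmin -getAllServiceState
# restart the stopped NameNode; it rejoins as standby
hdfs --daemon start namenode
```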