Setting Up a Hadoop-HA + ZooKeeper + Yarn + Hive Environment

Prerequisite: a working Hadoop-HA + ZooKeeper + Yarn environment

The cluster consists of four nodes (node01–node04) running the following roles:

| Role | Instances |
| --- | --- |
| NameNode | NameNode01, NameNode02, NameNode03 |
| DataNode | DataNode01, DataNode02, DataNode03 |
| JournalNode | JournalNode01, JournalNode02, JournalNode03 |
| ZooKeeper | ZooKeeper01, ZooKeeper02, ZooKeeper03 |
| ZooKeeperFailoverController | ZooKeeperFailoverController01, ZooKeeperFailoverController02, ZooKeeperFailoverController03 |
| ResourceManager | ResourceManager01, ResourceManager02 |
| NodeManager | NodeManager01, NodeManager02, NodeManager03 |
| Hive | MySQL Server (node01), MetaStore Server (node03), Hive CLI (node04) |
  1. Configure the MySQL service on node01

Install MySQL:
yum install mysql-server -y
Start the MySQL service:
service mysqld start
Log in to MySQL:
mysql -u root -p
Adjust the MySQL privileges (run inside the mysql client; the user table lives in the mysql system database):
USE mysql;
GRANT ALL PRIVILEGES ON *.* TO 'root'@'%' IDENTIFIED BY '123123' WITH GRANT OPTION;
DELETE FROM user WHERE host != '%';
FLUSH PRIVILEGES;
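Before pointing Hive at this MySQL instance, it can help to confirm that node01 is reachable on port 3306 from node03 and node04. A minimal Python sketch of such a check (the hostname below is just this tutorial's node name; any host/port pair works):

```python
import socket

def port_open(host: str, port: int, timeout: float = 2.0) -> bool:
    """Return True if a TCP connection to host:port succeeds within the timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:  # covers refused connections and timeouts
        return False

# Example: check the MySQL port on node01 before configuring Hive.
# port_open("node01", 3306)
```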

  2. Install Hive on node03 and node04

tar -zxvf apache-hive-2.3.4-bin.tar.gz -C /opt/hive/

  3. Configure Hive on node03 and node04

On node03, create /opt/hive/apache-hive-2.3.4-bin/conf/hive-site.xml from the template and edit it:
cp /opt/hive/apache-hive-2.3.4-bin/conf/hive-default.xml.template /opt/hive/apache-hive-2.3.4-bin/conf/hive-site.xml
vim /opt/hive/apache-hive-2.3.4-bin/conf/hive-site.xml
Set the following properties:

<configuration>
  <property>  
    <name>hive.metastore.warehouse.dir</name>  
    <value>/hive</value>  
  </property>  
  <property>  
    <name>javax.jdo.option.ConnectionURL</name>  
    <value>jdbc:mysql://node01:3306/hive?createDatabaseIfNotExist=true</value>  
  </property>  
  <property>  
    <name>javax.jdo.option.ConnectionDriverName</name>  
    <value>com.mysql.jdbc.Driver</value>  
  </property>     
  <property>  
    <name>javax.jdo.option.ConnectionUserName</name>  
    <value>root</value>  
  </property>  
  <property>  
    <name>javax.jdo.option.ConnectionPassword</name>  
    <value>123123</value>  
  </property>
</configuration>
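Hadoop and Hive read these *-site.xml files as flat name/value pairs. As a quick sanity check of a file's structure, the properties can be pulled into a dict with the Python standard library (the embedded XML below is a shortened copy of the config above):

```python
import xml.etree.ElementTree as ET

# Shortened copy of the hive-site.xml shown above.
HIVE_SITE = """<configuration>
  <property>
    <name>hive.metastore.warehouse.dir</name>
    <value>/hive</value>
  </property>
  <property>
    <name>javax.jdo.option.ConnectionURL</name>
    <value>jdbc:mysql://node01:3306/hive?createDatabaseIfNotExist=true</value>
  </property>
</configuration>"""

def parse_hadoop_conf(xml_text: str) -> dict:
    """Read a Hadoop-style <configuration> document into a {name: value} dict."""
    root = ET.fromstring(xml_text)
    return {p.findtext("name"): p.findtext("value") for p in root.findall("property")}

conf = parse_hadoop_conf(HIVE_SITE)
```

A malformed file raises `ET.ParseError`, which catches broken edits before Hive ever reads the config.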

On node04, create /opt/hive/apache-hive-2.3.4-bin/conf/hive-site.xml from the template and edit it:
cp /opt/hive/apache-hive-2.3.4-bin/conf/hive-default.xml.template /opt/hive/apache-hive-2.3.4-bin/conf/hive-site.xml
vim /opt/hive/apache-hive-2.3.4-bin/conf/hive-site.xml
Set the following properties:

<configuration>
  <property>  
    <name>hive.metastore.warehouse.dir</name>  
    <value>/hive</value>  
  </property>  
  <property>  
    <name>hive.metastore.uris</name>  
    <value>thrift://node03:9083</value>  
  </property> 
</configuration>
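The hive.metastore.uris value tells this node's Hive client where the remote metastore runs, instead of opening the MySQL database directly. A small sketch of how such a thrift:// URI breaks down into host and port:

```python
from urllib.parse import urlparse

def metastore_endpoint(uri: str) -> tuple:
    """Split a hive.metastore.uris entry like thrift://host:port into (host, port)."""
    parsed = urlparse(uri)
    return parsed.hostname, parsed.port

# node04's client will contact the metastore service on node03.
host, port = metastore_endpoint("thrift://node03:9083")
```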
  4. Add the MySQL JDBC driver on node03

mv mysql-connector-java-5.1.32-bin.jar /opt/hive/apache-hive-2.3.4-bin/lib/

  5. Configure environment variables on node03 and node04

On node03 and node04, edit /etc/profile:
vim /etc/profile
Append:

export HIVE_HOME=/opt/hive/apache-hive-2.3.4-bin
export PATH=$PATH:$HIVE_HOME/bin

Then reload it on node03 and node04:
. /etc/profile
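After sourcing /etc/profile, commands such as hive and schematool should resolve via PATH. A small, hypothetical helper (not part of the setup itself) that expresses the condition the two export lines establish:

```python
import os

def hive_on_path(env: dict) -> bool:
    """Check that HIVE_HOME is set and that $HIVE_HOME/bin appears on PATH."""
    home = env.get("HIVE_HOME")
    if not home:
        return False
    return os.path.join(home, "bin") in env.get("PATH", "").split(":")

# In a correctly configured shell: hive_on_path(dict(os.environ)) -> True
```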

  6. Initialize the metastore database

On node03, run:
schematool -dbType mysql -initSchema

  7. Start the Hive metastore service

On node03, run:
hive --service metastore

  8. Start the Hive client

On node04, run:
hive

  9. Configure Hadoop on node01, node02, node03 and node04

On node01, node02, node03 and node04, edit /opt/hadoop/hadoop-3.1.1/etc/hadoop/core-site.xml:
vim /opt/hadoop/hadoop-3.1.1/etc/hadoop/core-site.xml
Add:

<property>
  <name>hadoop.proxyuser.root.groups</name>
  <value>*</value>
</property>
<property>
  <name>hadoop.proxyuser.root.hosts</name>
  <value>*</value>
</property>
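These two properties allow the root user to impersonate any user, from any host and any group, which HiveServer2 relies on when it runs queries on behalf of Beeline clients. A simplified Python sketch of the host check (real Hadoop also evaluates the users/groups lists and supports IP ranges):

```python
def proxy_allowed(conf: dict, proxy_user: str, host: str) -> bool:
    """Simplified sketch of the hadoop.proxyuser.<user>.hosts check:
    '*' matches any host; otherwise the connecting host must be listed."""
    allowed = conf.get("hadoop.proxyuser.%s.hosts" % proxy_user, "")
    if allowed == "*":
        return True
    return host in [h.strip() for h in allowed.split(",") if h.strip()]

# The config added above: root may proxy from any host.
conf = {"hadoop.proxyuser.root.hosts": "*", "hadoop.proxyuser.root.groups": "*"}
```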
  10. Refresh the Hadoop configuration (the proxy-user settings take effect without a full restart)

On node01, node02 and node03, run:
hdfs dfsadmin -fs hdfs://node01:8020 -refreshSuperUserGroupsConfiguration

  11. Start HiveServer2

On node03, run:
hiveserver2

  12. Start the Beeline client

On node04, run:
beeline
!connect jdbc:hive2://node03:10000 root 1
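The connection string names the HiveServer2 host and its default Thrift port 10000. A small, assumption-laden parser for the simple form of that URL (the real JDBC URL syntax also allows a database name and session parameters, which are ignored here):

```python
def parse_hive2_url(url: str) -> tuple:
    """Split a Beeline connection string like jdbc:hive2://host:port
    into (host, port). Only the simple URL form is handled."""
    prefix = "jdbc:hive2://"
    if not url.startswith(prefix):
        raise ValueError("not a hive2 JDBC URL: %s" % url)
    hostport = url[len(prefix):].split("/", 1)[0]  # drop any trailing /db part
    host, _, port = hostport.partition(":")
    return host, int(port)
```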

  13. Check the running processes

On node01, node02, node03 and node04, run:
jps

    Original author: 上杉丶零
    Original article: https://www.jianshu.com/p/df1ea4d19469
    This article is reposted from the web to share knowledge; if it infringes your rights, please contact the blogger for removal.