一、The Three Core Components of Hadoop
HDFS ----------> data storage
MapReduce ----> job computation framework
YARN ----------> resource scheduling
二、HDFS
1、Start HDFS and check its processes
```
[hadoop@hadoop001 hadoop]$ sbin/start-dfs.sh
Starting namenodes on [hadoop001]
hadoop001: starting namenode, logging to /opt/sourcecode/hadoop-2.8.1/logs/hadoop-hadoop-namenode-hadoop001.out
hadoop001: starting datanode, logging to /opt/sourcecode/hadoop-2.8.1/logs/hadoop-hadoop-datanode-hadoop001.out
Starting secondary namenodes [hadoop001]
hadoop001: starting secondarynamenode, logging to /opt/sourcecode/hadoop-2.8.1/logs/hadoop-hadoop-secondarynamenode-hadoop001.out
```
2、As the output shows, HDFS consists of three processes: NameNode (nn), DataNode (dn), and SecondaryNameNode (snn).
3、The NameNode starts on 192.168.187.111 (hadoop001), as configured in core-site.xml:
```
<property>
    <name>fs.defaultFS</name>
    <value>hdfs://192.168.187.111:9000</value>
</property>
```
4、The DataNode hosts come from the etc/hadoop/slaves file. Its default content is localhost, which should be changed to hadoop001.
```
[hadoop@hadoop001 hadoop]$ cat slaves
hadoop001
```
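The edit to the slaves file can also be scripted. A minimal sketch follows; the `./conf-demo` directory is a stand-in for the real `$HADOOP_HOME/etc/hadoop`:

```shell
# Sketch: overwrite the slaves file so the DataNode starts on hadoop001.
# ./conf-demo is a demo directory; on a real node use $HADOOP_HOME/etc/hadoop.
CONF_DIR=./conf-demo
mkdir -p "$CONF_DIR"
echo "hadoop001" > "$CONF_DIR/slaves"
cat "$CONF_DIR/slaves"
```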
5、According to the official documentation, the SecondaryNameNode binds to 0.0.0.0 by default; to change this, add the following to hdfs-site.xml:
```
<property>
    <name>dfs.namenode.secondary.http-address</name>
    <value>192.168.187.111:50090</value>
</property>
<property>
    <name>dfs.namenode.secondary.https-address</name>
    <value>192.168.187.111:50091</value>
</property>
```
6、If the settings above are left at their defaults, starting HDFS looks like this: the DataNode and SecondaryNameNode start from localhost and 0.0.0.0 respectively, and you are prompted for a password.
```
[hadoop@hadoop001 hadoop]$ sbin/start-dfs.sh
Starting namenodes on [hadoop001]
hadoop001: starting namenode, logging to /opt/sourcecode/hadoop-2.8.1/logs/hadoop-hadoop-namenode-hadoop001.out
localhost: starting datanode, logging to /opt/sourcecode/hadoop-2.8.1/logs/hadoop-hadoop-datanode-hadoop001.out
Starting secondary namenodes [hadoop001]
0.0.0.0: starting secondarynamenode, logging to /opt/sourcecode/hadoop-2.8.1/logs/hadoop-hadoop-secondarynamenode-hadoop001.out
```
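The password prompts appear because start-dfs.sh connects to each host over SSH. The usual fix is passwordless SSH; the sketch below generates a key pair into a demo directory (in practice the key lives in ~/.ssh and the public key is appended to ~/.ssh/authorized_keys on each slave host, e.g. via ssh-copy-id):

```shell
# Sketch: create an SSH key pair so start-dfs.sh can log in without a password.
# Demo paths only -- on a real node use ~/.ssh instead of ./ssh-demo.
KEYDIR=./ssh-demo
mkdir -p "$KEYDIR"
ssh-keygen -t rsa -N "" -f "$KEYDIR/id_rsa" -q         # empty passphrase
cat "$KEYDIR/id_rsa.pub" >> "$KEYDIR/authorized_keys"  # authorize the key
chmod 600 "$KEYDIR/authorized_keys"
```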
7、According to the official documentation, starting the HDFS processes requires configuring two files: core-site.xml and hdfs-site.xml.
core-site.xml configuration:
```
<configuration>
    <property>
        <name>fs.defaultFS</name>
        <value>hdfs://localhost:9000</value>
    </property>
</configuration>
```
hdfs-site.xml configuration (a fully distributed cluster should use a replication factor of 3; the value 1 here is for pseudo-distributed mode):
```
<configuration>
    <property>
        <name>dfs.replication</name>
        <value>1</value>
    </property>
</configuration>
```
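These site files can also be installed from the shell with a here-document. A minimal sketch for core-site.xml, again using `./conf-demo` as a stand-in for `$HADOOP_HOME/etc/hadoop`:

```shell
# Sketch: write a minimal pseudo-distributed core-site.xml.
# ./conf-demo is a demo directory; use $HADOOP_HOME/etc/hadoop on a real node.
CONF_DIR=./conf-demo
mkdir -p "$CONF_DIR"
cat > "$CONF_DIR/core-site.xml" <<'EOF'
<configuration>
    <property>
        <name>fs.defaultFS</name>
        <value>hdfs://localhost:9000</value>
    </property>
</configuration>
EOF
grep '<value>' "$CONF_DIR/core-site.xml"   # quick sanity check
```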
三、MapReduce
MapReduce itself has no resident process; a process exists only while a MapReduce job is running.
You can run a MapReduce job on YARN in a pseudo-distributed mode by setting a few parameters and running ResourceManager daemon and NodeManager daemon in addition.
This sentence from the official documentation shows that MapReduce jobs run on YARN, so mapred-site.xml must be configured as follows:
```
<configuration>
    <property>
        <name>mapreduce.framework.name</name>
        <value>yarn</value>
    </property>
</configuration>
```
四、YARN
1、Before the YARN processes can start, yarn-site.xml must be configured as follows:
```
<configuration>
    <property>
        <name>yarn.nodemanager.aux-services</name>
        <value>mapreduce_shuffle</value>
    </property>
</configuration>
```
2、Start the YARN processes
```
[hadoop@hadoop001 hadoop]$ sbin/start-yarn.sh
starting yarn daemons
starting resourcemanager, logging to /opt/sourcecode/hadoop-2.8.1/logs/yarn-hadoop-resourcemanager-hadoop001.out
hadoop001: starting nodemanager, logging to /opt/sourcecode/hadoop-2.8.1/logs/yarn-hadoop-nodemanager-hadoop001.out
```
This shows that YARN has two processes, ResourceManager and NodeManager. According to the official documentation, once YARN is running you can visit http://localhost:8088/ to check cluster health (memory, disk, jobs, I/O, and so on).
While a MapReduce job runs, its status (RUNNING, SUCCEEDED, FAILED) can be checked in the same web UI.