[Learning Hadoop] Hadoop 2.9.0 standalone mode running successfully!!!

Preamble

I installed Ubuntu 17.10 fresh today and it feels great — for how to set up the dual boot, see my earlier articles, which cover it in detail. So I set up Hadoop again from scratch, hoping it works this time. The system feels much closer to macOS now, and the desktop is a lot friendlier. That's the way things have to go: with today's memory sizes and CPU speeds, Linux will never see wide adoption if the user experience stays this poor. Happily, it now looks good and is much nicer to use.


Main text

1. Set up a user

Enough chatter — straight to business. First, create a dedicated user (and group) for Hadoop:

ubuntu@ubuntu:~$ sudo addgroup hadoop   
ubuntu@ubuntu:~$ sudo adduser --ingroup hadoop hadoop

Then grant it full sudo rights — though don't do this in production:

ubuntu@ubuntu:~$ sudo vim /etc/sudoers 

(screenshot)

The change is the single hadoop line shown in the screenshot. I'd advise using nano here: /etc/sudoers is deliberately read-only, which is why vim refused to save (sudo visudo is actually the proper way to edit it).
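For reference, the line added for the hadoop user — the one change the screenshot shows — is conventionally a full sudo grant. A sketch (fine for a sandbox, never for production):

```text
# /etc/sudoers — added below the existing "root  ALL=(ALL:ALL) ALL" line
hadoop  ALL=(ALL:ALL) ALL
```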

2. Configure the Java environment

Next up is the Java environment — easy! Download the JDK from the official site and unpack it into a directory of your choice; that directory becomes your JAVA_HOME. Here's mine:

(screenshot)

Then make these changes in /etc/profile:

(screenshot)

export JAVA_HOME=/home/hustwolf/Downloads/jdk-9.0.4/
export JRE_HOME=$JAVA_HOME/jre
export CLASSPATH=.:$CLASSPATH:$JAVA_HOME/lib:$JRE_HOME/lib
export PATH=$PATH:$JAVA_HOME/bin:$JRE_HOME/bin

Yet somehow I still couldn't run java as the hadoop user, so I added those same four lines to /home/hadoop/.profile and ran source /etc/profile, after which java worked. (One aside: JDK 9 no longer ships a jre/ subdirectory, so the JRE_HOME entries above point at nothing; they're harmless here but can be dropped.) The first way to check any install is java -version:

(screenshot)

3. Configure passwordless SSH login

Next, set up ssh:

hadoop@hustwolf-Inspiron-5447:~$ ssh-keygen -t rsa -P "" 
# if you're prompted for anything, just press Enter
hadoop@hustwolf-Inspiron-5447:~$ cd /home/hadoop/.ssh/
hadoop@hustwolf-Inspiron-5447:~/.ssh$ cat id_rsa.pub >>authorized_keys
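One gotcha worth heading off: sshd silently ignores key files whose permissions are too loose, and then ssh localhost keeps asking for a password. A precautionary fix (the mkdir/touch lines just make the snippet safe to run even if the files already exist):

```shell
# sshd rejects keys that are group- or world-accessible; tighten them
mkdir -p "$HOME/.ssh"
touch "$HOME/.ssh/authorized_keys"
chmod 700 "$HOME/.ssh"
chmod 600 "$HOME/.ssh/authorized_keys"
```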

Then test that it worked:

hadoop@hustwolf-Inspiron-5447:~/.ssh$ ssh localhost 
The authenticity of host 'localhost (127.0.0.1)' can't be established.
ECDSA key fingerprint is SHA256:ryMzL3S70JO+KrbTDABZWONf/dCp4g.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added 'localhost' (ECDSA) to the list of known hosts.
Welcome to Ubuntu 17.10 (GNU/Linux 4.13.0-25-generic x86_64)

 * Documentation:  https://help.ubuntu.com
 * Management:     https://landscape.canonical.com
 * Support:        https://ubuntu.com/advantage


0 packages can be updated.
0 security updates.

The programs included with the Ubuntu system are free software;
the exact distribution terms for each program are described in the
individual files in /usr/share/doc/*/copyright.

Ubuntu comes with ABSOLUTELY NO WARRANTY, to the extent permitted by
applicable law.

hadoop@hustwolf-Inspiron-5447:~$ exit
logout
Connection to localhost closed.

If your output looks like the above, you're all set.

4. Install Hadoop

Go find the latest Hadoop tarball (there's a download link in an earlier Hadoop article of mine) and unpack it:

tar -xvzf hadoop.tar.gz

That leaves you with a new directory; move it into the hadoop user's home directory, and you end up with this:

(screenshot)

(Sometimes the hadoop directory won't be readable/writable by your user. The quick fix is to open it up to everyone with sudo chmod -R 777 hadoop; cleaner would be sudo chown -R hadoop:hadoop hadoop.)

It's also worth checking which build of Hadoop you have:

(screenshot)

The last line of the output reads 64-bit LSB, so mine is a 64-bit build — a match for my machine.
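The check in that screenshot is presumably done with the file utility, which reports a binary's architecture — I'd aim it at one of Hadoop's native libraries, e.g. file ~/hadoop/lib/native/libhadoop.so.1.0.0 (a path assumed from the layout above). Demonstrated here on a binary every Linux system has:

```shell
# `file` prints a binary's format and word size; "64-bit" in the
# output means it matches a 64-bit machine (-L follows symlinks)
file -L /bin/sh
```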

5. Hadoop configuration files

Now for the configuration files. Start with hadoop-env.sh, which lives under hadoop/etc/hadoop:

(screenshot)
(screenshot: the directory I'm working from)
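For the record, the edit in hadoop-env.sh is usually just one thing: hard-coding JAVA_HOME, because Hadoop's scripts don't reliably pick it up from the login environment. A sketch reusing the JDK path from earlier (in stock Hadoop 2.x the file ships with export JAVA_HOME=${JAVA_HOME}, which is the line to replace):

```shell
# hadoop/etc/hadoop/hadoop-env.sh
# point JAVA_HOME at an absolute JDK path instead of ${JAVA_HOME}
export JAVA_HOME=/home/hustwolf/Downloads/jdk-9.0.4/
```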

Save the file and apply it:

hadoop@hustwolf-Inspiron-5447:~/hadoop$ source etc/hadoop/hadoop-env.sh 

Then add HADOOP_INSTALL to /etc/profile and extend PATH, ending up with:

hadoop@hustwolf-Inspiron-5447:~/hadoop$ sudo nano /etc/profile
export JAVA_HOME=/home/hustwolf/Downloads/jdk-9.0.4/
export JRE_HOME=$JAVA_HOME/jre
export HADOOP_INSTALL=/home/hadoop/hadoop
export CLASSPATH=.:$CLASSPATH:$JAVA_HOME/lib:$JRE_HOME/lib
export PATH=$PATH:$JAVA_HOME/bin:$JRE_HOME/bin:$HADOOP_INSTALL/bin:$HADOOP_INST$

export GST_ID3_TAG_ENCODING=GBK:UTF-8:GB18030
export GST_ID3V2_TAG_ENCODING=GBK:UTF-8:GB18030

(screenshot)

With that, the basic Hadoop environment is configured. Look at this!!

hadoop@hustwolf-Inspiron-5447:~/hadoop$ hadoop version
Hadoop 2.9.0
Subversion https://git-wip-us.apache.org/repos/asf/hadoop.git -r 756ebc8394e473ac25feac05fa493f6d612e6c50
Compiled by arsuresh on 2017-11-13T23:15Z
Compiled with protoc 2.5.0
From source with checksum 0a76a9a32a5257331741f8d5932f183
This command was run using /home/hadoop/hadoop/share/hadoop/common/hadoop-common-2.9.0.jar
hadoop@hustwolf-Inspiron-5447:~/hadoop$ 

But that's not the main event — that's coming up!!

6. The main event

Now let's run Hadoop's bundled wordcount example to get a feel for the MapReduce process.
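Before running it, here's a toy picture of what wordcount computes, as a plain shell pipeline — tokenize (map), sort (shuffle), count duplicates (reduce):

```shell
# a toy stand-in for wordcount: one word per line, sort, count
printf 'Hadoop counts words\nHadoop counts\n' \
  | tr -s ' ' '\n' \
  | LC_ALL=C sort \
  | uniq -c
# prints counts of: 2 Hadoop, 2 counts, 1 words (uniq -c pads the numbers)
```

The real thing does the same work split across map and reduce tasks, which is why the job log is so much noisier.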

Create an input directory inside the hadoop directory:

(screenshot)

Then I created an output directory too (which, as you'll see, turns out to be a mistake):

hadoop@hustwolf-Inspiron-5447:~/hadoop$ sudo mkdir output

Then copy something — anything — into input. Let me try:

hadoop@hustwolf-Inspiron-5447:~/hadoop$ sudo cp README.txt input/

Run the wordcount program, writing the results to output (mind the paths to input and to the jar):

(screenshot)

OK, tested. Now everyone go back and delete your output directory — wordcount insists on creating it itself!!!!

hadoop@hustwolf-Inspiron-5447:~/hadoop$ bin/hadoop jar share/hadoop/mapreduce/hadoop-mapreduce-examples-2.9.0.jar wordcount input/README.txt output
WARNING: An illegal reflective access operation has occurred
WARNING: Illegal reflective access by org.apache.hadoop.security.authentication.util.KerberosUtil (file:/home/hadoop/hadoop/share/hadoop/common/lib/hadoop-auth-2.9.0.jar) to method sun.security.krb5.Config.getInstance()
WARNING: Please consider reporting this to the maintainers of org.apache.hadoop.security.authentication.util.KerberosUtil
WARNING: Use --illegal-access=warn to enable warnings of further illegal reflective access operations
WARNING: All illegal access operations will be denied in a future release
2018-01-19 22:44:02,302 INFO  [main] Configuration.deprecation (Configuration.java:logDeprecation(1297)) - session.id is deprecated. Instead, use dfs.metrics.session-id

2018-01-19 22:44:02,306 INFO  [main] jvm.JvmMetrics (JvmMetrics.java:init(79)) - Initializing JVM Metrics with processName=JobTracker, sessionId=
org.apache.hadoop.mapred.FileAlreadyExistsException: Output directory file:/home/hadoop/hadoop/output already exists

I can't follow most of that, but the last line is clear enough — so I deleted the directory without hesitation:

hadoop@hustwolf-Inspiron-5447:~/hadoop$ sudo rmdir output
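A small caveat on that rmdir: it only removes empty directories, which works here because the failed run never wrote anything into output. After a successful run the directory contains files, so clearing it for a re-run takes rm -r. A self-contained illustration:

```shell
# rmdir refuses a non-empty directory; rm -r removes it and its contents
d=$(mktemp -d)
touch "$d/part-r-00000"
rmdir "$d" 2>/dev/null || echo "rmdir refused: directory not empty"
rm -r "$d"   # succeeds
```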

Then run this:

hadoop@hustwolf-Inspiron-5447:~/hadoop$ bin/hadoop jar share/hadoop/mapreduce/hadoop-mapreduce-examples-2.9.0.jar wordcount input/README.txt output

And here come the results!!! Yes, you read that right — standalone mode!! Success!!

WARNING: An illegal reflective access operation has occurred
WARNING: Illegal reflective access by org.apache.hadoop.security.authentication.util.KerberosUtil (file:/home/hadoop/hadoop/share/hadoop/common/lib/hadoop-auth-2.9.0.jar) to method sun.security.krb5.Config.getInstance()
WARNING: Please consider reporting this to the maintainers of org.apache.hadoop.security.authentication.util.KerberosUtil
WARNING: Use --illegal-access=warn to enable warnings of further illegal reflective access operations
WARNING: All illegal access operations will be denied in a future release
2018-01-19 22:44:27,302 INFO  [main] Configuration.deprecation (Configuration.java:logDeprecation(1297)) - session.id is deprecated. Instead, use dfs.metrics.session-id
2018-01-19 22:44:27,308 INFO  [main] jvm.JvmMetrics (JvmMetrics.java:init(79)) - Initializing JVM Metrics with processName=JobTracker, sessionId=
2018-01-19 22:44:27,795 INFO  [main] input.FileInputFormat (FileInputFormat.java:listStatus(289)) - Total input files to process : 1
2018-01-19 22:44:27,912 INFO  [main] mapreduce.JobSubmitter (JobSubmitter.java:submitJobInternal(204)) - number of splits:1
2018-01-19 22:44:28,606 INFO  [main] mapreduce.JobSubmitter (JobSubmitter.java:printTokens(300)) - Submitting tokens for job: job_local1329426328_0001
2018-01-19 22:44:28,934 INFO  [main] mapreduce.Job (Job.java:submit(1574)) - The url to track the job: http://localhost:8080/
2018-01-19 22:44:28,937 INFO  [main] mapreduce.Job (Job.java:monitorAndPrintJob(1619)) - Running job: job_local1329426328_0001
2018-01-19 22:44:28,940 INFO  [Thread-3] mapred.LocalJobRunner (LocalJobRunner.java:createOutputCommitter(500)) - OutputCommitter set in config null
2018-01-19 22:44:28,955 INFO  [Thread-3] output.FileOutputCommitter (FileOutputCommitter.java:<init>(123)) - File Output Committer Algorithm version is 1
2018-01-19 22:44:28,956 INFO  [Thread-3] output.FileOutputCommitter (FileOutputCommitter.java:<init>(138)) - FileOutputCommitter skip cleanup _temporary folders under output directory:false, ignore cleanup failures: false
2018-01-19 22:44:28,957 INFO  [Thread-3] mapred.LocalJobRunner (LocalJobRunner.java:createOutputCommitter(518)) - OutputCommitter is org.apache.hadoop.mapreduce.lib.output.FileOutputCommitter
2018-01-19 22:44:29,070 INFO  [Thread-3] mapred.LocalJobRunner (LocalJobRunner.java:runTasks(477)) - Waiting for map tasks
2018-01-19 22:44:29,071 INFO  [LocalJobRunner Map Task Executor #0] mapred.LocalJobRunner (LocalJobRunner.java:run(251)) - Starting task: attempt_local1329426328_0001_m_000000_0
2018-01-19 22:44:29,150 INFO  [LocalJobRunner Map Task Executor #0] output.FileOutputCommitter (FileOutputCommitter.java:<init>(123)) - File Output Committer Algorithm version is 1
2018-01-19 22:44:29,150 INFO  [LocalJobRunner Map Task Executor #0] output.FileOutputCommitter (FileOutputCommitter.java:<init>(138)) - FileOutputCommitter skip cleanup _temporary folders under output directory:false, ignore cleanup failures: false
2018-01-19 22:44:29,180 INFO  [LocalJobRunner Map Task Executor #0] mapred.Task (Task.java:initialize(619)) -  Using ResourceCalculatorProcessTree : [ ]
2018-01-19 22:44:29,188 INFO  [LocalJobRunner Map Task Executor #0] mapred.MapTask (MapTask.java:runNewMapper(762)) - Processing split: file:/home/hadoop/hadoop/input/README.txt:0+1366
2018-01-19 22:44:29,255 INFO  [LocalJobRunner Map Task Executor #0] mapred.MapTask (MapTask.java:setEquator(1212)) - (EQUATOR) 0 kvi 26214396(104857584)
2018-01-19 22:44:29,255 INFO  [LocalJobRunner Map Task Executor #0] mapred.MapTask (MapTask.java:init(1005)) - mapreduce.task.io.sort.mb: 100
2018-01-19 22:44:29,255 INFO  [LocalJobRunner Map Task Executor #0] mapred.MapTask (MapTask.java:init(1006)) - soft limit at 83886080
2018-01-19 22:44:29,256 INFO  [LocalJobRunner Map Task Executor #0] mapred.MapTask (MapTask.java:init(1007)) - bufstart = 0; bufvoid = 104857600
2018-01-19 22:44:29,256 INFO  [LocalJobRunner Map Task Executor #0] mapred.MapTask (MapTask.java:init(1008)) - kvstart = 26214396; length = 6553600
2018-01-19 22:44:29,262 INFO  [LocalJobRunner Map Task Executor #0] mapred.MapTask (MapTask.java:createSortingCollector(403)) - Map output collector class = org.apache.hadoop.mapred.MapTask$MapOutputBuffer
2018-01-19 22:44:29,279 INFO  [LocalJobRunner Map Task Executor #0] mapred.LocalJobRunner (LocalJobRunner.java:statusUpdate(620)) - 
2018-01-19 22:44:29,280 INFO  [LocalJobRunner Map Task Executor #0] mapred.MapTask (MapTask.java:flush(1469)) - Starting flush of map output
2018-01-19 22:44:29,280 INFO  [LocalJobRunner Map Task Executor #0] mapred.MapTask (MapTask.java:flush(1491)) - Spilling map output
2018-01-19 22:44:29,280 INFO  [LocalJobRunner Map Task Executor #0] mapred.MapTask (MapTask.java:flush(1492)) - bufstart = 0; bufend = 2055; bufvoid = 104857600
2018-01-19 22:44:29,280 INFO  [LocalJobRunner Map Task Executor #0] mapred.MapTask (MapTask.java:flush(1494)) - kvstart = 26214396(104857584); kvend = 26213684(104854736); length = 713/6553600
2018-01-19 22:44:29,391 INFO  [LocalJobRunner Map Task Executor #0] mapred.MapTask (MapTask.java:sortAndSpill(1681)) - Finished spill 0
2018-01-19 22:44:29,441 INFO  [LocalJobRunner Map Task Executor #0] mapred.Task (Task.java:done(1099)) - Task:attempt_local1329426328_0001_m_000000_0 is done. And is in the process of committing
2018-01-19 22:44:29,458 INFO  [LocalJobRunner Map Task Executor #0] mapred.LocalJobRunner (LocalJobRunner.java:statusUpdate(620)) - map
2018-01-19 22:44:29,458 INFO  [LocalJobRunner Map Task Executor #0] mapred.Task (Task.java:sendDone(1219)) - Task 'attempt_local1329426328_0001_m_000000_0' done.
2018-01-19 22:44:29,458 INFO  [LocalJobRunner Map Task Executor #0] mapred.LocalJobRunner (LocalJobRunner.java:run(276)) - Finishing task: attempt_local1329426328_0001_m_000000_0
2018-01-19 22:44:29,459 INFO  [Thread-3] mapred.LocalJobRunner (LocalJobRunner.java:runTasks(485)) - map task executor complete.
2018-01-19 22:44:29,464 INFO  [Thread-3] mapred.LocalJobRunner (LocalJobRunner.java:runTasks(477)) - Waiting for reduce tasks
2018-01-19 22:44:29,464 INFO  [pool-4-thread-1] mapred.LocalJobRunner (LocalJobRunner.java:run(329)) - Starting task: attempt_local1329426328_0001_r_000000_0
2018-01-19 22:44:29,478 INFO  [pool-4-thread-1] output.FileOutputCommitter (FileOutputCommitter.java:<init>(123)) - File Output Committer Algorithm version is 1
2018-01-19 22:44:29,478 INFO  [pool-4-thread-1] output.FileOutputCommitter (FileOutputCommitter.java:<init>(138)) - FileOutputCommitter skip cleanup _temporary folders under output directory:false, ignore cleanup failures: false
2018-01-19 22:44:29,479 INFO  [pool-4-thread-1] mapred.Task (Task.java:initialize(619)) -  Using ResourceCalculatorProcessTree : [ ]
2018-01-19 22:44:29,502 INFO  [pool-4-thread-1] mapred.ReduceTask (ReduceTask.java:run(362)) - Using ShuffleConsumerPlugin: org.apache.hadoop.mapreduce.task.reduce.Shuffle@59bca4a1
2018-01-19 22:44:29,521 INFO  [pool-4-thread-1] reduce.MergeManagerImpl (MergeManagerImpl.java:<init>(207)) - MergerManager: memoryLimit=375809632, maxSingleShuffleLimit=93952408, mergeThreshold=248034368, ioSortFactor=10, memToMemMergeOutputsThreshold=10
2018-01-19 22:44:29,524 INFO  [EventFetcher for fetching Map Completion Events] reduce.EventFetcher (EventFetcher.java:run(61)) - attempt_local1329426328_0001_r_000000_0 Thread started: EventFetcher for fetching Map Completion Events
2018-01-19 22:44:29,552 INFO  [localfetcher#1] reduce.LocalFetcher (LocalFetcher.java:copyMapOutput(145)) - localfetcher#1 about to shuffle output of map attempt_local1329426328_0001_m_000000_0 decomp: 1832 len: 1836 to MEMORY
2018-01-19 22:44:29,558 INFO  [localfetcher#1] reduce.InMemoryMapOutput (InMemoryMapOutput.java:doShuffle(93)) - Read 1832 bytes from map-output for attempt_local1329426328_0001_m_000000_0
2018-01-19 22:44:29,559 INFO  [localfetcher#1] reduce.MergeManagerImpl (MergeManagerImpl.java:closeInMemoryFile(322)) - closeInMemoryFile -> map-output of size: 1832, inMemoryMapOutputs.size() -> 1, commitMemory -> 0, usedMemory ->1832
2018-01-19 22:44:29,579 INFO  [EventFetcher for fetching Map Completion Events] reduce.EventFetcher (EventFetcher.java:run(76)) - EventFetcher is interrupted.. Returning
2018-01-19 22:44:29,580 INFO  [pool-4-thread-1] mapred.LocalJobRunner (LocalJobRunner.java:statusUpdate(620)) - 1 / 1 copied.
2018-01-19 22:44:29,580 INFO  [pool-4-thread-1] reduce.MergeManagerImpl (MergeManagerImpl.java:finalMerge(694)) - finalMerge called with 1 in-memory map-outputs and 0 on-disk map-outputs
2018-01-19 22:44:29,663 INFO  [pool-4-thread-1] mapred.Merger (Merger.java:merge(606)) - Merging 1 sorted segments
2018-01-19 22:44:29,664 INFO  [pool-4-thread-1] mapred.Merger (Merger.java:merge(705)) - Down to the last merge-pass, with 1 segments left of total size: 1823 bytes
2018-01-19 22:44:29,667 INFO  [pool-4-thread-1] reduce.MergeManagerImpl (MergeManagerImpl.java:finalMerge(761)) - Merged 1 segments, 1832 bytes to disk to satisfy reduce memory limit
2018-01-19 22:44:29,668 INFO  [pool-4-thread-1] reduce.MergeManagerImpl (MergeManagerImpl.java:finalMerge(791)) - Merging 1 files, 1836 bytes from disk
2018-01-19 22:44:29,669 INFO  [pool-4-thread-1] reduce.MergeManagerImpl (MergeManagerImpl.java:finalMerge(806)) - Merging 0 segments, 0 bytes from memory into reduce
2018-01-19 22:44:29,669 INFO  [pool-4-thread-1] mapred.Merger (Merger.java:merge(606)) - Merging 1 sorted segments
2018-01-19 22:44:29,670 INFO  [pool-4-thread-1] mapred.Merger (Merger.java:merge(705)) - Down to the last merge-pass, with 1 segments left of total size: 1823 bytes
2018-01-19 22:44:29,671 INFO  [pool-4-thread-1] mapred.LocalJobRunner (LocalJobRunner.java:statusUpdate(620)) - 1 / 1 copied.
2018-01-19 22:44:29,674 INFO  [pool-4-thread-1] Configuration.deprecation (Configuration.java:logDeprecation(1297)) - mapred.skip.on is deprecated. Instead, use mapreduce.job.skiprecords
2018-01-19 22:44:29,685 INFO  [pool-4-thread-1] mapred.Task (Task.java:done(1099)) - Task:attempt_local1329426328_0001_r_000000_0 is done. And is in the process of committing
2018-01-19 22:44:29,688 INFO  [pool-4-thread-1] mapred.LocalJobRunner (LocalJobRunner.java:statusUpdate(620)) - 1 / 1 copied.
2018-01-19 22:44:29,688 INFO  [pool-4-thread-1] mapred.Task (Task.java:commit(1260)) - Task attempt_local1329426328_0001_r_000000_0 is allowed to commit now
2018-01-19 22:44:29,690 INFO  [pool-4-thread-1] output.FileOutputCommitter (FileOutputCommitter.java:commitTask(582)) - Saved output of task 'attempt_local1329426328_0001_r_000000_0' to file:/home/hadoop/hadoop/output/_temporary/0/task_local1329426328_0001_r_000000
2018-01-19 22:44:29,691 INFO  [pool-4-thread-1] mapred.LocalJobRunner (LocalJobRunner.java:statusUpdate(620)) - reduce > reduce
2018-01-19 22:44:29,692 INFO  [pool-4-thread-1] mapred.Task (Task.java:sendDone(1219)) - Task 'attempt_local1329426328_0001_r_000000_0' done.
2018-01-19 22:44:29,692 INFO  [pool-4-thread-1] mapred.LocalJobRunner (LocalJobRunner.java:run(352)) - Finishing task: attempt_local1329426328_0001_r_000000_0
2018-01-19 22:44:29,693 INFO  [Thread-3] mapred.LocalJobRunner (LocalJobRunner.java:runTasks(485)) - reduce task executor complete.
2018-01-19 22:44:29,998 INFO  [main] mapreduce.Job (Job.java:monitorAndPrintJob(1640)) - Job job_local1329426328_0001 running in uber mode : false
2018-01-19 22:44:30,001 INFO  [main] mapreduce.Job (Job.java:monitorAndPrintJob(1647)) -  map 100% reduce 100%
2018-01-19 22:44:30,002 INFO  [main] mapreduce.Job (Job.java:monitorAndPrintJob(1658)) - Job job_local1329426328_0001 completed successfully
2018-01-19 22:44:30,039 INFO  [main] mapreduce.Job (Job.java:monitorAndPrintJob(1665)) - Counters: 30
    File System Counters
        FILE: Number of bytes read=613394
        FILE: Number of bytes written=1568868
        FILE: Number of read operations=0
        FILE: Number of large read operations=0
        FILE: Number of write operations=0
    Map-Reduce Framework
        Map input records=31
        Map output records=179
        Map output bytes=2055
        Map output materialized bytes=1836
        Input split bytes=106
        Combine input records=179
        Combine output records=131
        Reduce input groups=131
        Reduce shuffle bytes=1836
        Reduce input records=131
        Reduce output records=131
        Spilled Records=262
        Shuffled Maps =1
        Failed Shuffles=0
        Merged Map outputs=1
        GC time elapsed (ms)=16
        Total committed heap usage (bytes)=587202560
    Shuffle Errors
        BAD_ID=0
        CONNECTION=0
        IO_ERROR=0
        WRONG_LENGTH=0
        WRONG_MAP=0
        WRONG_REDUCE=0
    File Input Format Counters 
        Bytes Read=1366
    File Output Format Counters 
        Bytes Written=1326

If you can't be bothered to read all of that, just look at the final result. Here are my directory and my command!!

(screenshot)

hadoop@hustwolf-Inspiron-5447:~/hadoop/output$ cat part-r-00000 
(BIS),  1
(ECCN)  1
(TSU)   1
(see    1
5D002.C.1,  1
740.13) 1
<http://www.wassenaar.org/> 1
Administration  1
Apache  1
BEFORE  1
BIS 1
Bureau  1
Commerce,   1
Commodity   1
Control 1
Core    1
Department  1
ENC 1
Exception   1
Export  2
For 1
Foundation  1
Government  1
Hadoop  1
Hadoop, 1
Industry    1
Jetty   1
License 1
Number  1
Regulations,    1
SSL 1
Section 1
Security    1
See 1
Software    2
Technology  1
The 4
This    1
U.S.    1
Unrestricted    1
about   1
algorithms. 1
and 6
and/or  1
another 1
any 1
as  1
asymmetric  1
at: 2
both    1
by  1
check   1
classified  1
code    1
code.   1
concerning  1
country 1
country's   1
country,    1
cryptographic   3
currently   1
details 1
distribution    2
eligible    1
encryption  3
exception   1
export  1
following   1
for 3
form    1
from    1
functions   1
has 1
have    1
http://hadoop.apache.org/core/  1
http://wiki.apache.org/hadoop/  1
if  1
import, 2
in  1
included    1
includes    2
information 2
information.    1
is  1
it  1
latest  1
laws,   1
libraries   1
makes   1
manner  1
may 1
more    2
mortbay.org.    1
object  1
of  5
on  2
or  2
our 2
performing  1
permitted.  1
please  2
policies    1
possession, 2
project 1
provides    1
re-export   2
regulations 1
reside  1
restrictions    1
security    1
see 1
software    2
software,   2
software.   2
software:   1
source  1
the 8
this    3
to  2
under   1
use,    2
uses    1
using   2
visit   1
website 1
which   2
wiki,   1
with    1
written 1
you 1
your    1

What a fine thing. Standalone mode — success!!!! On the first try!!! So my earlier failures really were down to that underpowered cloud server, huh?

That's all from me!!! Happy!

Postscript

I admit I was following a recipe. But no matter — here's the tutorial I worked from:

最新版hadoop2.7.1单机版与伪分布式安装配置 ("Standalone and pseudo-distributed installation and configuration of the latest Hadoop 2.7.1")

That author wrote even more than I did — but I have my advantages!! Mine is newer, and I can answer questions: leave a comment or message me if anything's unclear; I'm online all the time!

    Original author: HustWolf
    Original link: https://www.jianshu.com/p/92f94eb5f7d2
    This article was reposted from the web to share knowledge; if it infringes, please contact the blogger for removal.