hadoop – "No space left on device" on Amazon EMR medium instances with S3

I am running a MapReduce job on Amazon EMR that creates 40 output files of roughly 130MB each. The last 9 reduce tasks fail with a "No space left on device" exception. Is this a misconfiguration of the cluster? The job runs without problems with fewer input files, fewer output files, and fewer reducers. Any help is greatly appreciated. Thanks!

Full stack trace below:

Error: java.io.IOException: No space left on device
at java.io.FileOutputStream.writeBytes(Native Method)
at java.io.FileOutputStream.write(FileOutputStream.java:345)
at java.io.BufferedOutputStream.write(BufferedOutputStream.java:122)
at java.security.DigestOutputStream.write(DigestOutputStream.java:148)
at com.amazon.ws.emr.hadoop.fs.s3n.MultipartUploadOutputStream.write(MultipartUploadOutputStream.java:135)
at org.apache.hadoop.fs.FSDataOutputStream$PositionCache.write(FSDataOutputStream.java:60)
at java.io.DataOutputStream.write(DataOutputStream.java:107)
at org.apache.hadoop.io.compress.CompressorStream.compress(CompressorStream.java:83)
at org.apache.hadoop.io.compress.CompressorStream.finish(CompressorStream.java:92)
at org.apache.hadoop.io.compress.CompressorStream.close(CompressorStream.java:105)
at java.io.FilterOutputStream.close(FilterOutputStream.java:160)
at org.apache.hadoop.mapreduce.lib.output.TextOutputFormat$LineRecordWriter.close(TextOutputFormat.java:111)
at org.apache.hadoop.mapred.ReduceTask$NewTrackingRecordWriter.close(ReduceTask.java:558)
at org.apache.hadoop.mapred.ReduceTask.runNewReducer(ReduceTask.java:637)
at org.apache.hadoop.mapred.ReduceTask.run(ReduceTask.java:390)
at org.apache.hadoop.mapred.YarnChild$2.run(YarnChild.java:167)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:415)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1548)
at org.apache.hadoop.mapred.YarnChild.main(YarnChild.java:162)

EDIT

I did some further experimentation, but unfortunately I am still getting errors.
I thought my instances might not have enough space because of the replication factor mentioned in the comments below, so I switched from the medium instances I had been experimenting with up to now to large ones. But this time I got a different exception:

Error: java.io.IOException: Error closing multipart upload
at com.amazon.ws.emr.hadoop.fs.s3n.MultipartUploadOutputStream.uploadMultiParts(MultipartUploadOutputStream.java:207)
at com.amazon.ws.emr.hadoop.fs.s3n.MultipartUploadOutputStream.close(MultipartUploadOutputStream.java:222)
at org.apache.hadoop.fs.FSDataOutputStream$PositionCache.close(FSDataOutputStream.java:72)
at org.apache.hadoop.fs.FSDataOutputStream.close(FSDataOutputStream.java:105)
at org.apache.hadoop.io.compress.CompressorStream.close(CompressorStream.java:106)
at java.io.FilterOutputStream.close(FilterOutputStream.java:160)
at org.apache.hadoop.mapreduce.lib.output.TextOutputFormat$LineRecordWriter.close(TextOutputFormat.java:111)
at org.apache.hadoop.mapred.ReduceTask$NewTrackingRecordWriter.close(ReduceTask.java:558)
at org.apache.hadoop.mapred.ReduceTask.runNewReducer(ReduceTask.java:637)
at org.apache.hadoop.mapred.ReduceTask.run(ReduceTask.java:390)
at org.apache.hadoop.mapred.YarnChild$2.run(YarnChild.java:167)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:415)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1548)
at org.apache.hadoop.mapred.YarnChild.main(YarnChild.java:162)
Caused by: java.util.concurrent.ExecutionException: com.amazonaws.services.s3.model.AmazonS3Exception: The Content-MD5 you specified did not match what we received. (Service: Amazon S3; Status Code: 400; Error Code: BadDigest; 
at java.util.concurrent.FutureTask.report(FutureTask.java:122)
at java.util.concurrent.FutureTask.get(FutureTask.java:188) 

The result was that only about 70% of the expected output files were produced and the remaining reduce tasks failed. I then tried uploading a large file to my S3 bucket in case there was not enough space there, but that does not seem to be the problem.

I am using the AWS Elastic MapReduce service. Any ideas?

Best answer: The error means there is no space left to store the output (or temporary output) of your MapReduce job.

Some things to check:

> Have you deleted unnecessary files from HDFS? Run hadoop dfs -ls / to check the files stored on HDFS. (If you use the Trash, make sure to empty it.)
> Are you compressing the output (or temporary output) of the job? You can do this by setting SequenceFileOutputFormat as the output format, or by calling setCompressMapOutput(true); (see the sketch after this list).
> What is the replication factor? By default it is set to 3, but if space is the problem you can risk lowering it to 2 or 1 to get the job to run (this is also shown in the sketch below).
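
A minimal driver sketch of the last two points, assuming a standard new-API job (the class name, output path, and codec choice are illustrative, and the property names are stock Hadoop rather than anything EMR-specific; the old-API equivalent of the map-output property is JobConf.setCompressMapOutput(true)):

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.compress.GzipCodec;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;
import org.apache.hadoop.mapreduce.lib.output.SequenceFileOutputFormat;

public class CompressedOutputDriver {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        // Compress intermediate (map) output so shuffle spill files take less local disk
        conf.setBoolean("mapreduce.map.output.compress", true);
        // Lower HDFS replication for this job's files if space is tight (default is 3)
        conf.set("dfs.replication", "2");

        Job job = Job.getInstance(conf, "compressed output example");
        job.setJarByClass(CompressedOutputDriver.class);
        // ... set mapper/reducer classes, key/value types, input path here ...

        // Compress the final reducer output as well
        job.setOutputFormatClass(SequenceFileOutputFormat.class);
        SequenceFileOutputFormat.setCompressOutput(job, true);
        SequenceFileOutputFormat.setOutputCompressorClass(job, GzipCodec.class);
        FileOutputFormat.setOutputPath(job, new Path(args[1]));

        System.exit(job.waitForCompletion(true) ? 0 : 1);
    }
}

Compressing the intermediate output reduces what is spilled to local disk during the shuffle; compressing the final output reduces the size of the files the reducers write.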

It may also be that some reducers receive far more data than others, so check your code for skew.
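
One quick way to spot that skew, sketched below for a new-API reducer (the counter group/name and the word-count-style types are made up for illustration): bump a counter for every record a reduce call consumes, then compare the per-task counter totals in the job history UI. Reducers that receive far more records than the rest are the ones most likely to run out of local disk.

import java.io.IOException;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Reducer;

public class SkewAwareReducer extends Reducer<Text, LongWritable, Text, LongWritable> {
    @Override
    protected void reduce(Text key, Iterable<LongWritable> values, Context context)
            throws IOException, InterruptedException {
        long sum = 0;
        long count = 0;
        for (LongWritable v : values) {
            sum += v.get();
            count++;
        }
        // Per-task counter: compare this across reduce tasks in the job history UI
        // to see whether a few reducers get a disproportionate share of the records.
        context.getCounter("SkewCheck", "RECORDS").increment(count);
        context.write(key, new LongWritable(sum));
    }
}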
