Hadoop Official Documentation Translation: MapReduce (Part 2)

Reducer

A Reducer reduces a set of intermediate values that share a key to a smaller set of values.

The number of reduces for a job is set by the user via Job.setNumReduceTasks(int).

Overall, Reducer implementations are passed the Job for the job via the Job.setReducerClass(Class) method and can override it to initialize themselves. The framework then calls the reduce(WritableComparable, Iterable, Context) method for each pair in the grouped inputs, and applications can override the cleanup(Context) method to perform any required cleanup.
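
As a concrete illustration, here is a minimal word-count style Reducer sketch. The class name WordCountReducer and the Text/IntWritable types are illustrative assumptions rather than anything prescribed by the documentation; the setup, reduce, and cleanup hooks are the ones described above.

    import java.io.IOException;
    import org.apache.hadoop.io.IntWritable;
    import org.apache.hadoop.io.Text;
    import org.apache.hadoop.mapreduce.Reducer;

    public class WordCountReducer
            extends Reducer<Text, IntWritable, Text, IntWritable> {

        private final IntWritable result = new IntWritable();

        @Override
        protected void setup(Context context) {
            // One-time initialization before the first reduce() call.
        }

        @Override
        protected void reduce(Text key, Iterable<IntWritable> values, Context context)
                throws IOException, InterruptedException {
            // Called once per grouped key, with every value sharing that key.
            int sum = 0;
            for (IntWritable v : values) {
                sum += v.get();
            }
            result.set(sum);
            context.write(key, result); // output typically lands in the FileSystem
        }

        @Override
        protected void cleanup(Context context) {
            // One-time teardown after the last reduce() call.
        }
    }

It would be registered on the job with job.setReducerClass(WordCountReducer.class).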

A Reducer has three primary phases: shuffle, sort, and reduce.

Shuffle

Input to the Reducer is the sorted output of the mappers. In this phase the framework fetches the relevant partition of the output of all the mappers via HTTP.

Sort

In this stage the framework groups Reducer inputs by key (since different mappers may have output the same key).

The shuffle and sort phases occur simultaneously; map outputs are merged as they are fetched.

Secondary Sort

If the equivalence rules for grouping the intermediate keys need to differ from those for grouping keys before reduction, a Comparator can be specified via Job.setSortComparatorClass(Class). Since Job.setGroupingComparatorClass(Class) can be used to control how intermediate keys are grouped, the two can be used in conjunction to simulate a secondary sort on values.
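
A hedged sketch of how the two comparators might be wired together for a secondary sort; FullKeyComparator and NaturalKeyGroupingComparator are hypothetical RawComparator implementations, and only the two Job setters come from the text.

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.mapreduce.Job;

    public class SecondarySortSetup {
        public static void main(String[] args) throws Exception {
            Job job = Job.getInstance(new Configuration(), "secondary-sort");
            // Sort comparator: orders the full composite key (natural key
            // first, then the value part), so values reach reduce() in order.
            job.setSortComparatorClass(FullKeyComparator.class);
            // Grouping comparator: compares only the natural key, so all
            // records sharing it are handed to a single reduce() call.
            job.setGroupingComparatorClass(NaturalKeyGroupingComparator.class);
        }
    }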

Reduce

In this phase the reduce(WritableComparable, Iterable, Context) method is called for each pair in the grouped inputs.

The output of a reduce task is typically written to the FileSystem via Context.write(WritableComparable, Writable).

Applications can use a Counter to report statistics.

The output of the Reducer is not sorted.

How Many Reduces?

A suitable total number of reduces appears to be between 0.95 and 1.75 times (number of nodes × maximum number of containers per node).

With a factor of 0.95, all of the reduces can launch immediately and begin transferring map outputs as the maps finish. With 1.75, the faster nodes finish their first round of reduces and launch a second wave of reduces, achieving much better load balancing.
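
As a worked example with assumed cluster numbers (10 nodes and 8 containers per node, neither figure from the text): the 0.95 factor gives ⌊10 × 8 × 0.95⌋ = 76 reduces, while 1.75 gives ⌊10 × 8 × 1.75⌋ = 140. A minimal sketch of applying the rule:

    import org.apache.hadoop.mapreduce.Job;

    public class ReduceCount {
        // Applies the 0.95 rule of thumb; nodes and containersPerNode are
        // values the operator must supply, not anything the API reports.
        static void configure(Job job, int nodes, int containersPerNode) {
            int reduces = (int) (nodes * containersPerNode * 0.95);
            job.setNumReduceTasks(reduces); // e.g. 10 nodes * 8 containers -> 76
        }
    }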

Increasing the number of reduces increases the framework overhead, but improves load balancing and lowers the cost of failures.

The scaling factors above are slightly less than whole numbers in order to reserve a few reduce slots in the framework for speculative tasks and failed tasks.

Reducer NONE

It is legal to set the number of reduce tasks to zero when no reduction is desired.

In this case the outputs of the map tasks go directly to the FileSystem, into the output path set by FileOutputFormat.setOutputPath(Job, Path). The framework does not sort the map outputs before writing them to the FileSystem.
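
A minimal sketch of a map-only job configuration; the output path /out is a hypothetical placeholder, while the two calls are the ones named above.

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.Path;
    import org.apache.hadoop.mapreduce.Job;
    import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

    public class MapOnlySetup {
        public static void main(String[] args) throws Exception {
            Job job = Job.getInstance(new Configuration(), "map-only");
            job.setNumReduceTasks(0); // no reduce phase at all
            // Map output goes straight to this path, unsorted.
            FileOutputFormat.setOutputPath(job, new Path("/out")); // assumed path
        }
    }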

Partitioner

A Partitioner partitions the key space.

The Partitioner controls how the keys of the intermediate map outputs are partitioned before they reach the Reducer. The partition is derived from the key (or a subset of the key), typically via a hash function. The total number of partitions equals the job's number of reduce tasks, so this determines which reduce task each intermediate key, and hence its records, is sent to.

HashPartitioner is the default Partitioner.
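
For illustration, here is a hash-based Partitioner written out by hand; it mirrors the sign-masked hashCode-modulo scheme the default HashPartitioner is generally described as using, and the class name is hypothetical.

    import org.apache.hadoop.mapreduce.Partitioner;

    public class SketchPartitioner<K, V> extends Partitioner<K, V> {
        @Override
        public int getPartition(K key, V value, int numPartitions) {
            // Mask off the sign bit, then take the remainder modulo the
            // reduce count, giving a partition in [0, numPartitions).
            return (key.hashCode() & Integer.MAX_VALUE) % numPartitions;
        }
    }

It would be enabled with job.setPartitionerClass(SketchPartitioner.class); without that call the default HashPartitioner is used.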

Counter

A Counter is a facility for MapReduce applications to report their statistics.

Mapper and Reducer implementations can use Counters to report statistics.
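
A minimal sketch of reporting a statistic from inside a Mapper; the counter group and name ("Quality", "EMPTY_LINES") and the mapper itself are illustrative, while context.getCounter(...).increment(...) is the standard call.

    import java.io.IOException;
    import org.apache.hadoop.io.LongWritable;
    import org.apache.hadoop.io.Text;
    import org.apache.hadoop.mapreduce.Mapper;

    public class CountingMapper
            extends Mapper<LongWritable, Text, Text, LongWritable> {
        @Override
        protected void map(LongWritable key, Text value, Context context)
                throws IOException, InterruptedException {
            if (value.toString().trim().isEmpty()) {
                // Tally empty input lines as a job-level statistic.
                context.getCounter("Quality", "EMPTY_LINES").increment(1);
                return;
            }
            context.write(value, new LongWritable(1));
        }
    }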

Hadoop MapReduce comes bundled with a library of generally useful mappers, reducers, and partitioners.

Below is the original text.

Reducer

Reducer reduces a set of intermediate values which share a key to a smaller set of values.

The number of reduces for the job is set by the user via Job.setNumReduceTasks(int).

Overall, Reducer implementations are passed the Job for the job via the Job.setReducerClass(Class) method and can override it to initialize themselves. The framework then calls the reduce(WritableComparable, Iterable, Context) method for each pair in the grouped inputs. Applications can then override the cleanup(Context) method to perform any required cleanup.

Reducer has 3 primary phases: shuffle, sort and reduce.

Shuffle

Input to the Reducer is the sorted output of the mappers. In this phase the framework fetches the relevant partition of the output of all the mappers, via HTTP.

Sort

The framework groups Reducer inputs by keys (since different mappers may have output the same key) in this stage.

The shuffle and sort phases occur simultaneously; while map-outputs are being fetched they are merged.

Secondary Sort

If equivalence rules for grouping the intermediate keys are required to be different from those for grouping keys before reduction, then one may specify a Comparator via Job.setSortComparatorClass(Class). Since Job.setGroupingComparatorClass(Class) can be used to control how intermediate keys are grouped, these can be used in conjunction to simulate secondary sort on values.

Reduce

In this phase the reduce(WritableComparable, Iterable, Context) method is called for each pair in the grouped inputs.

The output of the reduce task is typically written to the FileSystem via Context.write(WritableComparable, Writable).

Applications can use the Counter to report its statistics.

The output of the Reducer is not sorted.

How Many Reduces?

The right number of reduces seems to be 0.95 or 1.75 multiplied by (<no. of nodes> * <no. of maximum containers per node>).

With 0.95 all of the reduces can launch immediately and start transferring map outputs as the maps finish. With 1.75 the faster nodes will finish their first round of reduces and launch a second wave of reduces doing a much better job of load balancing.

Increasing the number of reduces increases the framework overhead, but increases load balancing and lowers the cost of failures.

The scaling factors above are slightly less than whole numbers to reserve a few reduce slots in the framework for speculative-tasks and failed tasks.

Reducer NONE

It is legal to set the number of reduce-tasks to zero if no reduction is desired.

In this case the outputs of the map-tasks go directly to the FileSystem, into the output path set by FileOutputFormat.setOutputPath(Job, Path). The framework does not sort the map-outputs before writing them out to the FileSystem.

Partitioner

Partitioner partitions the key space.

Partitioner controls the partitioning of the keys of the intermediate map-outputs. The key (or a subset of the key) is used to derive the partition, typically by a hash function. The total number of partitions is the same as the number of reduce tasks for the job. Hence this controls which of the m reduce tasks the intermediate key (and hence the record) is sent to for reduction.

HashPartitioner is the default Partitioner.

Counter

Counter is a facility for MapReduce applications to report its statistics.

Mapper and Reducer implementations can use the Counter to report statistics.

Hadoop MapReduce comes bundled with a library of generally useful mappers, reducers, and partitioners.

*Please kindly point out, and forgive, any errors caused by my limited translation ability.

    Original author: _和_
    Original article: https://www.jianshu.com/p/5ac1b483b049
    This article is reposted from the web solely to share knowledge; if it infringes any rights, please contact the blogger to have it removed.