【Spark Java API】Action (1): reduce, aggregate

reduce

Official documentation:

Reduces the elements of this RDD using the specified commutative and associative binary operator.

Function signature:

def reduce(f: JFunction2[T, T, T]): T

Using the binary function f (which must be commutative and associative), reduce combines the elements of the RDD pairwise and returns the result.
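
The requirement matters: with a function that is not commutative and associative (subtraction, for example, via (v1, v2) -> v1 - v2), the result can vary with the number of partitions and the order in which partition results arrive at the driver.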

Source code analysis:

def reduce(f: (T, T) => T): T = withScope {
  val cleanF = sc.clean(f)
  val reducePartition: Iterator[T] => Option[T] = iter => {
    if (iter.hasNext) {
      Some(iter.reduceLeft(cleanF))
    } else {
      None
    }
  }
  var jobResult: Option[T] = None
  val mergeResult = (index: Int, taskResult: Option[T]) => {
    if (taskResult.isDefined) {
      jobResult = jobResult match {
        case Some(value) => Some(f(value, taskResult.get))
        case None => taskResult
      }
    }
  }
  sc.runJob(this, reducePartition, mergeResult)
  // Get the final result out of our Option, or throw an exception if the RDD was empty
  jobResult.getOrElse(throw new UnsupportedOperationException("empty collection"))
}

As the source code shows, reduce effectively applies reduceLeft to the elements of each partition (reduceLeft folds the collection from left to right), and the per-partition results are then merged on the driver. Consequently, if there are very many partitions or the user-defined function is expensive, the load on the driver can become heavy.
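
To make the two-stage structure concrete, here is a minimal local sketch in plain Java (no Spark involved) of the pattern above: reduceLeft inside each partition, followed by a driver-side merge of the per-partition Options. The partition layout is assumed for illustration; Spark decides the actual split.

import java.util.Arrays;
import java.util.List;
import java.util.Optional;
import java.util.function.BinaryOperator;

public class ReduceSketch {
    public static void main(String[] args) {
        // Assume the seven elements land in three partitions like this (the real split is up to Spark).
        List<List<Integer>> partitions = Arrays.asList(
                Arrays.asList(5, 1), Arrays.asList(1, 4), Arrays.asList(4, 2, 2));
        BinaryOperator<Integer> f = Integer::sum;

        Optional<Integer> jobResult = Optional.empty();
        for (List<Integer> partition : partitions) {
            // reducePartition: reduceLeft over one partition, empty if the partition has no elements
            Optional<Integer> taskResult = partition.stream().reduce(f);
            // mergeResult: merge each task's result into the running job result on the driver
            if (taskResult.isPresent()) {
                jobResult = jobResult.isPresent()
                        ? Optional.of(f.apply(jobResult.get(), taskResult.get()))
                        : taskResult;
            }
        }
        // Prints 19, the same value the Spark example below produces
        System.out.println(jobResult.orElseThrow(() -> new UnsupportedOperationException("empty collection")));
    }
}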

Example:

// Required imports (for a complete program, wrap the statements below in a main method)
import java.util.Arrays;
import java.util.List;
import org.apache.spark.SparkConf;
import org.apache.spark.api.java.JavaRDD;
import org.apache.spark.api.java.JavaSparkContext;
import org.apache.spark.api.java.function.Function2;

// The app name and master below are placeholders for a local run
SparkConf sparkConf = new SparkConf().setAppName("ReduceExample").setMaster("local[*]");
JavaSparkContext javaSparkContext = new JavaSparkContext(sparkConf);

List<Integer> data = Arrays.asList(5, 1, 1, 4, 4, 2, 2);
JavaRDD<Integer> javaRDD = javaSparkContext.parallelize(data, 3);

Integer reduceRDD = javaRDD.reduce(new Function2<Integer, Integer, Integer>() {
    @Override
    public Integer call(Integer v1, Integer v2) throws Exception {
        return v1 + v2;
    }
});
System.out.println("~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~" + reduceRDD);
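
The job prints 19, the sum of all seven elements. Since Function2 has a single abstract method, in Java 8 and later the same action can be written with a lambda (same javaRDD as above):

Integer reduceRDD = javaRDD.reduce((v1, v2) -> v1 + v2);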

aggregate

Official documentation:

Aggregate the elements of each partition, and then the results for all the partitions, 
using given combine functions and a neutral "zero value". This function can return a different result type, U, 
than the type of this RDD, T. Thus, we need one operation for merging a T into an U and one operation for merging two U's, 
as in scala.TraversableOnce. Both of these functions are allowed to modify and return their first argument 
instead of creating a new U to avoid memory allocation.

Function signature:

def aggregate[U](zeroValue: U)(seqOp: JFunction2[U, T, U],  combOp: JFunction2[U, U, U]): U

aggregate first folds the elements within each partition, then merges the per-partition results; the return type U does not have to match the element type T of the RDD.

Source code analysis:

def aggregate[U: ClassTag](zeroValue: U)(seqOp: (U, T) => U, combOp: (U, U) => U): U = withScope {  
  // Clone the zero value since we will also be serializing it as part of tasks  
  var jobResult = Utils.clone(zeroValue, sc.env.serializer.newInstance())  
  val cleanSeqOp = sc.clean(seqOp)  
  val cleanCombOp = sc.clean(combOp)  
  val aggregatePartition = (it: Iterator[T]) => it.aggregate(zeroValue)(cleanSeqOp, cleanCombOp)  
  val mergeResult = (index: Int, taskResult: U) => jobResult = combOp(jobResult, taskResult)  
  sc.runJob(this, aggregatePartition, mergeResult)  
  jobResult
}

As the source code shows, aggregate applies the Scala collection's aggregate operation to each partition, and then uses combOp to merge the per-partition results on the driver.
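
Note that jobResult on the driver is initialized from a clone of zeroValue, and every partition's aggregation also starts from zeroValue, so the zero value is folded into the final result numPartitions + 1 times. It should therefore be a neutral element of seqOp and combOp (for example 0 for addition or 1 for multiplication); otherwise the result depends on the number of partitions, as the example below demonstrates.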

Example:

JavaSparkContext javaSparkContext = new JavaSparkContext(sparkConf);  // same SparkConf as in the reduce example
List<Integer> data = Arrays.asList(5, 1, 1, 4, 4, 2, 2);
JavaRDD<Integer> javaRDD = javaSparkContext.parallelize(data, 3);
Integer aggregateRDD = javaRDD.aggregate(2, new Function2<Integer, Integer, Integer>() {
    // seqOp: folds each element of a partition into that partition's accumulator
    @Override
    public Integer call(Integer v1, Integer v2) throws Exception {
        return v1 + v2;
    }
}, new Function2<Integer, Integer, Integer>() {
    // combOp: merges the per-partition results on the driver
    @Override
    public Integer call(Integer v1, Integer v2) throws Exception {
        return v1 + v2;
    }
});
System.out.println("~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~" + aggregateRDD);
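
Here the zero value is 2 rather than a neutral element, so the partition count shows up in the result: the elements sum to 19, and with 3 partitions the zero value is added 3 + 1 = 4 times, giving 27 (19 + 4 × 2).

aggregate is most useful when the result type U differs from the element type T. As an illustrative sketch (assuming scala.Tuple2 is imported and the same javaRDD as above), a sum and a count can be computed in one pass and turned into an average:

Tuple2<Integer, Integer> sumCount = javaRDD.aggregate(
    new Tuple2<Integer, Integer>(0, 0),
    new Function2<Tuple2<Integer, Integer>, Integer, Tuple2<Integer, Integer>>() {
        @Override
        public Tuple2<Integer, Integer> call(Tuple2<Integer, Integer> acc, Integer v) throws Exception {
            // seqOp: fold one element (T) into the accumulator (U)
            return new Tuple2<Integer, Integer>(acc._1() + v, acc._2() + 1);
        }
    },
    new Function2<Tuple2<Integer, Integer>, Tuple2<Integer, Integer>, Tuple2<Integer, Integer>>() {
        @Override
        public Tuple2<Integer, Integer> call(Tuple2<Integer, Integer> a, Tuple2<Integer, Integer> b) throws Exception {
            // combOp: merge two partial accumulators (U)
            return new Tuple2<Integer, Integer>(a._1() + b._1(), a._2() + b._2());
        }
    });
double avg = (double) sumCount._1() / sumCount._2();  // 19 / 7 ≈ 2.71
System.out.println("sum = " + sumCount._1() + ", count = " + sumCount._2() + ", avg = " + avg);
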
Original author: 小飞_侠_kobe
Original article: https://www.jianshu.com/p/de88317c8b83
This article is reposted from the web to share knowledge; in case of infringement, please contact the blogger to have it removed.