I have two DataFrames in my Spark (v1.5.0) code:

aDF = [user_id: Int, user_purchases: array<int>]
bDF = [user_id: Int, user_purchases: array<int>]

What I want to do is join the two DataFrames, but keep only the rows where the intersection of aDF.user_purchases and bDF.user_purchases contains more than two elements (intersection size > 2).

Do I have to use the RDD API, or can I do this with something from org.apache.spark.sql.functions?
Best answer: I don't see a built-in function for this, but you can use a UDF:
import org.apache.spark.sql.functions.{col, udf}
import scala.collection.mutable.WrappedArray

// UDF that counts how many elements of `a` also appear in `b`
val intersect = udf { (a: WrappedArray[Int], b: WrappedArray[Int]) =>
  a.count(b.contains)
}
// test data sets
val one = sc.parallelize(List(
  (1, Array(1, 2, 3)),
  (2, Array(1, 2, 3, 4)),
  (3, Array(1, 2, 3)),
  (4, Array(1, 2))
)).toDF("user", "arr")
val two = sc.parallelize(List(
  (1, Array(1, 2, 3)),
  (2, Array(1, 2, 3, 4)),
  (3, Array(1, 2, 3)),
  (4, Array(1))
)).toDF("user", "arr")
// usage: equi-join on the user id, then filter on the intersection size
one.join(two, one("user") === two("user"))
  .select(one("user"), intersect(one("arr"), two("arr")).as("intersect"))
  .where(col("intersect") > 2)
  .show
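With the test data above, this keeps users 1, 2 and 3 (intersection sizes 3, 4 and 3) and drops user 4, whose arrays share only the element 1.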
// version from the comments: cross join, comparing every pair of users
one.join(two)
  .select(one("user"), two("user"), intersect(one("arr"), two("arr")).as("intersect"))
  .where(col("intersect") > 2)
  .show
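For comparison, the same filter can also be written against the RDD API the question mentions. A minimal sketch, reusing the one and two DataFrames built above (the column positions in getInt/getSeq follow the toDF("user", "arr") schema there):

// RDD version: drop to RDD[Row], rebuild (user, purchases) pairs,
// join on the user id, and filter on the overlap count
val oneRdd = one.rdd.map(r => (r.getInt(0), r.getSeq[Int](1)))
val twoRdd = two.rdd.map(r => (r.getInt(0), r.getSeq[Int](1)))

oneRdd.join(twoRdd)
  .filter { case (_, (a, b)) => a.count(b.contains) > 2 }
  .keys            // the user ids that survive the filter
  .collect()
  .foreach(println)

As a side note, if you can move past 1.5: Spark 2.4 added a built-in array_intersect, so the UDF becomes unnecessary there:

// Spark 2.4+ only: array_intersect is built in; size has existed since 1.5
import org.apache.spark.sql.functions.{array_intersect, size}
one.join(two, one("user") === two("user"))
  .where(size(array_intersect(one("arr"), two("arr"))) > 2)
  .show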