Spark uses two mechanisms to provide fault tolerance for distributed datasets: Lineage (which the book translates as "bloodline") and CheckPoint. Lineage is essentially similar to a database redo log, except that this log is very coarse-grained: it recovers data by redoing the computation of entire RDD partitions.
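To make the lineage concrete: every RDD records the chain of transformations that produced it, and that chain can be inspected with toDebugString. Below is a minimal sketch (the input file test.txt and the map step are illustrative, not from the book):

// Minimal sketch: printing an RDD's lineage
SparkConf conf = new SparkConf().setAppName("lineage.demo").setMaster("local[*]");
JavaSparkContext sc = new JavaSparkContext(conf);
JavaRDD<String> lines = sc.textFile("test.txt");
// A narrow transformation layered on top of the file RDD
JavaRDD<Integer> lengths = lines.map(s -> s.length());
// toDebugString lists the parent RDDs Spark would replay to rebuild lost partitions
System.out.println(lengths.toDebugString());
sc.close();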
Under this fault-tolerance model, if a node in the cluster dies and the computation involves only narrow dependencies, recovery merely recomputes the lost parent RDD partitions and does not involve any other node. With a wide dependency, however, all partitions of the parent RDD must be recomputed, which is far more expensive. Spark therefore lets you set a checkpoint to persist the ancestor RDD data before the Shuffle and cut the dependency chain; when data is lost, it is restored directly from the checkpoint. To ensure the checkpoint itself survives a node failure, checkpoint data is written to disk, typically as HDFS files.
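As a quick sketch of the difference (illustrative code, not from the book): map keeps a narrow, one-to-one mapping between child and parent partitions, while reduceByKey introduces a wide dependency whose recovery requires all parent partitions:

// Assume sc is an existing JavaSparkContext
JavaRDD<String> words = sc.textFile("test.txt");
// Narrow: a lost partition of upper is rebuilt from exactly one partition of words
JavaRDD<String> upper = words.map(s -> s.toUpperCase());
JavaPairRDD<String, Integer> counts = upper
        .mapToPair(s -> new Tuple2<>(s, 1))   // still narrow
        .reduceByKey((a, b) -> a + b);        // wide: a shuffle over all parent partitions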
The official recommendation is to cache the RDD in memory before checkpointing it; otherwise writing the checkpoint triggers a full recomputation and extra I/O overhead.
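In code the recommended ordering is simply this (a fragment sketch, where rdd stands for any RDD about to be checkpointed):

rdd.cache();       // keep the computed partitions in memory
rdd.checkpoint();  // only marks the RDD; nothing is written yet
rdd.count();       // the first action computes the RDD and writes the checkpoint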
demo9 below shows how to set up and use a checkpoint:
package com.yzy.spark;

import org.apache.spark.SparkConf;
import org.apache.spark.api.java.JavaPairRDD;
import org.apache.spark.api.java.JavaRDD;
import org.apache.spark.api.java.JavaSparkContext;
import org.apache.spark.api.java.function.PairFlatMapFunction;
import scala.Tuple2;

import java.util.ArrayList;
import java.util.Iterator;
import java.util.List;

public class demo9 {
    private static String appName = "spark.demo";
    private static String master = "local[*]";

    public static void main(String[] args) {
        JavaSparkContext sc = null;
        try {
            // Initialize the JavaSparkContext
            SparkConf conf = new SparkConf().setAppName(appName).setMaster(master);
            sc = new JavaSparkContext(conf);

            // Set the checkpoint directory (a Windows path in this example)
            sc.setCheckpointDir("E:\\check");

            // Build an RDD from test.txt and split each line into <word, 1> pairs
            JavaRDD<String> rdd = sc.textFile("test.txt");
            JavaPairRDD<String, Integer> pairRDD = rdd.flatMapToPair(new PairFlatMapFunction<String, String, Integer>() {
                public Iterator<Tuple2<String, Integer>> call(String s) throws Exception {
                    List<Tuple2<String, Integer>> list = new ArrayList<Tuple2<String, Integer>>();
                    String[] arr = s.split("\\s");
                    for (String ele : arr) {
                        list.add(new Tuple2<String, Integer>(ele, 1));
                    }
                    return list.iterator();
                }
            }).cache();

            // Mark pairRDD for checkpointing
            pairRDD.checkpoint();
            System.out.println("isCheckpointed:" + pairRDD.isCheckpointed());
            System.out.println("checkpoint:" + pairRDD.getCheckpointFile());

            // The action triggers the computation and the checkpoint write
            pairRDD.collect();
            System.out.println("isCheckpointed:" + pairRDD.isCheckpointed());
            System.out.println("checkpoint:" + pairRDD.getCheckpointFile());
        } catch (Exception e) {
            e.printStackTrace();
        } finally {
            if (sc != null) {
                sc.close();
            }
        }
    }
}
Output:
isCheckpointed:false
checkpoint:Optional.empty
isCheckpointed:true
checkpoint:Optional[file:/E:/check/6c933408-176a-4117-bfb1-6172b510e7be/rdd-2]
As the output shows, the first pair of lines reports an empty checkpoint: no action has run yet, so the RDD has not been computed and no data has been recorded. After collect executes, the checkpoint has been written, and the second pair of lines confirms it.
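One way to observe the effect on the dependency chain (an extra check, not part of the original demo): print pairRDD.toDebugString() before and after the action. Once the checkpoint has been written, the printed lineage ends at a CheckpointRDD instead of reaching all the way back to test.txt:

System.out.println(pairRDD.toDebugString());  // full lineage back to the text file
pairRDD.collect();                            // the action computes the RDD and writes the checkpoint
System.out.println(pairRDD.toDebugString());  // lineage now truncated at a CheckpointRDD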
demo10 shows how to use a checkpoint in Spark Streaming. The key difference from demo9 is JavaStreamingContext.getOrCreate, which recreates the context from the checkpoint directory if one already exists:
package com.yzy.spark;

import org.apache.spark.SparkConf;
import org.apache.spark.api.java.function.FlatMapFunction;
import org.apache.spark.api.java.function.Function0;
import org.apache.spark.api.java.function.Function2;
import org.apache.spark.api.java.function.PairFunction;
import org.apache.spark.streaming.Durations;
import org.apache.spark.streaming.api.java.JavaDStream;
import org.apache.spark.streaming.api.java.JavaPairDStream;
import org.apache.spark.streaming.api.java.JavaReceiverInputDStream;
import org.apache.spark.streaming.api.java.JavaStreamingContext;
import scala.Tuple2;

import java.util.Arrays;
import java.util.Iterator;

public class demo10 {
    private static String appName = "spark.streaming.demo";
    private static String master = "local[*]";
    private static String host = "localhost";
    private static int port = 9999;

    public static void main(String[] args) {
        String checkpointDir = "E:\\check";
        // Rebuild the context from the checkpoint directory if one exists,
        // otherwise create a fresh one via createContext
        JavaStreamingContext ssc = JavaStreamingContext.getOrCreate(checkpointDir, createContext(appName, checkpointDir));

        // Start the job
        ssc.start();
        try {
            ssc.awaitTermination();
        } catch (InterruptedException e) {
            e.printStackTrace();
        }
    }

    public static Function0<JavaStreamingContext> createContext(final String appName, final String checkpointDir) {
        return new Function0<JavaStreamingContext>() {
            @Override
            public JavaStreamingContext call() throws Exception {
                // Initialize the SparkConf
                SparkConf sparkConf = new SparkConf().setMaster(master).setAppName(appName);

                // Create the JavaStreamingContext with a 3-second batch interval
                JavaStreamingContext ssc = new JavaStreamingContext(sparkConf, Durations.seconds(3));
                // Enable checkpointing for the streaming context
                ssc.checkpoint(checkpointDir);

                // Read data from the socket source
                JavaReceiverInputDStream<String> lines = ssc.socketTextStream(host, port);

                // Split each line into words
                JavaDStream<String> words = lines.flatMap(new FlatMapFunction<String, String>() {
                    public Iterator<String> call(String s) throws Exception {
                        return Arrays.asList(s.split(" ")).iterator();
                    }
                });

                // Map each word to a <word, 1> pair
                JavaPairDStream<String, Integer> pairs = words.mapToPair(new PairFunction<String, String, Integer>() {
                    public Tuple2<String, Integer> call(String s) throws Exception {
                        return new Tuple2<String, Integer>(s, 1);
                    }
                }).cache();

                // Sum the counts per word
                JavaPairDStream<String, Integer> wordCounts = pairs.reduceByKey(new Function2<Integer, Integer, Integer>() {
                    public Integer call(Integer integer, Integer integer2) throws Exception {
                        return integer + integer2;
                    }
                });

                wordCounts.print();
                return ssc;
            }
        };
    }
}
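To try demo10, start a socket text server on port 9999 before launching the job, for example with netcat (nc -lk 9999 on Linux/macOS), and type words into that terminal. If the process is then killed and restarted, getOrCreate finds the existing checkpoint under E:\check and rebuilds the streaming context from it instead of calling createContext again.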
After the job runs, the checkpoint files generated during execution can be found under the E:/check directory.