How should task parallelism be set when Spark Streaming consumes Kafka through the Receiver mechanism?

(This article is based on Spark 2.1.1, Kafka 0.10.2, Scala 2.11.8, Zookeeper 3.4.9, and Kafka-manager 1.3.0.7.)

To receive data through the Receiver mechanism, you need the spark-streaming-kafka-0-8_2.11-2.1.1.jar dependency:

 groupId = org.apache.spark
 artifactId = spark-streaming-kafka-0-8_2.11
 version = 2.1.1

Let's first write a piece of narrow-dependency code that creates a single Receiver with KafkaUtils.createStream, look at how the task parallelism can be tuned, and explain it point by point:

/**
 * Created by wangliang on 2017/5/18.
 */
/*
 * Licensed to the Apache Software Foundation (ASF) under one or more
 * contributor license agreements.  See the NOTICE file distributed with
 * this work for additional information regarding copyright ownership.
 * The ASF licenses this file to You under the Apache License, Version 2.0
 * (the "License"); you may not use this file except in compliance with
 * the License.  You may obtain a copy of the License at
 *
 *    http://www.apache.org/licenses/LICENSE-2.0
 *
 * Unless required by applicable law or agreed to in writing, software
 * distributed under the License is distributed on an "AS IS" BASIS,
 * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
 * See the License for the specific language governing permissions and
 * limitations under the License.
 */

import java.util.Arrays;
import java.util.Iterator;
import java.util.Map;
import java.util.HashMap;
import java.util.regex.Pattern;
import java.util.List;
import java.util.ArrayList;
import org.apache.spark.api.java.JavaPairRDD;
import org.apache.spark.api.java.JavaRDD;
import org.apache.spark.api.java.function.*;
import scala.Tuple2;

import org.apache.spark.SparkConf;
import org.apache.spark.streaming.Duration;
import org.apache.spark.streaming.api.java.JavaDStream;
import org.apache.spark.streaming.api.java.JavaPairDStream;
import org.apache.spark.streaming.api.java.JavaPairReceiverInputDStream;
import org.apache.spark.streaming.api.java.JavaStreamingContext;
import org.apache.spark.streaming.kafka.KafkaUtils;

/**
 * Consumes messages from one or more topics in Kafka and does wordcount.
 *
 * Usage: app_kafka_receiver_spark <zkQuorum> <group> <topics> <numThreads>
 *   <zkQuorum> is a list of one or more zookeeper servers that make quorum
 *   <group> is the name of kafka consumer group
 *   <topics> is a list of one or more kafka topics to consume from
 *   <numThreads> is the number of threads the kafka consumer should use
 */

public final class app_kafka_receiver_spark {
    private static final Pattern SPACE = Pattern.compile(" ");

    private app_kafka_receiver_spark() {
    }

    public static void main(String[] args) throws Exception {
        if (args.length < 4) {
            System.err.println("Usage: app_kafka_receiver_spark <zkQuorum> <group> <topics> <numThreads>");
            System.exit(1);
        }

        org.apache.log4j.Logger.getRootLogger().setLevel(org.apache.log4j.Level.toLevel("WARN"));

        SparkConf sparkConf = new SparkConf().setAppName("app_kafka_receiver_spark");
        // Create the context with 4 seconds batch size
        JavaStreamingContext jssc = new JavaStreamingContext(sparkConf, new Duration(4000));

        System.out.println("spark.default.parallelism is " + jssc.sparkContext().defaultParallelism());

        int numThreads = Integer.parseInt(args[3]);
        Map<String, Integer> topicMap = new HashMap<>();
        String[] topics = args[2].split(",");
        for (String topic: topics) {
            topicMap.put(topic, numThreads);
        }

        JavaPairReceiverInputDStream<String, String> messages =
                KafkaUtils.createStream(jssc, args[0], args[1], topicMap);

        JavaDStream<String> lines = messages.map(new Function<Tuple2<String, String>, String>() {
            @Override
            public String call(Tuple2<String, String> tuple2) {
                return tuple2._2();
            }
        });

        JavaDStream<String> words = lines.flatMap(new FlatMapFunction<String, String>() {
            @Override
            public Iterator<String> call(String x) {
                return Arrays.asList(SPACE.split(x)).iterator();
            }
        });

        words.foreachRDD(new VoidFunction<JavaRDD<String>>() {
            @Override
            public void call(JavaRDD<String> stringJavaRDD) throws Exception {
                System.out.println("the RDD count is " + stringJavaRDD.count());
            }
        });

        jssc.start();
        jssc.awaitTermination();
    }
}

Submit the application:

spark-submit --class app_kafka_receiver_spark --master spark://wl1:7077 app_051801.jar wl1:2181 group1 051801 4
Parameter description:
<zkQuorum>   = "wl1:2181" is a list of one or more zookeeper servers that make quorum
<group>      = "group1"   is the name of kafka consumer group
<topics>     = "051801"   is a list of one or more kafka topics to consume from
<numThreads> = "4"        is the number of threads the kafka consumer should use

Let's take a closer look at the numThreads parameter:

It controls how many threads the task that runs the Receiver uses to fetch messages from the topic's partitions. For example:
If a topic has 10 partitions and your consumer application reads it with only one thread, that single thread reads from all 10 partitions.
Likewise, with 5 threads, each thread reads from 2 partitions.
Likewise, with 10 threads, each thread is responsible for exactly 1 partition.
Likewise, with as many as 14 threads, 10 of them split the 10 partitions among themselves and the remaining 4 sit idle.
Because a Receiver occupies only one core, the thread count set here does not actually increase read throughput; the reads are effectively serial. Nor does this thread count affect the number of concurrent tasks in a stage.
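To make the thread-to-partition split concrete, here is a minimal sketch. The helper below is hypothetical (it is not a Spark or Kafka API); it simply mirrors the range-style distribution described above.

/** Hypothetical helper mirroring the partition split described above; not a Spark/Kafka API. */
public class ReceiverThreadSplitSketch {

    /** How many of totalPartitions the thread at threadIndex would read. */
    static int partitionsForThread(int totalPartitions, int numThreads, int threadIndex) {
        int base = totalPartitions / numThreads;   // every thread gets at least this many
        int extra = totalPartitions % numThreads;  // the first `extra` threads get one more
        return base + (threadIndex < extra ? 1 : 0);
    }

    public static void main(String[] args) {
        // 10 partitions, 5 threads  -> every thread reads 2 partitions
        // 10 partitions, 14 threads -> threads 0..9 read 1 partition each, 10..13 stay idle
        for (int t = 0; t < 14; t++) {
            System.out.println("thread " + t + " reads "
                    + partitionsForThread(10, 14, t) + " partition(s)");
        }
    }
}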
  • You can see that the Executor on 172.16.2.132 acts as the Receiver

    [Screenshot: Driver Web UI, Streaming page]

  • You can see that this Receiver uses 1 core

    [Screenshot: Driver Web UI, Executors page]

  • Observing the consumer through Kafka-manager, you can see that group1_wl4-1495174597611-34b932d (i.e., the Receiver on 172.16.2.132) uses 4 threads to consume the 8 partitions of topic 051801, with each thread handling 2 topic partitions

    [Screenshot: consumer status observed through Kafka-manager]

  • You can see that once data is read, Spark processes it with 40 tasks, which directly shows that task parallelism is unrelated to the numThreads parameter above. Also, spark.default.parallelism is set to 8 in this environment's spark-defaults.conf, so task parallelism is unrelated to spark.default.parallelism as well (so what does determine it? Read on.)

    [Screenshot: Driver Web UI, Jobs page]

The code above sets the batch interval to 4000 ms, and the data of each batch forms one RDD:

// Create the context with 4 seconds batch size
JavaStreamingContext jssc = new JavaStreamingContext(sparkConf, new Duration(4000));

spark-defaults.conf in this environment has the following settings:

spark.streaming.blockInterval       100
spark.streaming.blockQueueSize      10

Because the Receiver mechanism pulls Kafka messages as blocks into the heap memory managed by each Executor's BlockManager, the number of blocks handled per batch is the number of partitions of that RDD, i.e., the task parallelism.

  • Task parallelism = (Batch Interval) / (spark.streaming.blockInterval)
    This is where the 40 tasks per stage in the screenshot above come from: 4000 ms / 100 ms = 40 (see the configuration sketch after this list).

  • spark.streaming.blockQueueSize sets the size of the queue that holds blocks. Its value affects the execution and scheduling latency of each batch; it does not directly determine task parallelism, but it can affect it indirectly when the queue blocks.
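As a minimal sketch (the class name is made up for illustration; the values mirror the spark-defaults.conf entries above, only set programmatically on the SparkConf), the relationship between the two intervals and the task parallelism can be expressed directly:

import org.apache.spark.SparkConf;

public class BlockIntervalSketch {
    public static void main(String[] args) {
        // The same settings as the spark-defaults.conf entries above, expressed in code;
        // this conf would be passed into the JavaStreamingContext of the full example.
        SparkConf conf = new SparkConf()
                .setAppName("block-interval-sketch")
                .set("spark.streaming.blockInterval", "100")   // milliseconds
                .set("spark.streaming.blockQueueSize", "10");

        long batchIntervalMs = 4000;  // new Duration(4000) in the example above
        long blockIntervalMs = 100;   // spark.streaming.blockInterval

        // One block per block interval, one task per block:
        System.out.println("tasks per stage with a single Receiver: "
                + (batchIntervalMs / blockIntervalMs));        // 4000 / 100 = 40
    }
}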

As an aside, a brief note on how the Receiver location is chosen:

There is currently no way to designate a particular node to play the Receiver role. Following a least-load principle, Spark picks the node with the most free resources (for example, no tasks started, the fewest running tasks, or the most unused cores) to run the Receiver; if all nodes are equally idle, it chooses one in a round-robin fashion.
So you cannot predict with much accuracy which node will end up acting as the Receiver.

As noted earlier, the numThreads parameter does not increase the throughput with which Spark Streaming fetches messages. So how can that throughput be raised, and how does the corresponding task parallelism change?
This is where multiple Receivers come in: they fetch from Kafka concurrently and increase message throughput. The code is as follows:

/**
 * Created by wangliang on 2017/5/18.
 */
/*
 * Licensed to the Apache Software Foundation (ASF) under one or more
 * contributor license agreements.  See the NOTICE file distributed with
 * this work for additional information regarding copyright ownership.
 * The ASF licenses this file to You under the Apache License, Version 2.0
 * (the "License"); you may not use this file except in compliance with
 * the License.  You may obtain a copy of the License at
 *
 *    http://www.apache.org/licenses/LICENSE-2.0
 *
 * Unless required by applicable law or agreed to in writing, software
 * distributed under the License is distributed on an "AS IS" BASIS,
 * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
 * See the License for the specific language governing permissions and
 * limitations under the License.
 */

import java.util.Arrays;
import java.util.Iterator;
import java.util.Map;
import java.util.HashMap;
import java.util.regex.Pattern;
import java.util.List;
import java.util.ArrayList;
import org.apache.spark.api.java.JavaPairRDD;
import org.apache.spark.api.java.JavaRDD;
import org.apache.spark.api.java.function.*;
import scala.Tuple2;

import org.apache.spark.SparkConf;
import org.apache.spark.streaming.Duration;
import org.apache.spark.streaming.api.java.JavaDStream;
import org.apache.spark.streaming.api.java.JavaPairDStream;
import org.apache.spark.streaming.api.java.JavaPairReceiverInputDStream;
import org.apache.spark.streaming.api.java.JavaStreamingContext;
import org.apache.spark.streaming.kafka.KafkaUtils;

/**
 * Consumes messages from one or more topics in Kafka and does wordcount.
 *
 * Usage: app_kafka_mul_receiver_spark <zkQuorum> <group> <topics> <numThreads>
 *   <zkQuorum> is a list of one or more zookeeper servers that make quorum
 *   <group> is the name of kafka consumer group
 *   <topics> is a list of one or more kafka topics to consume from
 *   <numThreads> is the number of threads the kafka consumer should use
 */

public final class app_kafka_mul_receiver_spark {
    private static final Pattern SPACE = Pattern.compile(" ");

    private app_kafka_mul_receiver_spark() {
    }

    public static void main(String[] args) throws Exception {
        if (args.length < 4) {
            System.err.println("Usage: app_kafka_mul_receiver_spark <zkQuorum> <group> <topics> <numThreads>");
            System.exit(1);
        }

        org.apache.log4j.Logger.getRootLogger().setLevel(org.apache.log4j.Level.toLevel("WARN"));

        SparkConf sparkConf = new SparkConf().setAppName("app_kafka_mul_receiver_spark");
        // Create the context with 4 seconds batch size
        JavaStreamingContext jssc = new JavaStreamingContext(sparkConf, new Duration(4000));

        int numThreads = Integer.parseInt(args[3]);
        Map<String, Integer> topicMap = new HashMap<>();
        String[] topics = args[2].split(",");
        for (String topic: topics) {
            topicMap.put(topic, numThreads);
        }

        int numStreams = 4;
        List<JavaPairDStream<String, String>> kafkaStreams = new ArrayList<JavaPairDStream<String, String>>(numStreams);
        for (int i = 0; i < numStreams; i++) {
            kafkaStreams.add(KafkaUtils.createStream(jssc, args[0], args[1], topicMap));
        }
        JavaPairDStream<String, String> unifiedStream = jssc.union(kafkaStreams.get(0), kafkaStreams.subList(1, kafkaStreams.size()));
        
        JavaDStream<String> lines = unifiedStream.map(new Function<Tuple2<String, String>, String>() {
            @Override
            public String call(Tuple2<String, String> tuple2) {
                return tuple2._2();
            }
        });

        JavaDStream<String> words = lines.flatMap(new FlatMapFunction<String, String>() {
            @Override
            public Iterator<String> call(String x) {
                return Arrays.asList(SPACE.split(x)).iterator();
            }
        });

        words.foreachRDD(new VoidFunction<JavaRDD<String>>() {
            @Override
            public void call(JavaRDD<String> stringJavaRDD) throws Exception {
                System.out.println("the RDD count is " + stringJavaRDD.count());
            }
        });

        jssc.start();
        jssc.awaitTermination();
    }
}

As the code shows, 4 Receivers are created and then combined into a new DStream with the union operator:

int numStreams = 4;
List<JavaPairDStream<String, String>> kafkaStreams = new ArrayList<JavaPairDStream<String, String>>(numStreams);
for (int i = 0; i < numStreams; i++) {
    kafkaStreams.add(KafkaUtils.createStream(jssc, args[0], args[1], topicMap));
}
JavaPairDStream<String, String> unifiedStream = jssc.union(kafkaStreams.get(0), kafkaStreams.subList(1, kafkaStreams.size()));

Submit the application:

spark-submit --class app_kafka_mul_receiver_spark --master spark://wl1:7077 app_051801.jar wl1:2181 group1 051801 1
Parameter description:
<zkQuorum>   = "wl1:2181" is a list of one or more zookeeper servers that make quorum
<group>      = "group1"   is the name of kafka consumer group
<topics>     = "051801"   is a list of one or more kafka topics to consume from
<numThreads> = "1"        is the number of threads the kafka consumer should use
  • You can see that 4 Receivers were started and assigned to Executor tasks on 4 nodes

    [Screenshot: Driver Web UI, Streaming page]

  • Kafka-manager shows that 4 Receivers were started, and each Receiver's single thread maps to 2 topic partitions

    [Screenshot: Kafka-manager, Consumers page]

  • The result is shown below: one stage has roughly 160 tasks. As noted earlier, with 1 Receiver the processing task parallelism is 40, so with 4 Receivers it becomes 4 * 40 = 160, giving the formula

Task parallelism = ((Batch Interval) / (spark.streaming.blockInterval)) * Receiver Number

[Screenshot: Driver Web UI, Jobs page]

Careful readers may notice that each stage in the screenshot has 150-odd tasks rather than the full 160. The explanation: if the producer delivers messages too slowly, or the block queue backs up, some blocks may contain no data or fewer blocks may be produced than expected, so a stage sometimes ends up with fewer tasks than the computed number. This is harmless.

How do you adjust the task parallelism after multiple Receivers are unioned?

  • Use the repartition operator to produce a DStream with a new number of partitions
Transformation: repartition(numPartitions)
Meaning: Changes the level of parallelism in this DStream by creating more or fewer partitions.

Note that using the repartition operator to either increase or decrease the number of partitions introduces a shuffle.

The following uses the repartition operator to set the partition count to 30:

JavaPairDStream<String, String> unifiedStream =
        jssc.union(kafkaStreams.get(0), kafkaStreams.subList(1, kafkaStreams.size()));

JavaPairDStream<String, String> reunifiedStream = unifiedStream.repartition(30);

JavaDStream<String> lines = reunifiedStream.map(new Function<Tuple2<String, String>, String>() {
    @Override
    public String call(Tuple2<String, String> tuple2) {
        return tuple2._2();
    }
});

The result is shown below: the repartition transformation produces a wide dependency and creates shuffle Stage 67, whose task parallelism is the configured 30:

[Screenshot: Driver Web UI, Stages page]

OK. Next, let's look at the task parallelism of the shuffle stage when an operator such as reduceByKey introduces a wide dependency.

Transformation: reduceByKey(func, [numTasks])
Meaning: When called on a DStream of (K, V) pairs, return a new DStream of (K, V) pairs where the values for each key are aggregated using the given reduce function. Note: By default, this uses Spark's default number of parallel tasks (2 for local mode, and in cluster mode the number is determined by the config property spark.default.parallelism) to do the grouping. You can pass an optional numTasks argument to set a different number of tasks.

As you can see, there are three ways to influence the task parallelism of the shuffle stage (a sketch of the spark.default.parallelism option follows this list):

  • When spark.default.parallelism is not configured, the default value is used (in cluster mode, the default parallelism is the total number of cores across all Executors)
  • Set spark.default.parallelism
  • Pass the numTasks argument
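As a minimal sketch of the second option (the value 16 is an arbitrary illustration, not a value used in this article's environment), spark.default.parallelism can be set on the SparkConf before the streaming context is created, or passed to spark-submit via --conf spark.default.parallelism=16:

// Sketch of option 2: replace the SparkConf construction in main() above with this.
// The value 16 is only an illustration; any positive integer works.
SparkConf sparkConf = new SparkConf()
        .setAppName("app_kafka_mul_receiver_spark")
        .set("spark.default.parallelism", "16");
JavaStreamingContext jssc = new JavaStreamingContext(sparkConf, new Duration(4000));
// With no explicit numTasks, reduceByKey would then run 16 shuffle tasks per batch.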

I went with the numTasks argument and set the shuffle-stage task parallelism to 9:

JavaPairDStream<String, Integer> wordCounts = words.mapToPair(
                new PairFunction<String, String, Integer>() {
                    @Override
                    public Tuple2<String, Integer> call(String s) {
                        return new Tuple2<>(s, 1);
                    }
                }).reduceByKey(new Function2<Integer, Integer, Integer>() {
            @Override
            public Integer call(Integer i1, Integer i2) {
                return i1 + i2;
            }
        }, 9);

The result is shown below: the shuffle stage runs with the configured task parallelism of 9:

[Screenshot: Driver Web UI, Stages page]

    Original author: 俺是亮哥
    Original article: https://www.jianshu.com/p/c8669261165a
    This article is reposted from the web to share knowledge only; if it infringes any rights, please contact the blog owner to have it removed.